SotW: Check Consequentialism

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-29T01:35:13.161Z · LW · GW · Legacy · 309 comments

Contents

  Exercise Prize:  Check Consequentialism
  Discussion:
  Pain points & Pluses:
  Teaching & exercises:

(The Exercise Prize series of posts is the Center for Applied Rationality asking for help inventing exercises that can teach cognitive skills.  The difficulty is coming up with exercises interesting enough, with a high enough hedonic return, that people actually do them and remember them; this often involves standing up and performing actions, or interacting with other people, not just working alone with an exercise booklet and a pencil.  We offer prizes of $50 for any suggestion we decide to test, and $500 for any suggestion we decide to adopt.  This prize also extends to LW meetup activities and good ideas for verifying that a skill has been acquired.  See here for details.)


Exercise Prize:  Check Consequentialism

In philosophy, "consequentialism" is the belief that doing the right thing makes the world a better place, i.e., that actions should be chosen on the basis of their probable outcomes.  It seems like the mental habit of checking consequentialism, asking "What positive future events does this action cause?", would catch numerous cognitive fallacies.

For example, the mental habit of consequentialism would counter the sunk cost fallacy - if a PhD wouldn't really lead to much in the way of desirable job opportunities or a higher income, and the only reason you're still pursuing your PhD is that otherwise all your previous years of work will have been wasted, you will find yourself encountering a blank screen at the point where you try to imagine a positive future outcome of spending another two years working toward your PhD - you will not be able to state what good future events happen as a result.

Or consider the problem of living in the should-universe; if you're thinking, I'm not going to talk to my boyfriend about X because he should know it already, you might be able to spot this as an instance of should-universe thinking (planning/choosing/acting/feeling as though within / by-comparison-to an image of an ideal perfect universe) by having done exercises specifically to sensitize you to should-ness.  Or, if you've practiced the more general skill of Checking Consequentialism, you might notice a problem on asking "What happens if I talk / don't talk to my boyfriend?" - providing that you're sufficiently adept to constrain your consequentialist visualization to what actually happens as opposed to what should happen.

Discussion:

The skill of Checking Consequentialism isn't quite as simple as telling people to ask, "What positive result do I get?"  By itself, this mental query is probably going to return any apparent justification - for example, in the sunk-cost-PhD example, asking "What good thing happens as a result?" will just return, "All my years of work won't have been wasted!  That's good!"  Any choice people are tempted by seems good for some reason, and executing a query about "good reasons" will just return this.

The novel part of Checking Consequentialism is the ability to discriminate "consequentialist reasons" from "non-consequentialist reasons" - being able to distinguish that "Because a PhD gets me a 50% higher salary" talks about future positive consequences, while "Because I don't want my years of work to have been wasted" doesn't.

It's possible that asking "At what time does the consequence occur and how long does it last?" would be useful for distinguishing future-consequences from non-future-consequences - if you take a bad-thing like "I don't want my work to have been wasted" and ask "When does it occur, where does it occur, and how long does it last?", you will with luck notice the error.
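
A minimal sketch of that query in code (the reasons and tags below are invented for illustration, not part of any actual exercise): tag each reason with when, where, and how long its consequence occurs, and a reason with nothing to put in those slots is the tell-tale non-consequentialist one.

```python
# Hypothetical illustration: tag each reason with the time/place/duration of
# the consequence it names; a reason with no such slots isn't consequentialist.

reasons = {
    "A PhD gets me a 50% higher salary": {
        "when": "after graduation", "where": "at work", "how_long": "years"},
    "My years of work will have been wasted": {
        "when": None, "where": None, "how_long": None},  # no future event to point at
}

for text, consequence in reasons.items():
    kind = "non-consequentialist" if consequence["when"] is None else "consequentialist"
    print(f"{kind}: {text}")
```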

Learning to draw cause-and-effect directed graphs, a la Judea Pearl and Bayes nets, seems like it might be helpful - at least, Geoff was doing this while trying to teach strategicness and the class seemed to like it.
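
For concreteness, here is a hypothetical toy version of such a graph (node names invented), where an action's consequentialist case is just the set of future events reachable from it:

```python
# Toy cause-and-effect directed graph in the Pearl / Bayes-net spirit:
# nodes are events, edges point from an action to the future events it causes.

causes = {
    "finish PhD": ["higher salary", "academic job options"],
    "higher salary": ["more savings"],
}

def downstream(action, graph):
    """Collect every future event reachable from an action."""
    seen, stack = set(), [action]
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

print(downstream("finish PhD", causes))
# {'higher salary', 'academic job options', 'more savings'}
```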

Sometimes non-consequentialist reasons can be rescued as consequentialist ones.  "You shouldn't kill because it's the wrong thing to do" can be rescued as "Because then a person will transition from 'alive' to 'dead' in the future, and this is a bad event" or "Because the interval between Outcome A and Outcome B includes the interval from Fred alive to Fred dead."

On a five-second level, the skill would have to include: noticing that you're weighing or justifying a choice; running the query "What positive future events does this action cause?"; and discriminating the consequentialist answers from the non-consequentialist ones.

In practice, it may be obvious that you're making a mistake as soon as you think to check consequences.  I have 'living in the should-universe' or 'sunk cost fallacy' cached to the point where as soon as I spot an error of that pattern, it's usually pretty obvious (without further deliberative thought) what the residual reasons are and whether I was doing it wrong.

Pain points & Pluses:

(When generating a candidate kata, almost the first question we ask - directly after the selection of a topic, like 'consequentialism' - is, "What are the pain points?  Or pleasure points?"  This can be errors you've made yourself and noticed afterward, or even cases where you've noticed someone else doing it wrong, but ideally cases where you use the skill in real life.  Since a lot of rationality is in fact about not screwing up, there may not always be pleasure points where the skill is used in a non-error-correcting, strictly positive context; but it's still worth asking each time.  We ask this question right at the beginning because it (a) checks to see how often the skill is actually important in real life and (b) provides concrete use-cases to focus discussion of the skill.)

Pain points:

Checking Consequentialism looks like it should be useful for countering errors like the sunk cost fallacy and living in the should-universe, as in the examples above.

(This list is not intended to be exhaustive.)

Pleasure points:

Consequentialism is the foundation of expected utility, which is the foundation of instrumental rationality - this is why we're considering it as an early unit.  (This is not directly listed as a "pleasure point" because it is not directly a use-case.)

Constantly asking about consequences seems likely to improve overall strategicness - not just lead to the better of two choices being taken from a fixed decision-set, but also having goals in mind that can generate new perceived choices, i.e., improve the overall degree to which people do things for reasons, as opposed to not doing things or not having reasons.  (But this is a hopeful eventual positive consequence of practicing the skill, not a use-case where the skill is directly being applied.)

Teaching & exercises:

This is the part that's being thrown open to Less Wrong generally.  Hopefully I've described the skill in enough detail to convey what it is.  Now, how would you practice it?  How would you have an audience practice it, hopefully in activities carried out with each other?

The dumb thing I tried to do previously was to have exercises along the lines of, "Print up a booklet with little snippets of scenarios in them, and ask people to circle non-consequentialist reasoning, then try to either translate it to consequentialist reasons or say that no consequentialist reasons could be found."  I didn't do that for this exact session, but if you look at what I did with the sunk cost fallacy, it's the same sort of silly thing I tried to do.

This didn't work very well - maybe the exercises were too easy, or maybe it was that people were doing it alone, or maybe we did something else wrong, but the audience appeared to experience insufficient hedonic return.  They were, in lay terms, unenthusiastic.

At this point I should like to pause, and tell a recent and important story.  On Saturday I taught an 80-minute unit on Bayes's Rule to an audience of non-Sequence-reading experimental subjects, who were mostly programmers or people in other technical fields, so I could go through the math fairly fast.  Afterward, though, I was worried that they hadn't really learned to apply Bayes's Rule and wished I had a little pamphlet of practice problems to hand out.  I still think this would've been a good idea, but...

On Wednesday, I attended Andrew Critch's course at Berkeley, which was roughly mostly-instrumental LW-style cognitive-improvement material aimed at math students; and in this particular session, Critch introduced Bayes's Theorem, not as advanced math, but with the aim of getting them to apply it to life.

Critch demonstrated using what he called the Really Getting Bayes game.  He had Nisan (a local LWer) touch an object to the back of Critch's neck, a cellphone as it happened, while Critch faced in the other direction; this was "prior experience".  Nisan said that the object was either a cellphone or a pen.  Critch gave prior odds of 60% : 40% that the object was a cellphone vs. pen, based on his prior experience.  Nisan then asked Critch how likely he thought it was that a cellphone or a pen would be RGB-colored, i.e., colored red, green, or blue.  Critch didn't give exact numbers here, but said he thought a cellphone was more likely to be primary-colored, and drew some rectangles on the blackboard to illustrate the likelihood ratio.  After being told that the object was in fact primary-colored (the cellphone was metallic blue), Critch gave posterior odds of 75% : 25% in favor of the cellphone, and then turned around to look.

Then Critch broke up the class into pairs and asked each pair to carry out a similar operation on each other:  Pick two plausible objects and make sure you're holding at least one of them, touch it to the other person while they face the other way, prior odds, additional fact, likelihood ratio, posterior odds.
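
The arithmetic behind the game is just odds multiplication. A sketch with Critch's numbers (the 2 : 1 likelihood ratio is back-calculated from his stated prior and posterior, since he didn't give exact figures):

```python
# Posterior odds = prior odds x likelihood ratio, elementwise, then renormalized.

def update_odds(prior_odds, likelihood_ratio):
    posterior = (prior_odds[0] * likelihood_ratio[0],
                 prior_odds[1] * likelihood_ratio[1])
    total = sum(posterior)
    return tuple(round(100 * p / total) for p in posterior)

# Prior: 60 : 40 that the object is a cellphone rather than a pen.
# Evidence: the object is RGB-colored, assumed ~2x as likely for a cellphone.
print(update_odds((60, 40), (2, 1)))  # (75, 25), matching Critch's posterior
```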

This is the sort of in-person, hands-on, real-life, and social exercise that didn't occur to me, or Anna, or anyone else helping, while we were trying to design the Bayes's Theorem unit.  Our brains just didn't go in that direction, though we recognized it as embarrassingly obvious in retrospect.

So... how would you design an exercise to teach Checking Consequentialism?

309 comments

Comments sorted by top scores.

comment by Axel · 2012-03-24T15:01:50.691Z · LW(p) · GW(p)

So... how would you design an exercise to teach Checking Consequentialism?

I would check to see if such a thing already exists or if there are people who have experience designing such things. I know of a Belgian non-profit 'Center for Informative Games' that not only rents games designed to teach certain skills but will also help you create your own.

From their site: On request, C.I.S. develops games for others. The applicant provides the content of the game, while C.I.S. develops the conceptual and game-technical parts. The applicant has the opportunity to attend some game tests and to redirect when necessary.

They also offer coaching if you want to work on your own: Do you want to create a game concept on your own, but you don't know where to start? No worries, C.I.S. can give you a hand. During a number of concrete working sessions C.I.S. facilitates your development. In between sessions the applicant continues the work to, finally, end up with a solid game.

I have enjoyed their games in the past and can attest to their quality. The obvious problem is that it's a purely Belgian organization and the 'search' function only works with Dutch words. However, if you want to check them out I'd be happy to act as a go-between. For the past couple of months there has even been a Brussels LW meetup, and I'm certain I could get a couple of members to help in the production process (again, if this seems interesting).

comment by HonoreDB · 2012-03-25T05:31:27.146Z · LW(p) · GW(p)

In a group, with a leader who knows the exercise:

Get a volunteer to act as a judge (or a few to act as a jury, in a large group). Have her leave the room. The leader presents the rest with a short set of Contrived Hypothetical Situations, each with finite options and either clearly-defined outcomes for each option, or a probabilistic distribution of outcomes for each option. The leader says, "Please write down your choice for each problem, sign your paper, and turn it in to me. Then I'll call in the judge, and have her decide on each problem. You get a point wherever her decision agrees with yours. The winner is the one with the most points." When the judge is called in, however, the leader doesn't tell her the actual problems. Rather, the leader just reports the outcomes (or distributions), and asks her to choose which outcome or distribution is best. The winners are announced based on that.

Example: One of the situations given is some variant of the trolley problem. When the judge comes in, she is just asked whether she'd prefer one person to get hit by a trolley, or five. Everybody laughs as she replies "...one?"

Example: The problem given to the group is "You drive 45 minutes away from home to go to a new restaurant for dinner. When you get there, you discover that you dislike the ambience and the selection is poor. You remember that you have decent leftovers at home. You're mildly hungry. Do you try the restaurant anyway (25-minute wait, 10% very enjoyable meal, 10% decent meal, 80% unenjoyable meal) or just head back home (5-minute-prep once you get home, 100% chance decent meal)?" The problem given to the judge is "You're mildly hungry. In 25 minutes, you can have a meal that is (10% very enjoyable, 10% decent, 80% unenjoyable). Or, in 50 minutes, you can have a guaranteed decent meal."
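
Scoring the judge's version can be made fully mechanical by comparing expected utilities over the outcome lotteries. A minimal sketch, with made-up utility numbers (the scenario specifies only the probabilities and wait times, and the wait times are ignored here for simplicity):

```python
# Hypothetical utilities: 10 = very enjoyable meal, 5 = decent, 1 = unenjoyable.

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

try_restaurant = [(0.10, 10), (0.10, 5), (0.80, 1)]  # 25-minute wait
head_home = [(1.00, 5)]                              # 50 minutes total

print(expected_utility(try_restaurant))  # 2.3
print(expected_utility(head_home))       # 5.0 -> head home wins on these numbers
```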

Replies from: JKR, army1987, AspiringKnitter, Zvi
comment by JKR · 2012-04-03T19:20:21.556Z · LW(p) · GW(p)

I think this is a fantastic idea, with a patch that is much easier than those suggested by the other responses. Simply tell everyone that for the purposes of this exercise, only that information directly presented in the example is to be considered. People sometimes overlook relevant information or clever third options, and these situations are to be judged only based on the data being considered by the hypothetical person in the given scenario.

If there is any concern about this set up encouraging people to think about things with an insufficient amount of thoroughness, you can save some time at the end for a just-for-fun period where everyone gets to offer their clever workarounds and extra information that would have changed what the proper decision was, had it been considered.

comment by A1987dM (army1987) · 2012-03-25T19:30:48.766Z · LW(p) · GW(p)

Two details the judge isn't told about are 1) you would have to pay for the former meal, but not for the latter, and 2) if you stay in the restaurant, you gain useful information you'll be able to take into account the next time you might want to eat there.

Replies from: HonoreDB
comment by HonoreDB · 2012-03-26T15:33:03.304Z · LW(p) · GW(p)

1) is patchable by specifying that the leftovers are non-perishable, so eating them is equivalent to buying a meal.

2) Either the judge is told that the variable meal is repeatable if it's good, or we specify in the group problem that you're not going back there no matter what.

comment by AspiringKnitter · 2012-04-03T19:44:37.541Z · LW(p) · GW(p)

Couldn't the problems others have brought up regarding this scenario be fixed by specifying that this is your last meal ever before the world ends tomorrow morning before breakfast? Then neither information nor money is valuable anymore.

Replies from: dlthomas
comment by dlthomas · 2012-04-03T19:59:45.682Z · LW(p) · GW(p)

I think I'd make a decision other than "try that new restaurant on the outskirts of town" for the evening before the world ends. If I don't know the world is going to end, then my decision now mightn't be optimal in light of that additional information (maybe that still tests something interesting, but it isn't quite the same thing).

Replies from: AspiringKnitter
comment by AspiringKnitter · 2012-04-04T02:59:51.077Z · LW(p) · GW(p)

Hmm. That could be a good point. If the world were ending, I probably wouldn't waste time on a sit-down meal.

How about if it's your last day in the country and you'll be fleeing to escape religious persecution tomorrow, taking nothing with you?

comment by Zvi · 2012-03-27T21:10:36.437Z · LW(p) · GW(p)

If you stay, you gain information about the restaurant. There's the dollar cost of dining out. It's actually not as easy as it looks to generate a "clean" example.

How much need we worry about excluding consequences we can't consciously list and/or quantify?

Replies from: Raemon
comment by Raemon · 2012-03-27T21:28:56.607Z · LW(p) · GW(p)

I patched this example by saying "you're on vacation in another city", so the value of information is mostly negligible.

But yeah, it's still pretty hard. Also, ideally not all of our examples end up being instances of sunk-cost-fallacy.

comment by fubarobfusco · 2012-03-24T01:50:07.671Z · LW(p) · GW(p)

The novel part of Checking Consequentialism is the ability to discriminate "consequentialist reasons" from "non-consequentialist reasons" - being able to distinguish that "Because a PhD gets me a 50% higher salary" talks about future positive consequences, while "Because I don't want my years of work to have been wasted" doesn't.

It's possible that asking "At what time does the consequence occur and how long does it last?" would be useful for distinguishing future-consequences from non-future-consequences - if you take a bad-thing like "I don't want my work to have been wasted" and ask "When does it occur, where does it occur, and how long does it last?", you will with luck notice the error.

"Intense, long, certain, speedy, fruitful, pure—
Such marks in pleasures and in pains endure.

— Jeremy Bentham's mnemonic for the signs of "consequentialist reasons"

EDIT: It occurs to me that I should explain this more. Bentham was trying to popularize consequentialism and remind his readers of what sorts of things count as consequentialist reasons to prefer a particular outcome. Eliezer suggests that we should ask about the closeness in time and the duration of a consequence. Bentham mentions these ("speedy" and "long") but also includes some others:

  • Intensity — How great of a pleasure or benefit does this consequence bring?
  • Certainty — How sure are we that it will actually happen? Is it a relatively sure thing, or just a chance at one?
  • Fruitfulness — Will this pleasure be followed by others of the same kind, or will it exhaust itself?
  • Purity — Is this a "pure pleasure" or a "mixed blessing"? What are the downsides; what's the catch?
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-24T00:02:45.753Z · LW(p) · GW(p)

Cleverness-related failure mode (that actually came up in the trial unit):

One shouldn't try too hard to rescue non-consequentialist reasons. This probably has to be emphasized especially with new audiences who associate "rationality" with Spock and university professors, or audiences who've studied pre-behavioral economics, and who think they score extra points if they come up with amazingly clever ways to rescue bad ideas.

Any decision-making algorithm, no matter how stupid, can be made to look like expected utility maximization through the transform "Assign infinite negative utility to departing from decision algorithm X". This in essence is what somebody is doing when they say, "Aha! But if I stop my PhD program now, I'll have the negative consequence of having abandoned a sunk cost!" (Sometimes I feel like hitting people with a wooden stick when they do this, but that act just expresses an emotion rather than having any discernible positive consequences.) This is Cleverly Failing to Get the Point if "not wanting to abandon a sunk cost", i.e., the counterintuitive feel of departing from the brain's previous decision algorithm, is treated as an overriding consideration, i.e., an infinite negative utility.

It's a legitimate future consequence only if the person says, "The sense of having abandoned a sunk cost will make me feel sick to my stomach for around three days, after which I would start to adjust and adapt a la the hedonic treadmill". In this case they have weighed the intensity and the duration of the future hedonic consequence, rather than treating it as an instantaneous infinite negative penalty, and are now ready to trade that off against other and probably larger considerations like the total amount of work required to get a PhD.
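
A toy illustration of the degenerate transform (entirely hypothetical, just restating the point in code): assign negative infinity to every action except whatever the old algorithm already outputs, and "maximizing expected utility" reproduces the old algorithm exactly, regardless of real consequences.

```python
# Any fixed decision rule looks like expected-utility maximization under this
# rigged utility function; no actual future consequences are consulted.

def rigged_utility(action, favored_action):
    return 0.0 if action == favored_action else float("-inf")

actions = ["stay in PhD program", "leave PhD program"]
favored = "stay in PhD program"  # whatever the brain's previous algorithm says

print(max(actions, key=lambda a: rigged_utility(a, favored)))
# -> 'stay in PhD program', always
```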

Replies from: fubarobfusco, pjeby, None, GDC3, jimmy, Strange7, Will_Newsome
comment by fubarobfusco · 2012-03-24T04:33:10.317Z · LW(p) · GW(p)

This probably has to be emphasized especially with new audiences who associate "rationality" with Spock and university professors, or audiences who've studied pre-behavioral economics, and who think they score extra points if they come up with amazingly clever ways to rescue bad ideas.

One of the other models people have for the rationalizing sort of "rationality" is that of lawyers.

Lawyers are very good at logic — the LSAT, the entrance examination for U.S. law schools, leans heavily on logic puzzles — but the whole point of being a trial or appeals lawyer is to come up with clever (and socially respectable) arguments for whatever position your client may have at the moment.

This extends past real-world lawyerhood. The tabletop role-playing game crowd have the expression "rules lawyer" for a person who comes up with clever arguments for why their character should get away with whatever they want to at the moment.

Replies from: pnrjulius
comment by pnrjulius · 2012-04-04T20:50:36.902Z · LW(p) · GW(p)

Indeed I think this is the central problem with the way most people use their powers of reasoning. (It even has a name: "the argumentative theory of reason".) They start with a conclusion, and work backwards to find rational (or at least rational-sounding) ways of supporting that conclusion.

We all do this automatically; it may be the very thing our brains evolved to do. We have to work very hard to get ourselves to do the opposite, start with evidence and use reasoning based on the evidence to decide on our conclusion. I'd say most scientists manage to do this right maybe half the time, and most laypeople almost never manage it.

comment by pjeby · 2012-03-24T19:38:36.628Z · LW(p) · GW(p)

(Sometimes I feel like hitting people with a wooden stick when they do this, but that act just expresses an emotion rather than having any discernible positive consequences.)

My normal response is, "So what's bad about that?" and to go a few rounds until the person has to struggle for an answer... the teachable moment where I can say, "You see what you're doing? You're just making stuff up. What's actually going to happen?"

(That being said, it would definitely have been helpful for me in the past if I had thought to confine questions of consequences to things happening at a point-in-time. I eventually figured out that I needed to ask that for things people were thinking about or remembering, but there was a long time where I also had the hit-them-with-a-stick frustration to this kind of response.)

The only suggestion I have for exercises is to make people write down their own thinking (or state their thinking out loud), and then read it back as a kind of grammar-checking exercise. Are these abstract nouns or concrete nouns? Do they describe a point in time or some sort of vague non-timey thing?

I've done some similar things with small groups, though, and one thing that becomes quickly apparent is that everybody already knows when somebody else is doing it wrong. The part of the exercise that's hard is learning to apply it to your own thoughts or utterances, and for that, it helps to externalize them first, then treat them as input.

To put it another way, the prerequisite 5-second skill for consequence checking is reflecting on what you just said or thought. If people don't reflect on their utterances, no further debiasing skills can be applied.

comment by [deleted] · 2012-03-29T19:42:19.883Z · LW(p) · GW(p)

This is perhaps ironic because I have been going through precisely this PhD sunk-cost problem for the past few months, but regret bias is a serious part of behavioral psychology. I've been dissatisfied with the direction that publication standards are moving in my current field (computer vision) for a while, and as a result have had a tough time finding an adviser/project match that would let me do things at a more abstract mathematical level. No one is very interested in those papers. Ultimately, over a two-year period, I reasoned that it was better for me to leave the PhD program, find a job that allowed me to pursue certain goals, and to leave research ideas to my own spare time. The single most difficult hurdle in reaching this decision was feeling very worried that I would regret leaving my institution (Harvard) because everyone tells me that a PhD from Harvard "opens lots of doors" and lots of people who I trust and think are non-trivially intelligent have insisted that unpleasantly sticking it out in the PhD program just to obtain the credential is absolutely the best thing.

My own assessment is that I will do just fine without that particular credential and that being able to use personal time to pursue the research I care about, even if I ultimately am not talented enough to publish any of it on my own, will be more fulfilling. But this was a damn hard conclusion to come by. I felt stressed and nervous, concerned that I will hate my future job's working conditions and beat myself up over not sticking it out at Harvard. I largely made it into Harvard through sheer, stupid ability to work unreasonably long hours to self-teach. That is, by stubbornly never quitting; it's not easy, however rational I wish to be, to feel free of these kinds of self-identity stigmas (e.g. don't be a quitter).

I guess what I'm trying to say is that perceived future pain of regretting a decision is a legitimate consequence to consider. And sometimes that is absolutely a consequence that one should wish to avoid. To offer another example from my own life, a family member became pregnant unexpectedly while she was an unmarried 19-year-old college student. After many talks about the situation in general, I was asked what my own opinion was about the option of getting an abortion. I said it seemed like a reasonable option and might ultimately be the best thing, obviously modulo the person's personal beliefs. Ultimately, however, this family member chose not to get the abortion because of the counterfactual regret of having terminated a potential life.

The person said, "if I get an abortion, then in the future I will remember that I did that thing and (as far as I can tell right now) I will always feel visceral pain about that." That is a legitimate future consequence.

I think the problem that you want to isolate is different than just regret bias. I think the problem you want to address is the fact that a person's current self is usually slavishly in the service of the remembering self, as Kahneman puts it. We buy things because we think they will provide lots of utility, but then a few months later we don't even use them any more. We prefer to keep our hands in painfully cold water for a longer total time as long as the last bit of the time includes warmer water (and thus fosters a more pleasant transitional memory). And we think we will regret something a lot more than we really will.

You want to design exercises that bring about a stark comparison between how you think you will remember something vs. the actual facts of the matter. And then focus on situations where the first component (how you think you will remember something) actually should matter (perhaps an abortion is a good example of that), and how the cognitive machinery that applies to problems like that one is completely inappropriate for problems like "should I upgrade to the new iPad2 because of the shinier screen", yet we use the same cognitive software for both problems.

This sort of thing has been looked at w.r.t. dietary decisions before. I believe the results showed that when you are under a cognitive load, you'll make consistently poorer snack choices than when you are not being asked to answer hard questions. Imagine how much more this would be influenced by the stresses of a situation like anguishing about whether to leave a PhD program.

I'm not optimistic that there is an easy way to address this. It seems to fit in with Hanson's near/far mode ideas as well. When in near mode, we'll be more capable of isolating practical constraints and consequences of a decision. But if a question immediately puts us in far mode, it's much harder.

Consider the difference between "Should I leave my PhD program?" and "Should Jeff leave Jeff's PhD program?" As much as people fail to pick out the consequences in their own decisions, I would suspect it's far worse when migrated to another person's issue. We tend to give advice in far mode, but always expect to receive advice in near mode.

There's just a lot to disentangle about this. My opinion is that it would be better to break up the problem of "Why don't people make sound consequentialist decisions?" into a bunch of smaller domain-specific sub-problems, and then to build the small tests around recognizing those sub-issues. Once people are good at dealing with any given sub-issue conditioned on the event that they recognize that they are in that sub-issue, then move on to exercises that teach you how to recognize potential sub-issues.

comment by GDC3 · 2012-03-25T02:33:29.768Z · LW(p) · GW(p)

I think it's important to try to convert the reason to a consequentialist reason every time, actually; it's just that one isn't done at that point - you still have to step back and decide if the reason is enough. As with the murder example, one needs to avoid dismissing reasons merely for being in the wrong format.

"I don't want to tell my boyfriend because he should already know" translates to: in the universe in which I tell my boyfriend he learns to rely on me to tell him these things a little more and his chance of doing this sort of thing without my asking decreases in the future. You then have to ask if this supposed effect is really true and if the negative consequence is strong enough, which depends on things like the chances that he'll eventually figure it out. But converting the reason gets you answering the right questions.

Sunk cost fallacy could be a sign that you don't trust your present judgement compared to when you made the original decision to put the resources in. The right question is to ask why you changed your mind so strongly that the degree isn't worth it even at significantly less additional cost. Is it because of new information, new values, new rationality skills, or just being in a bad mood right now?

An advantage is that you feel just as clever for coming up with the right questions whatever you decide, which ought to make it a bit easier to motivate yourself to implement.

Replies from: handoflixue
comment by handoflixue · 2012-03-29T21:12:06.044Z · LW(p) · GW(p)

Sunk cost fallacy could be a sign that you don't trust your present judgement compared to when you made the original decision to put the resources in

Definitely useful. I personally find the two have a very different emotional/internal "flavor" - I can tell when I want to avoid a sunk cost vs when I'm in a bad mood and just don't want to deal with a cost - but that's not necessarily always true of me, much less anyone else.

comment by jimmy · 2012-03-24T23:13:22.285Z · LW(p) · GW(p)

It's a legitimate future consequence only if the person says, "The sense of having abandoned a sunk cost will make me feel sick to my stomach for around three days, after which I would start to adjust and adapt a la the hedonic treadmill"

I wouldn't even allow that. I much prefer to treat such a sense as a (misguided) signal about the map, rather than a piece of territory that I intrinsically care about. Seeing things with this framing allows you to explore the signals with less distortion, and allows them to go away more easily once you take them into account. If you start treating them as things to worry about, then you get sadness about sadness, fear about fear, and other information cascades that can be quite destructive.

Additionally, in the cases where the irrational discomfort actually sways your decision over the threshold, you're training yourself to listen to things that should not exist in the first place, which just reinforces the problem.

Replies from: handoflixue
comment by handoflixue · 2012-03-29T21:07:25.948Z · LW(p) · GW(p)

You're training yourself to listen to things that should not exist in the first place

This strikes me as a perfect lead-in to Spock style "Bah, my emotions SHOULDN'T exist, therefore I will just IGNORE them". This does not work well.

If we ignore a REAL negative consequence in our planning, we're going to get frustrated when the consequence happens anyway, because now it's an UNEXPECTED negative consequence of our decision. If we further decide that we're not REALLY having that negative consequence, then it will get further exacerbated by our unwillingness to accept the situation, and therefore our inability to actually do anything to fix the situation. It's entirely possible that we're now miserable for two weeks instead of three days.

Heck, It's entirely possible the whole thing could have been fixed by thinking about it and saying "I would normally feel bad, but since I'm aware of this, I can instead just remind myself of the awesome rational decision I'm making, and how cool my life is because of this Rationality thing!", possibly supplemented by a celebratory slice of cake to reinforce that this is a positive, not a negative, event. (And cake makes everyone happy!)

Replies from: jimmy
comment by jimmy · 2012-03-29T23:06:32.547Z · LW(p) · GW(p)

This strikes me as a perfect lead-in to Spock style "Bah, my emotions SHOULDN'T exist, therefore I will just IGNORE them". This does not work well.

No no no, not that. That's terrible!

"listen" is ambiguous - oops. You want to acknowledge the feeling, but not act on it. Once you can acknowledge it, you can realize that it doesn't make sense, and then release that feeling and be done with it.

Replies from: handoflixue
comment by handoflixue · 2012-03-30T21:38:52.658Z · LW(p) · GW(p)

If I'm hungry, I can't just ignore that and continue to function at 100%. I can go eat and restore my blood sugar, or I can delay that hunger and function at less-than-peak efficiency because my body does not have all the resources it needs.

Emotions are the same way - if I feel upset or a sense of loss, I have to address that emotion. This is not always a simple "acknowledge and release" 5 minute process.

Believing otherwise will screw me up just as badly as believing I can cure hunger by "acknowledging and releasing" it instead of eating lunch.

Replies from: jimmy
comment by jimmy · 2012-03-31T04:08:04.156Z · LW(p) · GW(p)

I think we agree a lot more than you realize. Pretending that you aren't feeling emotion that you are feeling is a recipe for disaster. In your analogy, I recommend the equivalent of eating.

However, this doesn't mean that you yield to the emotions when they're pushing you towards bad decisions. It also doesn't mean you pretend that it has to be some big ordeal to fix the problem right. Those are both very bad ideas for more reasons than are obvious.

"eating" can be anywhere from a split second automatic response to a extended ordeal. If you know what you're doing, the Phd example is not more than a 5 minute process - I've walked people through worse things in about that time.

Replies from: Blueberry
comment by Blueberry · 2012-04-02T08:42:23.709Z · LW(p) · GW(p)

If you know what you're doing, the PhD example is not more than a 5-minute process - I've walked people through worse things in about that time.

Please elaborate!

Replies from: jimmy
comment by jimmy · 2012-04-02T09:04:01.032Z · LW(p) · GW(p)

I "cheated" a bit, in that I had them spend ~15-20 minutes with a chat bot that taught them some skills for getting in touch with those parts of their mind. Actually working through the problem was a few minutes of text chat that basically pointed out that there was no magic option and that they needed to let go of the problem emotions. All the real magic was in putting them in the state of mind to shut up and listen.

I talk about it a bit here

Replies from: handoflixue
comment by handoflixue · 2012-04-02T23:28:31.631Z · LW(p) · GW(p)

I suppose the best analogy I could offer here is getting robbed. It takes maybe 5 minutes to get robbed. There's (usually) nothing you can do to fix the situation or recoup the cost. But people still feel bad about it for a while.

Your link seems to suggest, more or less, using hypnosis to just wipe out this guilt - except the examples you give don't really seem to address that emotional side at all. You're focusing on the intellectual acceptance of "yes, I should drop the PhD", which isn't what I'm talking about. I'm talking about the emotional baggage that comes with that, the sense that you've wasted 2 years as a sunk cost that isn't even doing you any good at this point.

If you're really hypnotizing away that guilt, that emotional response, then I guess I am misunderstanding you. Is that the case? Because I would say that, based on my personal experience, that is seriously dangerous territory. Not to say you shouldn't trust yourself with it - I do it myself. But it is a technique I have seen cause a lot of people serious problems, and definitely not one I'd teach casually.

Replies from: jimmy
comment by jimmy · 2012-04-03T01:44:45.667Z · LW(p) · GW(p)

If you're really hypnotizing away that guilt, that emotional response, then I guess I am misunderstanding you. Is that the case?

Haha! YES!

...people still feel bad about it for a while.

Yep. They're doing it wrong.

You're focusing on the intellectual acceptance of "yes, I should drop the PhD"

No no no no no! I'm talking about the emotional acceptance. It is a very different thing than intellectual acceptance, but that does not mean they can't track each other. If your mind is organized well, they do track.

Have you read Kaj's post on overcoming suffering and suffering as attention allocation conflicts? This is basically what I'm talking about.

I would say that, based on my personal experience, that is seriously dangerous territory

There is a very important distinction between suppressing emotion (perhaps successfully) and eliminating the cause of the emotion directly by coming up with a better way of handling the conflict. The latter is healthy and quite low risk compared to the null option. This is what I do - with or without "hypnosis".

Suppressing emotion is a recipe for disaster.

comment by Strange7 · 2012-03-30T22:26:07.721Z · LW(p) · GW(p)

(Sometimes I feel like hitting people with a wooden stick when they do this, but that act just expresses an emotion rather than having any discernible positive consequences.)

It would have the consequence of conditioning in the subject's mind an association between a particular thought process and being hit with a stick. Most people don't like being hit with sticks, so the association is likely to make them avoid that particular thought process. Do you not consider "teaching people to avoid a dangerously stupid thought process" a positive consequence?

Replies from: Bluehawk
comment by Bluehawk · 2012-04-04T21:43:03.549Z · LW(p) · GW(p)

Actually they would associate the stick with a number of things, including but not limited to the stupid thought process. They would be quite likely to associate the stick with their encounter with Eliezer, and to their (failed) attempt to converse with and/or follow his thought processes. Mind: They associate the stick with all aspects of the attempt, not only with the failure.

It might work in a Master/Apprentice scenario where the stick-hitting-victim is bindingly pre-committed to a year of solitude with Stick-Happy!Eliezer in order to learn from him the art of Cognitive Kung Fu. This is the only scenario I can immediately visualize in which the stick-hitting victim would not immediately decide that Stick-Happy!Eliezer is a person they can get away with avoiding, and possibly with reporting to the police for assault.

EDIT01: This is assuming that the experiential sample size is 1.

Replies from: Strange7
comment by Strange7 · 2012-04-06T03:21:06.005Z · LW(p) · GW(p)

I was only pointing out that arguably-positive consequences would be present. I agree that they most likely would not predominate outside controlled conditions, and the overall decision not to engage in spontaneous armed assault was a wise one.

comment by Will_Newsome · 2012-03-24T11:52:57.325Z · LW(p) · GW(p)

Rationalization is an important skill and should be rewarded, not punished. If you never try to rationalize others' decisions then you won't notice when they actually do have a good justification, and if you never practice rationalization then you'll never get good enough at it to find their justifications when they exist. The result is gross overconfidence in the stupidity of the opposing side and thus gross overconfidence in one's own rationality. That leads to tragedies and atrocities, both personal and societal.

Replies from: Alicorn, handoflixue
comment by Alicorn · 2012-03-24T17:27:39.181Z · LW(p) · GW(p)

Perspective-taking is a separate "skill" from rationalizing one's own behavior.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-24T18:28:56.522Z · LW(p) · GW(p)

Hm, is perspective-taking the same skill that I was talking about? I can't tell. Also I thought that Eliezer's examples were phrased in the hypothetical, and thus it'd be rationalizing others' beliefs/behavior, not one's own. I'm not sure to what extent rationalizing a conclusion and rationalizing one's own behavior are related. Introspectively, the defensiveness and self-justifying-ness inherent to the latter makes it a rather different animal.

comment by handoflixue · 2012-03-29T21:01:42.776Z · LW(p) · GW(p)

"Coming up with explanations" is a good skill.

"Coming up with a single, stupid explanation, failing to realize it is stupid, and then using it as an excuse to cease all further thought" is a very, very bad skill.

Thinking "well, but abandoning a sunk cost actually IS a negative future event" is smart IFF you then go "I'd be miserable for three days. How does that weigh against years spent in the program?"

It's very, very bad, however, if you stop there and continue to spend 2 years on a PhD just because you don't want to even THINK about those three days of misery.

I think understanding this dichotomy is critical. If you stop even thinking "well, but abandoning a sunk cost IS a negative future event" because you're afraid of falling in to the trap of then avoiding all sunk costs, then you're ignoring real negative consequences to your decisions.

comment by Dr_Manhattan · 2012-03-27T16:04:05.983Z · LW(p) · GW(p)

Not attempting to answer the question, but I've been nursing a thought about the rationality org that might be useful: The nearby Stanford University has a world-renowned program in "decision sciences" http://decision.stanford.edu/ which is basically "how to make decisions scientifically"; they virtually invented influence diagrams, they teach biases as a part of the program, etc. The head of the program, Ronald Howard, also co-founded http://www.decisioneducation.org/ , his teen-oriented "rationality org".

  • there are probably things to learn from both

  • if "rationality org" has a value proposition to these organizations they can be useful in teaching opportunities and for credibility building

Replies from: Vladimir_Nesov, Eliezer_Yudkowsky, Incorrect
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-29T23:16:15.230Z · LW(p) · GW(p)

These do indeed sound like people to talk to, and local too - thanks!

Replies from: Dr_Manhattan, MichaelVassar
comment by Dr_Manhattan · 2012-03-31T14:31:17.556Z · LW(p) · GW(p)

The other guy to talk to (unless you decide to strategically approach younger faculty) is Ross Shachter, inventor of the Bayes-Ball algorithm and author of some other interesting AI papers.

comment by MichaelVassar · 2012-03-30T00:34:38.459Z · LW(p) · GW(p)

Please tell me how that goes. Also, are you planning to look into this, or Anna et al.?

comment by Incorrect · 2012-03-30T03:39:58.252Z · LW(p) · GW(p)

A comma broke one of the links in your post.

comment by handoflixue · 2012-04-03T00:12:40.137Z · LW(p) · GW(p)

I am reminded of a game we played in elementary school:

There are 100 pieces of candy in a jar, and 20 students. Each student gets to vote "share" or "selfish". If EVERYONE votes to share, the candy is split evenly. If ONE person votes "selfish", they get all the candy. If MORE than one person votes "selfish", no one gets candy, and the experiment is repeated until either the candy is distributed, or X iterations (3-5 seems normal) have passed.

Before each iteration, the students are allowed to discuss strategy. The solution, of course, is for a single trustworthy person to make a binding commitment to vote "selfish" and then evenly distribute the candy. Because this person has pre-committed to vote "selfish", everyone else knows that voting "selfish" themselves will result in no candy - unlike a commitment to have everybody share.
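
To make the incentive structure explicit, here is a sketch of the payoff rule as described (the vote lists are hypothetical; in the winning strategy the lone "selfish" voter then redistributes the candy by prior agreement):

```python
def payoffs(votes, candy=100):
    selfish = [i for i, v in enumerate(votes) if v == "selfish"]
    if not selfish:                     # everyone shared: split evenly
        return [candy // len(votes)] * len(votes)
    if len(selfish) == 1:               # lone defector takes the whole jar
        return [candy if i == selfish[0] else 0 for i in range(len(votes))]
    return [0] * len(votes)             # multiple defectors: nobody gets candy

print(payoffs(["share"] * 20))                           # 5 pieces each
print(payoffs(["selfish"] + ["share"] * 19))             # 100 to the pre-committed voter
print(payoffs(["selfish", "selfish"] + ["share"] * 18))  # all zeros
```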

I've always considered it a decent test of group rationality and social skills whether or not this consensus actually gets reached before the final iteration. I've seen groups that hit on this, had a single iteration with a few outliers testing it just to be sure that the person would really vote "selfish" like they said, and then implemented the strategy. I've seen others where 10-20% of the audience simply would not believe the person who made the pre-commitment, and so there was never a successful iteration.

Replies from: TheOtherDave, AspiringKnitter
comment by TheOtherDave · 2012-04-03T00:48:50.165Z · LW(p) · GW(p)

This all hinges on trusting the supposedly trustworthy person, of course.

Replies from: handoflixue
comment by handoflixue · 2012-04-03T20:30:08.752Z · LW(p) · GW(p)

Yes, but this being a school setting, "I made a promise to the teacher" is usually considered sufficiently binding. For a more advanced level of rationality, I'd say that coming up with a plausibly binding commitment strategy would be part of the challenge :)

comment by AspiringKnitter · 2012-04-04T03:05:09.996Z · LW(p) · GW(p)

Doesn't that rely on everyone eating candy? One person who doesn't eat candy and therefore isn't invested in the outcome could wreck that.

Also: theoretically, a student could win hundreds of pieces of candy? I'm sure the parents were very happy about that.

Replies from: handoflixue, Nornagest
comment by handoflixue · 2012-04-04T08:27:44.532Z · LW(p) · GW(p)

Anyone can ruin it deliberately, true - it works best with a cooperative group, not a competitive one. Modifying it for a competitive group would definitely remove it from the realm of "useful introductory ideas", but would probably still be a useful exercise for more advanced classes.

Candy is also concrete and engaging - most people don't respond as enthusiastically to raffle tickets or $0.25. As part of a larger set of challenges, using play money with some sort of modest exchange of play money -> small prizes at the end of the session might work.

comment by Nornagest · 2012-04-04T03:14:39.574Z · LW(p) · GW(p)

Doesn't need to be candy, necessarily. Money's the first thing that comes to mind, but if that's prohibitively costly to maintain while keeping the prizes attractive, you could play with probabilities: build a set of diverse prizes of approximately equal value, say, and instead of money or candy distribute tickets to a raffle where the winner'd be able to choose one prize. That might have some funny consequences in this experiment, though, depending on the quirks of how people think about probabilities.

comment by Furslid · 2012-03-24T21:14:31.337Z · LW(p) · GW(p)

An important question to ask that you are leaving out is "What are my alternatives to this course of action?"

The comparison of consequences requires an alternative set of consequences to compare to. Consider the question "Should I be in graduate school?" The answer may well be different if the alternative is getting a decent job than if the alternative is starving unemployment.

The listing of alternatives also helps catch cheating. If the alternative is implausible and disastrous (stay in grad school or be a hobo) then it is likely that Checking Consequentialism isn't being done seriously. The alternative compared needs to be a serious answer to the question "What would I do if I couldn't/wouldn't do this?"

Replies from: daenerys
comment by daenerys · 2012-03-29T08:08:42.994Z · LW(p) · GW(p)

I think an "Improv to Rationality" class would be very fun and interesting. I don't know how WELL the games would work to teach the desired skills, but I definitely think it would make the lessons more memorable, AND you'd be able to add the phrase "Team-Building" to your class description. You can sell anything if you promote it as "Team-Building." :P

A game that would teach how to look for alternative options (see parent) could be "New Choice". In this game, a scene is played out, with the moderator occasionally demanding new choices from the players. (Video Example (includes 15 sec commercial)).

Not only would it teach about not becoming too set on a current course of action, but it could have the lesson of: "When thinking up alternative actions, don't just stop at one or two. Instead imagine a moderator yelling 'New Choice!' at you. Keep thinking up new alternatives until you get to crazy-land (i.e. 'Here's a cat from my pants')."

Another sub-skill mentioned, was the ability to recognize bad rationalizations (i.e. "I will feel bad if I quit something I put so much effort into."). Perhaps one way to learn to recognize these, is to do bad rationalization ON PURPOSE, to see what it feels like. A good game for that is "Challenge in a Minute".

In this game, a two-sided silly debate is chosen, such as pirates v. ninjas, or Coke v. Pepsi. All the players line up and challenge each other's arguments. Arguments are supposed to start somewhat seriously ("Ninjas are sneakier") and devolve over the course of the game ("I don't like his pants"). (It's easier to just watch a video, than explain the rules.)

Participants could then brainstorm what it felt like to come up with the bad rationalizations. I would expect answers like: Grasping at straws, Searching your brain for things to support your position, Being proud of clever retorts, etc. Participants could then ACTUALLY try to answer the debate question (ninja v. pirates, or whatever) in teams, and then discuss what it felt like to actually try to find the answer. I would expect answers like: Defining the problem, Not knowing the answer, Looking for sources, Being willing to change my mind (debate question must be sufficiently silly that most people would be willing to change their mind.)

Note: I'll admit that I worked backwards for this. Instead of thinking "What's the best way to teach consequentialism?", I thought "It would be awesome to do an Improv Rationality class. How can I relate some improv games to the lessons we're trying to teach?" So this solution probably isn't optimal... But it IS fun!

Replies from: wedrifid
comment by wedrifid · 2012-03-29T12:23:09.067Z · LW(p) · GW(p)

Being proud of clever retorts, etc. Participants could then ACTUALLY try to answer the debate question (ninja v. pirates, or whatever) in teams, and then discuss what it felt like to actually try to find the answer.

Short exercise. Does anyone actually think pirates stand a chance against professionally trained assassins? I thought the only reason people defend pirates is because it's a way to say both of them (or their identity-memes) are just so damn cool.

Replies from: Eliezer_Yudkowsky, Jayson_Virissimo, Nornagest, Raemon, Desrtopa, thomblake, ciphergoth, CronoDAS, Bugmaster, APMason, wedrifid, None
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-29T23:00:58.773Z · LW(p) · GW(p)

Well, and now the question "How can we teach the skill Check Consequentialism?" has degenerated into an erudite debate on pirates vs. ninjas.

I have never, ever been tempted to say this before on LW but WELCOME TO REDDIT.

Edit: The conversation seems much more intelligent than average Reddit, but I still think we're solving the wrong problem here.

Edit 2: And now, no longer feeling as encouraged by the 150 comments I saw when I checked back in.

On the plus side, I'll concede we've demonstrated beyond a reasonable doubt that "pirates vs. ninjas" can be an argument-generator for all audiences.

Replies from: thomblake, CronoDAS, thomblake
comment by thomblake · 2012-03-29T23:39:54.773Z · LW(p) · GW(p)

puh-lease. There were pirates vs. ninjas debates on the Internet long before Reddit existed.

You happen to have carved out a small portion of the Internet, a medium that aside from porn is primarily for pirates vs. ninjas debates, and declared it's for some other purpose. That doesn't mean you're allowed to be surprised when pirates vs. ninjas debates happen.

And this site's software is based on Reddit. Is "WELCOME TO REDDIT" even worth saying when there's a "powered by reddit" icon on the bottom-right of the site?

Now, back to the local version of pirates-vs-ninjas, which sometimes looks deceptively like discussion of rationality...

Replies from: APMason
comment by APMason · 2012-03-29T23:44:21.358Z · LW(p) · GW(p)

You happen to have carved out a small portion of the Internet, a medium that aside from porn is primarily for pirates vs. ninjas debates, and declared it's for some other purpose. That doesn't mean you're allowed to be surprised when pirates vs. ninjas debates happen.

Is he allowed to be surprised when lesswrong porn happens?

Replies from: thomblake
comment by thomblake · 2012-03-29T23:48:09.553Z · LW(p) · GW(p)

I think porn itself has somehow managed to stay off Less Wrong long enough to warrant surprise.

But no, it's not warranted to believe that porn about Less Wrong does not exist. Rule 34.

Replies from: gwern, Blueberry
comment by gwern · 2012-03-30T01:32:55.602Z · LW(p) · GW(p)

I've joked in the past about writing YudkowskyxBostrom slash fics; I swear to Bayes, if people keep annoyingly discussing LW porn, I will write it!

Replies from: MixedNuts, thomblake, thomblake
comment by MixedNuts · 2012-03-30T11:27:36.168Z · LW(p) · GW(p)

Please include a threesome with Vassar.

Replies from: Multiheaded
comment by Multiheaded · 2012-04-02T10:10:12.928Z · LW(p) · GW(p)

Pass the brain bleach.

comment by thomblake · 2012-03-30T02:13:40.830Z · LW(p) · GW(p)

I'd write it, but I'm too busy working on that fic where Harry has to marry Draco to save someone from Azkaban.

comment by thomblake · 2012-03-30T01:56:37.736Z · LW(p) · GW(p)

Corollary to Rule 35. You have to now.

comment by Blueberry · 2012-03-30T00:18:37.342Z · LW(p) · GW(p)

It's not porn, but did you see Yvain's drawing?

Replies from: thomblake
comment by thomblake · 2012-03-30T00:23:43.154Z · LW(p) · GW(p)

Thanks, I hadn't seen that. Though for maximal thread derailment, the image really should've appeared right in your comment.

comment by CronoDAS · 2012-04-02T10:54:41.968Z · LW(p) · GW(p)

Are you trying to spoil our fun?

comment by thomblake · 2012-03-30T01:05:43.141Z · LW(p) · GW(p)

Besides, it's not like we're having really ridiculous thread-derailing discussions. It's not like anyone's tried to claim something insane, like that Twilight Sparkle is the best pony, or a plane on a treadmill will be able to take off, or that billy goats are not delicious.

Replies from: wedrifid, wedrifid
comment by wedrifid · 2012-03-30T03:37:29.446Z · LW(p) · GW(p)

or a plane on a treadmill will be able to take off,

Reading this prompted me to ask myself a similar question:

Could a wind-powered plane on the ground traveling directly downwind take off? My answer is "yes, and science is awesome!" but I expect I'd get into arguments about it with even my educated friends, who would say "no".

Replies from: thomblake
comment by thomblake · 2012-03-30T03:45:20.011Z · LW(p) · GW(p)

Perhaps even better, a wind-powered car can travel faster than the wind downwind! link

Replies from: wedrifid
comment by wedrifid · 2012-03-30T03:58:14.639Z · LW(p) · GW(p)

Perhaps even better, a wind-powered car can travel faster than the wind downwind! link

This seems to imply that you have some other mechanism for the plane to take off than by harnessing that very mechanism with enough efficiency and elegance that it can generate lift to take off engine free, powered by wind, with the direct force of the wind actually counting against it. Either that or you evaluate engineering coolness very differently.
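
(For the skeptical reader, a rough power-balance sketch of why the downwind claim checks out, treating the cart as wheels geared to a propeller and lumping all losses into a single efficiency $\eta$ - a simplification, not a full aerodynamic treatment. At cart speed $v$ and wind speed $w$ with $v > w$, the wheels extract power at the full ground speed, while the propeller only has to do work on air approaching at the slower relative speed $v - w$:

$$P_{\text{wheels}} = F_w v, \qquad P_{\text{prop}} \approx F_t (v - w), \qquad F_t (v - w) = \eta\, F_w v.$$

The thrust-to-drag ratio $F_t / F_w = \eta v / (v - w)$ exceeds 1 - a net forward force - whenever $v < w / (1 - \eta)$; with $\eta = 0.7$ that allows steady speeds up to about $3w$. No energy is created; it is pumped from the ground-relative motion into the air-relative motion.)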

comment by wedrifid · 2012-03-30T03:47:45.969Z · LW(p) · GW(p)

Besides, it's not like we're having really ridiculous thread-derailing discussions.

I know you are mostly just building up to a "Twilight Sparkle" joke, but I'm going to express agreement with this anyway. The main way that this thread has gone off topic is that the skill daenerys is testing and training isn't 'consequentialism'; it is a different rationalist skill.

comment by Jayson_Virissimo · 2012-03-29T14:04:31.991Z · LW(p) · GW(p)

Short exercise. Does anyone actually think pirates stand a chance against professionally trained assassins? I thought the only reason people defend pirates is that it's a way to say both of them (or their identity-memes) are just so damn cool.

What are we talking about here? Ninjas didn't normally kill in open combat (nor did pirates, if they could help it), and pirates didn't generally fight set-piece battles on land. To give a serious answer to such a question, you would first need to specify a combat situation: 17th-century-style naval combat (whose outcome would depend largely on the state of Japanese versus Portuguese maritime technology), open-field daylight desert-island team combat, single combat (with or without firearms, and at what distance?), coastline castle siege, covert assassination of the captain/quartermaster aboard a ship, etc.

Replies from: TheOtherDave, wedrifid
comment by TheOtherDave · 2012-03-29T14:14:53.688Z · LW(p) · GW(p)

Not sure I agree.

If I'm a trained covert assassin and I find myself in the middle of a 17th century-style naval combat involving Japanese vs. Portuguese marine technology, it seems what I ought to do is attempt to covertly board the enemy vessel and assassinate its captain. If I'm a trained covert assassin and I find myself in the middle of open field daylight desert island team combat against trained shock troops, it seems what I ought to do is run away and come back in about twelve hours and try again.

More generally, part of what a sufficiently trained combatant is taught to do is control the combat situation, rather than take it as a given.

comment by wedrifid · 2012-03-29T14:25:19.743Z · LW(p) · GW(p)

What are we talking about here?

I am talking about ninjas and pirates in any situation which doesn't involve giving all the ninjas idiot balls. So, for example, the ninjas don't swim out to sea and start throwing shurikens at pirate ships.

The best the pirates could do, if they somehow got advance notice that the ninjas were out to get them, is hide and stay away from any land the ninjas have access to (and can maintain reasonable intelligence on). Meanwhile they are almost no threat at all to the ninjas.

Suppose that, for some reason, a stalemate - with pirates hiding from the ninjas and refraining from any activities that would give the ninjas a chance to catch them - doesn't count as "ninjas are clearly better". Even then it would take very little time for the ninjas to gain sea-based dominance too. They are a paramilitary organisation with government support. They'll quickly get better ships, easily get sailing training or just conscript sailors, and then go hunt down the pirates.

If ninjas and pirates get into conflict, the pirates just lose. Fortunately for pirates, they aren't out to win - they are out to get booty. Pirates prey on the weak or undefended and avoid fighting the powerful - be it ninjas, armies, navies, heavily defended settlements or sith lords. They also make sure they don't do anything that pushes them across the threshold between being a nuisance to the powerful and being a serious threat that needs to be dealt with by elite forces (like ninjas).

Replies from: thomblake, Strange7
comment by thomblake · 2012-03-29T17:29:37.845Z · LW(p) · GW(p)

They are a paramilitary organisation with government support.

This also applies to my paradigm of "pirate".

Replies from: Alicorn, wedrifid
comment by Alicorn · 2012-03-29T17:37:10.905Z · LW(p) · GW(p)

They are a paramilitary organisation with government support.

This also applies to my paradigm of "pirate".

You might be thinking of privateers.

Replies from: thomblake
comment by thomblake · 2012-03-29T18:42:22.114Z · LW(p) · GW(p)

Yes. There was a very blurry distinction between the two while England wanted to encourage piracy against France.

comment by wedrifid · 2012-03-29T17:41:18.346Z · LW(p) · GW(p)

This also applies to my paradigm of "pirate".

I don't think that word means what you think it means.

Replies from: thomblake
comment by thomblake · 2012-03-29T19:00:48.312Z · LW(p) · GW(p)

Pirates were often privateers, which fit that definition very well.

Furthermore, the overlap between piracy and warfare meant that many pirates served in paramilitary forces before, after, or during their pirate days. For an illustration, note this description of the Battle of New Orleans, including a prominent pirate acting as an irregular military advisor / commander, and many of his compatriots serving even in land-based artillery companies.

comment by Strange7 · 2012-03-29T16:57:39.791Z · LW(p) · GW(p)

Let's say the pirates have a secret, semi-fortified island base. Ninja objective is to eliminate the threat of pirates to local shipping; pirate objective is to escape with as much of the loot as possible, or failing that, a workable ship and their lives. Ideally the pirates would also like to have a base from which to operate, but such a location is only really useful if it hasn't already been located by enemy forces.

comment by Nornagest · 2012-03-29T17:47:31.935Z · LW(p) · GW(p)

The identity-memes are the only reason the question even exists. Historical pirates were mostly desperate or ambitious but otherwise ordinary sailors, and usually had pretty short careers. Historical ninja were usually dirt-poor burakumin without much in the way of reliable support, and -- in common with a lot of other historical assassins -- were individually used more as ammunition than as soldiers. I'm having a hard time coming up with a reasonable scenario in which they (as opposed to the pop-cultural image of either one) would have any incentive to fight, and if you stretch to create one the outcome would be almost completely determined by the circumstances.

Replies from: fezziwig, Vaniver, wedrifid
comment by fezziwig · 2012-03-29T18:48:12.415Z · LW(p) · GW(p)

Historical pirates were mostly desperate or ambitious but otherwise ordinary sailors...

Even that's pretty time-period dependent; check out e.g. Jean Lafitte's utterly ridiculous career.

comment by Vaniver · 2012-03-29T18:09:15.147Z · LW(p) · GW(p)

Well, piracy was a huge thing in Japan and China and so on, so if there were any conflicts between them, they could have been recorded. But I don't see why there would have been - typically, there was no significant benefit to killing a pirate captain (rather than sinking his boat or hanging him and all of his crew), and so assassins would only be employed against people whose deaths were meaningful enough (generals, title-holders, etc.). Similarly, I doubt ninja would transport valuable cargo all that frequently, and typically it was Japanese pirates preying on Chinese vessels anyway.

I mean, pirate just means "sea bandit" but evokes the image of mostly European sea bandits at a time when navies were based far away, coastal land was essentially free, and cargoes were really valuable, because that's when there was a veneer of excitement over lowlifes murdering for fun and profit.

comment by wedrifid · 2012-03-29T17:54:01.262Z · LW(p) · GW(p)

Nornagest!History seems to be different to Wikipedia!History. My limited familiarity only extends to the latter.

Replies from: Nornagest
comment by Nornagest · 2012-03-29T18:04:52.389Z · LW(p) · GW(p)

Presuming you're talking about my take on ninja, I'm informed mainly by martial arts lore, which (outside the schools calling themselves ninjutsu, which incidentally are almost all modern inventions) is probably a little more interested in demystifying the tradition than the Wikipedia authors are. The wiki definitely puts a more glamorous spin on it, but reading between the lines I don't see too much that's actually incompatible with my take -- note the emphasis on infiltration and sabotage rather than combat, and that the field agents were mostly drawn from the lower class.

comment by Raemon · 2012-03-29T14:19:22.794Z · LW(p) · GW(p)

I sort of assumed half the pirate crew would just BE ninjas in disguise, and then the other half would just be dead.

comment by Desrtopa · 2012-03-30T19:05:20.770Z · LW(p) · GW(p)

Pirates were also professionals at killing people; they made their livings by attacking people and taking their stuff.

The question of "Wikipedia pirates vs. Wikipedia ninjas" is still confused. Under what circumstances are they fighting? Are the ninjas inexplicably crewing a ship? Are the pirates stopping in at port? Has Omega arranged a 5 v 5 pirate/ninja arena deathmatch?

comment by thomblake · 2012-03-29T13:51:16.285Z · LW(p) · GW(p)

Are we talking about real pirates and ninjas, or fantasy pirates and ninjas?

Amongst other advantages, fictional pirates have the devil's own luck.

Replies from: wedrifid, TheOtherDave
comment by wedrifid · 2012-03-29T14:21:03.111Z · LW(p) · GW(p)

Are we talking about real pirates and ninjas, or fantasy pirates and ninjas?

Real pirates and ninjas. Fantasy pirates and ninjas are of course equally awesome, with the pirate persona perhaps having more potential for injecting individual charisma. Because a ninja who is complaining that the rum is gone just seems incompetent.

It is my position that any debate about ninjas vs pirates that is maintained must be about declaring the coolness of fictional pirates and ninjas. (Which is a perfectly respectable pastime!) Anyone who actually thinks real pirates aren't just ridiculously worse than ninjas is not thinking very well at all.

Replies from: Bugmaster, daenerys, Hul-Gil
comment by Bugmaster · 2012-03-29T23:16:39.783Z · LW(p) · GW(p)

Why would real ninjas fight real pirates ? Isn't that the job for the Coast Guard, or the ancient Japanese equivalent thereof ?

Replies from: wedrifid
comment by wedrifid · 2012-03-30T03:06:07.511Z · LW(p) · GW(p)

Why would real ninjas fight real pirates ?

Goatee envy. Pirates have much better goatees.

Isn't that the job for the Coast Guard, or the ancient Japanese equivalent thereof ?

And from what I understand the Japanese pirates were mostly a problem for China, not Japan.

comment by daenerys · 2012-03-29T14:44:24.710Z · LW(p) · GW(p)

Real pirates and ninjas.

If we are being "real" then it only makes sense that the historical ninja would be fighting wokou, which are Japanese pirates of the period.

Replies from: wedrifid
comment by wedrifid · 2012-03-29T14:53:39.806Z · LW(p) · GW(p)

If we are being "real" then it only makes sense that the historical ninja would be fighting wokou, which are Japanese pirates of the period.

Yes, ninjas can beat Japanese pirates of that period - if for some reason the ninjas' masters decide that having a bunch of pirates who mostly attack neighboring countries is a bad thing. And if you allot enough time to mount a campaign, give them a map, and supply an even more highly unlikely set of orders, they could beat any other pirates of their time period too.

comment by Hul-Gil · 2012-04-06T01:17:41.773Z · LW(p) · GW(p)

Worse in what manner? In individual combat? A pirate crew vs. an association of ninjas?

From the comments I've read so far, I think the hypothetical situations you've used to determine that ninjas would win are grossly weighted in favor of ninjas. For example, you've already said it can't be any sea-based conflict (unless the ninjas are specially-trained sailor-ninjas on a navy ship, rather than passenger ninjas booking passage on a merchant ship - as most would do if required to travel by sea, since sailing is incidental to their main function, in this case assassination), "because why would ninjas be at sea?" Yet it can be "drunk pirates in whorehouses, unaware they are being targeted, vs ninjas that know exactly where they are and can get to them in time."

Your ninjas also have unrealistic government support (they were usually employed by noble families or daimyos, not imperial forces, and were considered quite expendable - see hairyfigment's comment below), "better" ships than the pirates (AFAIK almost impossible, considering the naval technology the two sides would be using), the ability to obtain whatever training is required (no time limit? can't the pirates buy training too?), and awareness of the conflict while the pirates lack it (or would they be getting drunk on land? well - maybe), etc.

A distinction should be made between a strict determination of fighting prowess - pirates vs ninjas in equal numbers in open combat - and the sort of situation you seem to be thinking of, wherein we try to be as realistic as possible, and all factors (such as whether or not ninjas would be at sea, and a sailor's propensity to get drunk) are considered. The latter is a lot more difficult to figure out, since so much would depend on circumstance (as in your drunk pirate example). This should also include allowance for the favored methods of both sides - pirates fighting at sea, ninjas not charging forward in open combat but assassinating and infiltrating - though you only seem to make the latter allowance.

For the former situation, I believe pirates could possibly win. They have better guns, and contrary to popular assumption, pirates could be very skilled in swordsmanship and general brawling. A lot of them were ex-navy, and in any case you wouldn't survive long as a pirate without obtaining some competence. Ninjas might be trained in espionage and assassination, but that doesn't include open combat, and they'd likely have less experience with it than pirates. They were trained in swordsmanship as well, though, and quite possibly more thoroughly (but in some cases inadequately!), and bows could be as good as or better than guns at many points in our possible time-range.

For the latter, here are a few factors to consider. One, ninjas didn't often attack in groups; sometimes they operated in small teams, but not any as large as a pirate crew. They were not used to wipe out large groups of people, but individual targets. Already we must depart from realism if we want to grant anything like equal numbers; it wouldn't be interesting to think about "one ninja vs a crew of pirates", but it weights the situation in favor of the ninjas if we go beyond "a small group of ninjas vs a crew of pirates". Two, would the pirates be aware they're being targeted by assassins? That would seem to depend on why exactly they're being targeted - a bounty they might be aware of; a covert vendetta for personal reasons, probably not. Trying to think of a realistic reason for the conflict might be a bit difficult. Three, I don't think ninjas could ever requisition ships, but if they could, they would still be at a disadvantage in naval combat considering the superiority of European vessels up until very recently. (It's not like pirates would be exactly inexperienced at naval combat, note.) Four, the pirates might be based at an unknown location, or nowhere at all, leaving the ninjas to attempt to catch them either in the act of raiding a coastal village, or making landfall to obtain supplies. Five, the ninjas might also be based in a location unknown to the pirates, or operating clandestinely; so while I initially considered that the pirates might raid their Ninja HQ, that might not be possible.

So... we might have a crew of pirates making landfall in various locations and attempting to locate and kill a small (<12?) team of ninjas, and said small team of ninjas attempting to catch them in the act and kill them right back. I suppose there is also the possibility of ninjas acquiring a vessel to pursue the pirates, although I can't see how that would end any way but badly for them. They could hardly crew the entire vessel themselves, even if they had sufficient numbers, as they're not sailors. You could give them some year(s) to obtain sailing skill, but then, you could also give the pirates some year(s) to obtain espionage skill. You could give them an unhistorical amount of support and grant them a naval vessel with crew, but now it's not strictly pirates vs. ninjas.

A variety of situations could develop from this: ninjas creep aboard anchored pirate vessel and attempt to assassinate the crew, pirates raid village where ninjas are staying, pirates and ninjas engage in naval combat, land-based combat... I think, as Nornagest says above, it is clear that you have to stretch to come up with a situation in which they're actually engaging in conflict, and if you do, who wins depends entirely on the circumstance you have concocted.

To say that anyone who doesn't think real pirates are "ridiculously worse" than ninjas "is not thinking well at all" seems quite absurd to me, and even smacks of "ninja fanboyism". It's by no means so clear-cut that any pirate-supporter is obviously mentally deficient. And I like ninjas much more, personally. Pirates were awful people who deserve to be vilified, not romanticized. I don't even know why I've put this much effort into supporting them, come to think of it, except my general urge to correct what I see as error. I've been exposed to too much weeaboo-ism, perhaps.

Replies from: wedrifid
comment by wedrifid · 2012-04-06T02:06:22.870Z · LW(p) · GW(p)

(unless the ninjas are sailor-ninjas on a navy ship - for some reason, despite the fact that any Japanese power with a navy never employed ninjas as far as has been recorded)

Was not my counterfactual scenario. It was someone else describing a counterfactual where ninjas are travelling by sea to a ninja-convention. My only contribution there was to (implicitly) assert that the counterfactualising operation that preserves the most probability mass to produce that scenario would not result in ninjas travelling on unarmed ships.

To say that anyone who doesn't think real pirates are "ridiculously worse [in an unspecified manner]" than ninjas "is not thinking well at all" seems quite absurd to me, and smacks of "ninja fanboyism".

I had never really considered it before daenerys proposed the idea of actually considering the question. I don't particularly accept the charge "in an unspecified manner", but I certainly haven't gone into detail. It roughly pertains to how one reasons about counterfactual and hypothetical situations. One can either take the counterfactual as an excuse to make up whatever story suits one's position, or apply the counterfactualising operation in a manner that preserves the most probability mass.

I consider this question a valid diagnostic tool in that regard. In fact I went ahead and used it as such. I made this very meta-claim on facebook and when anyone disagreed I unfriended them. I call it either "evaporative cooling of styles of thinking in my chosen peers" or "being grumpy and getting rid of people who are likely to say annoying and wrong things in the future".

Replies from: Hul-Gil, TheOtherDave
comment by Hul-Gil · 2012-04-06T02:15:14.003Z · LW(p) · GW(p)

Was not my counterfactual scenario. It was someone else describing a counterfactual where ninjas are travelling by sea to a ninja-convention. My only contribution there was to (implicitly) assert that the counterfactualising operation that preserves the most probability mass to produce that scenario would not result in ninjas travelling on unarmed ships.

I edited that; I think the daimyos did have their own navies. I'm not actually certain about that, though, and I don't feel like looking it up. Maybe someone who knows more Japanese history can contribute. In either case, I don't think it's possible to say which is more probable, since whether they book passage on a merchant ship, or are sent with a naval ship by a master who controls both, depends entirely on the circumstance we concoct. Historically, they could have done both, if daimyos did have navies.

Replies from: wedrifid
comment by wedrifid · 2012-04-06T02:23:18.556Z · LW(p) · GW(p)

depends entirely on the circumstance we concoct

And in my experience the way people go about concocting such circumstances (in general, over all counterfactuals) matters a lot to me both in terms of how much respect I can maintain for them as a thinker and how much I can tolerate their presence. For the purpose of answering a specific question not all concocted circumstances are equal!

comment by TheOtherDave · 2012-04-06T02:12:34.640Z · LW(p) · GW(p)

Damned ninjas! Get off my lawn!

Replies from: wedrifid
comment by wedrifid · 2012-04-06T02:16:44.499Z · LW(p) · GW(p)

Damned ninjas! Get off my lawn!

Better your lawn than inside your bedroom! ;)

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-06T02:25:40.831Z · LW(p) · GW(p)

That's an entirely different genre.

Replies from: wedrifid
comment by wedrifid · 2012-04-06T02:31:20.003Z · LW(p) · GW(p)

That's an entirely different genre.

That one took me a few seconds to decipher! :)

comment by TheOtherDave · 2012-03-29T14:15:38.464Z · LW(p) · GW(p)

OTOH, fictional ninjas have more Rule of Cool mojo than fictional pirates.

Replies from: AspiringKnitter
comment by AspiringKnitter · 2012-03-30T00:19:20.513Z · LW(p) · GW(p)

Fictional ninjas are handicapped by the fact that they attack one at a time. Fictional pirates, meanwhile, in addition to not suffering from scurvy like you'd expect, have improbably awesome fencing powers.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-30T00:27:47.396Z · LW(p) · GW(p)

Well, how else can they dispose of their stolen loot?
Or keep their dogs in their yards?

Replies from: AspiringKnitter
comment by AspiringKnitter · 2012-03-30T00:34:27.732Z · LW(p) · GW(p)

Well, there you have it, then. They can hide behind their magical fences.

comment by Paul Crowley (ciphergoth) · 2012-03-29T13:41:22.649Z · LW(p) · GW(p)

I believe Dr McNinja has something to say about this.

comment by CronoDAS · 2012-04-02T10:46:54.245Z · LW(p) · GW(p)

Short exercise. Does anyone actually think pirates stand a chance against professionally trained assassins?

Pirates have guns.

comment by Bugmaster · 2012-03-29T23:13:17.418Z · LW(p) · GW(p)

Does anyone actually think pirates stand a chance against professionally trained assassins?

Technically, both pirates and ninjas, as they are depicted in popular folklore, are fictional entities. While people named "pirates" and "ninjas" did exist (and do exist to this day), they bear little resemblance to grog-swilling scallywags or black-clad wire-fu artists or what have you. Thus, before we can answer your question, we need to nail down exactly what you mean by "pirates" and "ninjas"; what capabilities you expect these fictional combatants to possess, and then go from there.

Replies from: orthonormal, wedrifid
comment by orthonormal · 2012-03-30T00:10:43.349Z · LW(p) · GW(p)

I can't believe you missed the chance to say, "Taboo pirates and ninjas."

Replies from: Humbug, Eliezer_Yudkowsky, thomblake, Blueberry
comment by Humbug · 2012-03-30T08:40:35.205Z · LW(p) · GW(p)

I can't believe you missed the chance to say, "Taboo pirates and ninjas."

"Pirates versus Ninjas is the Mind-Killer"

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-30T00:19:26.120Z · LW(p) · GW(p)

Doesn't work - the accent is on the second syllable.

Replies from: ciphergoth, thomblake
comment by Paul Crowley (ciphergoth) · 2012-04-01T15:41:05.634Z · LW(p) · GW(p)

Pirates and ninjas taboo - oh my!

comment by thomblake · 2012-03-30T00:20:41.263Z · LW(p) · GW(p)

See, in context I was tricked into reading that Taboo, which is totally not natural.

comment by thomblake · 2012-03-30T00:17:32.654Z · LW(p) · GW(p)

Taboo pirate ninja badger mushroom narwhal!

comment by Blueberry · 2012-03-30T00:13:41.159Z · LW(p) · GW(p)

Oooh yes. Upvoted for awesomeness.

comment by wedrifid · 2012-03-30T03:26:43.472Z · LW(p) · GW(p)

Technically, both pirates and ninjas, as they are depicted in popular folklore, are fictional entities.

We already specified that we aren't talking about the fictional entities. We're talking wikipedia pirates and wikipedia ninjas. That's what you get by default if you taboo 'real'.

Thus, before we can answer your question, we need to nail down exactly what you mean by "pirates" and "ninjas"; what capabilities you expect these fictional combatants to possess, and then go from there.

As far as I'm concerned the question is trivially resolved to an answerable counterfactual, even more trivially answered, and in general a solved problem. The whole "spin it into a deep question of ambiguity and definition" move is one of the first things I discarded as part of the "ACTUALLY try to answer the question" part of daenerys's game, under "Being proud of clever retorts, etc."

It is my evaluation that a group who does not execute the reasoning within the minute or so allotted:

... has simply failed at the rationalist game and requires more practice.

Replies from: Bugmaster
comment by Bugmaster · 2012-03-30T18:36:46.712Z · LW(p) · GW(p)

We're talking wikipedia pirates and wikipedia ninjas.

As far as I understand, these two groups have never fought each other in any engagement, having neither the motivation nor the skills to do so. I cannot envision a realistic and deliberate ninja vs. pirate engagement, unless one or both of the parties were drunk or stoned out of their minds. Naturally, I'd assumed you were talking about fictional scenarios.

Replies from: wedrifid
comment by wedrifid · 2012-03-30T18:48:12.527Z · LW(p) · GW(p)

And, according to my analysis of daenerys' game, this constitutes an all-too-common failure mode, for the aforementioned reasons. My estimation of the capabilities of lesswrong participants to handle this sort of question was grossly miscalibrated.

Replies from: Bugmaster
comment by Bugmaster · 2012-03-30T18:56:59.383Z · LW(p) · GW(p)

It sounds like you're saying, "no one is smart enough to comprehend my brilliant point, despite the fact that it's 100% clear and obvious since I am such a great communicator". I believe there may be alternative explanations for people disagreeing with you, however...

Replies from: wedrifid
comment by wedrifid · 2012-03-30T19:22:20.369Z · LW(p) · GW(p)

It sounds like you're saying, "no one is smart enough to comprehend my brilliant point, despite the fact that it's 100% clear and obvious since I am such a great communicator".

If I said something that means that then I am indeed as bad at communicating as you insinuate. I actually maintain that I haven't made much of a point at all and thought that my initial claim verged on patronizing for stating the obvious.

I also note that the problem isn't that "no one comprehends"... from what I have seen most people here do. It is a matter of whether there are enough exceptions that the conversation can still get derailed into failure modes. Surely you at least agree that the conversation in this thread has been derailed at places? For example, this immediate context appears to include fairly unadorned indicators of disrespect. Nobody likes to hear those - even when they are plain and sincere expressions of the level of disagreement.

Replies from: Bugmaster
comment by Bugmaster · 2012-03-30T20:35:38.402Z · LW(p) · GW(p)

and thought that my initial claim verged on patronizing for stating the obvious.

What's obvious to one person may sound ridiculous to another. Often does, in fact.

Surely you at least agree that the conversation in this thread has been derailed at places?

What, do you mean the original thread, or the pirate vs. ninja thread ? Heh. Anyway, the answer is "yes" to both.

For example, this immediate context appears to include fairly unadorned indicators of disrespect.

Patronizing posts tend to invoke that kind of an atmosphere. In any case, you keep saying that disagreeing with your conclusions is a "failure mode". You may well be right, but so far, I'm not convinced that this is the case. I am prepared to be convinced, however.

comment by APMason · 2012-03-29T12:27:54.630Z · LW(p) · GW(p)

In equal numbers? At sea? Are the ninjas manning a ninja-vessel?

Replies from: wedrifid
comment by wedrifid · 2012-03-29T12:30:45.538Z · LW(p) · GW(p)

In equal numbers? At sea? Are the ninjas manning a ninja-vessel?

Why would the ninjas try to fight pirates at sea? Are they intellectually disabled ninjas? Clearly they would wait until the pirates land, get ridiculously drunk and start carousing (and raping and pillaging if they are into that sort of thing). Then poison them or stab them in the back.

Replies from: APMason, dlthomas
comment by APMason · 2012-03-29T15:11:33.329Z · LW(p) · GW(p)

Well, you see what you've done here is you've put me in a situation where I'm in an argument I don't care about, I agree with you, and yet some part of my brain is still playing Pirate's Advocate. Thanks very much.

Replies from: wedrifid
comment by wedrifid · 2012-03-29T15:15:00.832Z · LW(p) · GW(p)

an argument I don't care about

Sacrilege! Next you will be telling me you don't care about astronauts vs cavemen either! (If they are fighting in space the cavemen lose. Just sayin'.)

comment by dlthomas · 2012-03-29T14:15:48.553Z · LW(p) · GW(p)

Maybe they're just sailing on their way to a ninja convention, when suddenly - bam, pirates!

Replies from: wedrifid
comment by wedrifid · 2012-03-29T14:45:32.817Z · LW(p) · GW(p)

Maybe they're just sailing on their way to a ninja convention, when suddenly - bam, pirates!

(Laughing.)

And the pirates board the ship trying to take the loot when suddenly the ninjas casually slaughter all the pirates, board their ship (the pirates probably crippled the ninja ship before boarding) and sail the pirate ship away.

Mind you I'm still giving an idiot ball to all the ninjas for sailing in a ship that isn't heavily armed.

Replies from: dlthomas
comment by dlthomas · 2012-03-29T14:51:18.403Z · LW(p) · GW(p)

I think the group with practice winning naval engagements would more likely win the naval engagement.

Replies from: wedrifid
comment by wedrifid · 2012-03-29T15:13:24.216Z · LW(p) · GW(p)

I think the group with practice winning naval engagements would more likely win the naval engagement.

And by this you mean the Navy vessel that is transporting the ninjas would more likely win the naval engagement? (The engagement that the pirates would never initiate because they are experienced in not picking fights that they will lose).

Or are we still talking about the landlubber ninjas who have never learned to sail and are for some reason trying to sail themselves across a sea for conference purposes? I'm imagining them up in the rigging comically trying to sail and needing to use all their ninja agility to dodge the yardarms that are swinging around wildly as they try to figure out which way they are supposed to pull the ropes...

Basically, pirates can beat ninjas if they load up their cannons with idiot balls for ammunition and the ninjas eagerly run out to catch them.

Replies from: TheOtherDave, dlthomas
comment by TheOtherDave · 2012-03-29T16:05:18.054Z · LW(p) · GW(p)

International ninja conferences would be fun, though. (Once the transportation difficulties were ironed out.)

comment by dlthomas · 2012-03-29T16:39:18.584Z · LW(p) · GW(p)

Most people are landlubbers. Most landlubbers going on a voyage would be on a passenger ship, not a navy vessel. I see no reason either of these wouldn't apply to ninja. In this particular situation - one or several ninja on a passenger vessel raided by pirates - I expect the pirates to win. Of the circumstances where ninja and pirates would be in the same place at the same time, this seems to be the likeliest one in which the pirates would have the edge.

Replies from: wedrifid
comment by wedrifid · 2012-03-29T17:09:24.372Z · LW(p) · GW(p)

Most people are landlubbers. Most landlubbers going on a voyage would be on a passenger ship, not a navy vessel. I see no reason either of these wouldn't apply to ninja.

I suggest that the reference class 'most people' is the wrong reference class from which to make predictions about ninja tactical decision-making.

Replies from: hairyfigment, dlthomas
comment by hairyfigment · 2012-03-29T19:41:12.614Z · LW(p) · GW(p)

I think you overestimate the importance of ninjas to the people who command navies.

The warlord Oda Nobunaga's notorious reputation led to several attempts on his life. In 1571, a Kōga ninja and sharpshooter by the name of Sugitani Zenjubō was hired to assassinate Nobunaga. Using two arquebuses, he fired two consecutive shots at Nobunaga, but was unable to inflict mortal injury through Nobunaga's armor.[51] Sugitani managed to escape, but was caught four years later and put to death by torture.[51] In 1573, Manabe Rokurō, a vassal of daimyo Hatano Hideharu, attempted to infiltrate Azuchi Castle and assassinate a sleeping Nobunaga. However, this also ended in failure, and Manabe was forced to commit suicide, after which his body was openly displayed in public.[51] According to a document, the Iranki, when Nobunaga was inspecting Iga province — which his army had devastated — a group of three ninja shot at him with large-caliber firearms. The shots flew wide of Nobunaga, however, and instead killed seven of his surrounding companions.[52]

comment by dlthomas · 2012-03-29T19:43:23.231Z · LW(p) · GW(p)

The reference class of "most people" is a better starting point than maximum entropy. Particularly in light of the fact that significant visible differences of behavior would work against the whole secrecy thing.

Replies from: wedrifid
comment by wedrifid · 2012-03-30T03:52:48.674Z · LW(p) · GW(p)

The reference class of "most people" is a better starting point than maximum entropy.

I have a higher standard than maximum entropy.

comment by wedrifid · 2012-03-29T17:12:56.011Z · LW(p) · GW(p)

Short exercise.

I was mistaken. I'm amazed how much debate the question prompted here even with this framing. I really thought it was just a closed question.

Replies from: thomblake, Vaniver
comment by thomblake · 2012-03-29T17:34:21.287Z · LW(p) · GW(p)

I'm amazed how much debate the question prompted here even with this framing.

I'm amazed at your amazement.

I'd have expected at least this much out of any such silly comparison, even here. Try:

  • Raptors vs. Jesus
  • Twinkie vs. cockroach
  • Star Trek vs. Pop Tarts

Or any other conflict between trochees.

Replies from: wedrifid
comment by wedrifid · 2012-03-29T17:39:36.716Z · LW(p) · GW(p)

Most of your amazement can be explained by you thinking that 'pirates vs ninjas' belongs in the same reference class as:

  • Raptors vs. Jesus
  • Twinkie vs. cockroach
  • Star Trek vs. Pop Tarts

That seems utterly ridiculous. Are you being disingenuous or are you serious?

Replies from: thomblake
comment by thomblake · 2012-03-29T18:46:16.677Z · LW(p) · GW(p)

I'm being serious. See http://xkcd.com/856/

I think the relevant reference class is "trochees with a 'vs.' between them", and you can find many such engaging debates on the Internet. I'm skeptical that anyone would care to compare pirates and ninjas if they were called something else.

Replies from: Alejandro1
comment by Alejandro1 · 2012-03-29T19:13:33.466Z · LW(p) · GW(p)

I don't think the trochee pattern is so critical to these debates. Cavemen vs. astronauts is a notorious counterexample.

Replies from: countercheck, pedanterrific, thomblake
comment by countercheck · 2012-03-30T01:12:53.895Z · LW(p) · GW(p)

Cavemen is a trochee. Astronauts is a dactyl, which is why it's less funny, but still close enough to a trochee.

Replies from: thomblake
comment by thomblake · 2012-03-30T01:16:46.397Z · LW(p) · GW(p)

Well cavemen are well-known in the literature for their pterodactyl-defeating skills, so I suppose that would generalize to other dactyls.

comment by pedanterrific · 2012-04-06T02:38:36.006Z · LW(p) · GW(p)

I wonder why no one ever phrases it cavemen vs. spacemen.

comment by thomblake · 2012-03-29T19:26:39.879Z · LW(p) · GW(p)

Oddly, I don't think I've encountered anything vs. astronauts before.

Replies from: thomblake
comment by thomblake · 2012-03-30T00:34:25.048Z · LW(p) · GW(p)

I retract my statement. I've seen the entirety of Angel.

Replies from: wedrifid
comment by wedrifid · 2012-03-30T19:11:19.538Z · LW(p) · GW(p)

I retract my statement. I've seen the entirety of Angel.

Come to think of it, the only times I've heard the debate are between Angel and Spike, and between Bones and Booth. I hadn't caught the easter egg until now.

comment by Vaniver · 2012-03-29T19:18:36.782Z · LW(p) · GW(p)

I was mistaken. I'm amazed how much debate the question prompted here even with this framing. I really thought it was just a closed question.

Like thomblake, I'm amazed at your amazement, but on a different track. Unless you're an expert in the histories of navies, espionage, and other miscellany, why would you expect your intuition to both identify closed questions and the correct answers?

Replies from: wedrifid
comment by wedrifid · 2012-03-30T19:08:51.313Z · LW(p) · GW(p)

Like thomblake, I'm amazed at your amazement, but on a different track. Unless you're an expert in the histories of navies, espionage, and other miscellany, why would you expect your intuition to both identify closed questions and the correct answers?

The experts you appeal to are indeed far more impressive than I, and I wouldn't dream of claiming their status as my own. That said, the fact that very impressive people can also answer trivial questions doesn't make the questions non-trivial. (Or useful, for that matter.)

comment by [deleted] · 2012-03-29T13:05:28.365Z · LW(p) · GW(p)

A self-taught dirty fighter/swashbuckler against a professional assassin with quality weapons and poisons isn't much of a fight.

Replies from: wedrifid
comment by wedrifid · 2012-03-29T13:26:53.013Z · LW(p) · GW(p)

A self-taught dirty fighter/swashbuckler against a professional assassin with quality weapons and poisons isn't much of a fight.

Exactly!

Replies from: Bugmaster
comment by Bugmaster · 2012-03-29T23:18:16.823Z · LW(p) · GW(p)

Upvoted for delicious ironic ambiguity.

comment by Vladimir_Nesov · 2012-03-24T12:00:26.181Z · LW(p) · GW(p)

This seems like a comparatively reliable procedure: imagine a collection of possible worlds generated by possible actions; explain what future events distinguish between these worlds in a way that makes one of them preferable to the others; then choose the action that leads there.

Paying attention to events that distinguish these possible futures from each other guards against errors such as comparing to the status quo or the should-world (neither of which is among the possible futures), or, worse, comparing an arbitrarily picked-out event (in one of the possible futures) to an arbitrary anchor (i.e., a cost that feels excessive in some absolute way, and not by comparison to alternatives). Focusing on future events, not past events or the actions themselves, guards against deontological and identity-based arguments such as "this is a proper action", "he shouted first", or "because I'm a human being".

Saying "positive consequence" sounds like a bad habit to me: positive compared to what? The comparison should be among the alternatives, without anchoring on some baseline of neutrality such that some consequences are more "positive" than that.

comment by TheDave · 2012-04-13T19:41:38.288Z · LW(p) · GW(p)

Concreteness Game

The object of this game is to train players to generate examples for explaining concepts quickly. The game requires at least two people, but may work better with groups of three or four.

To play, one of the players (Asker) names a concept to be explained, such as "How do you traverse a linked nodal network?", "Explain the law of Conservation of Energy", or "What constitutes good financial advice?"

The other player (Generator) then tries to explain the concept/skill/other by using a nearby object to assist. The Generator should relate the example back to the original query and explain how the example demonstrates the experimental predictions that the concept makes (see Extended Example below). The Asker listens to the explanation, and once they feel the Generator has explained the concept fully, they indicate that the Generator should pick another object and start again. The Asker should also exercise the option of asking follow-up questions about the Generator's example, requesting clarification or elaboration by interacting with the object in some way (examples in the Extended Example section below).

Extended Example:

Asker: "Okay, let's try this. Explain Newton's Third Law." Generator: "All right, hmm... Okay, take that big old oak tree as an example! Now, imagine that I want to push this tree over. If I push on it, it doesn't really move anywhere, and neither do I. That's because the tree has a really strong root system to prevent it from being tipped over, while I'm braced against the ground. That means the force I put on the tree doesn't go anywhere. Now, what happens if I don't brace against the tree? I'm going to try and push with the same strength that I did just now. If the tree didn't exert any force on me, then why did it force my arms away strongly enough to tip me over?" Asker: "What happens if you were standing on ice instead of dirt?" Generator: "Hmm... If I wanted to avoid spinning away from the tree, then I couldn't push against it too hard. If I did want to, though, then I could turn this whole area into a pinball machine by pushing off of all the trees!" Asker: "Okay, another example this time!" Generator: "All right, look at that guy in his kayak over there. Every time he uses his paddle to push himself forward, you see a whirlpool where his paddle was. That's because he's pushing against a lot of water with the flat of the paddle. The paddle pushes on the water, and the water pushes back on the paddle. You can see this because he moves forward while the water is disturbed. If the law didn't apply then the water shouldn't-" Asker: "Okay, you've got that one, give me another one!" Generator: "Hmm..."

Through playtesting, we've noticed that there are two broad categories of useful questions in this game. The first type is "You have a rule, now apply it with the objects in front of us". For example, one might describe good financial advice by pointing at a nearly-broken-down refrigerator: "Some advice you have to use right away, like take-out food before it spoils. Other advice will keep basically forever, like ketchup maybe, while other advice will let you repair the refrigerator to keep everything fresh for longer. What you want is to avoid poisoning yourself and keep healthy, so good advice is anything that keeps you or your friends from being poisoned. Stock tips are like take-out food, while better mental models are like fixing the fridge." The other type of useful question seems to be descriptive in nature. "Explain the life-cycle of a caterpillar": "Well, imagine riding around on a cheap kid's bicycle and picking up tubing, gears, and other supplies as you ride around. Then you take the bike and everything you collected into a room, work for a while, and you come out with an awesome racing bike that lets you do things you never could before".

Example questions:

  • Explain Conservation of Energy.
  • What constitutes good scientific practice?
  • Explain the sunk cost fallacy.
  • How do you lift heavy things safely?
  • What constitutes an efficient algorithm?
  • Explain (Hansonian) signaling.
  • Explain priming and how expectations about the quality of a thing can affect your assessment of its quality.
  • Describe how Omega might optimize for happiness.
  • Explain what a set (in set theory) is and some of its basic properties.
  • Explain the fallacy of gray.

Theory

(SPOILER ALERT! This section contains material likely to prime your reactions.)

This game is designed to help people provide concrete examples on demand. The expectation is that forcing the players to compare their mental models against physical objects makes their explanations more concrete, because physical objects can be interacted with. If a Generator relates a tree to AI theory, the question "What is a branch and what happens when I push on it?" seems to yield concrete answers more often in practice than "What are the features of AI theory and why do they matter?". Follow-up questions seem to greatly increase the fun of the game: many more grins were observed when the Asker occasionally quit saying "Okay, got it, another example now!" and instead interacted with the physical model in some way. Playtesting also seemed to show that people really enjoyed coming up with examples up to their third; the fourth became difficult to generate, and the fifth was simply not all that fun to force ourselves to come up with. Priming may have been an issue, but initial results suggest that asking for roughly three examples is the most fun before moving to a different question.

EDIT: Formatting

comment by James_Miller · 2012-03-24T19:15:05.949Z · LW(p) · GW(p)

You could use mazes where your score is -(total distance traveled). First, give a simple maze with two obvious paths, A and B, where Path A is much shorter. Then give a second maze identical to the first, except that you are taking over from another player who has already gone down Path B, and the shortest way to the exit is to double back and then go down Path A. Then give the same maze, but now there is an obstacle on Path A that you must go around if you take this path, so it's now optimal to go down Path B. The obstacle was placed there for some unfair reason (perhaps only women face the obstacle). Next, the same situation as before, but you have the ability to erase the obstacle at no cost. Finally, the same as before, but now you are told that the maze is only a map and you can't erase the obstacle on the map. You should have people play for real money to make the game more emotionally "meaningful." Also, you could plant someone who makes arguments against the consequentialist outcome using analogies to real-life situations (e.g., women should never give in to discrimination).
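
A sketch of how the second maze isolates the sunk-cost check - the distances below are invented, since the comment above doesn't specify numbers:

```python
# Toy numbers for the second maze: you take over six steps down Path B, and
# your score is -(total distance traveled). All distances are invented.
steps_already_taken = 6        # sunk: already spent going down Path B
remaining_via_B = 14           # press on down Path B to the exit
remaining_via_A = 6 + 4        # double back (6), then take the short Path A (4)

# Checking consequentialism: the 6 steps already taken appear in both totals,
# so only the *future* distance can distinguish the two options.
score_B = -(steps_already_taken + remaining_via_B)   # -20
score_A = -(steps_already_taken + remaining_via_A)   # -16
print("double back to Path A" if score_A > score_B else "continue down Path B")
```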

Replies from: handoflixue
comment by handoflixue · 2012-03-29T22:32:23.173Z · LW(p) · GW(p)

(e.g., women should never give in to discrimination)

And then you collapse in to short vs long-term gains. Giving in to discrimination is a net long-term loss, since then you'll face it in the future...

Also, for initial teaching, you want to present the SIMPLEST possible version. Rationality 102 is when you start introducing enemy agents, and then eventually combine that idea with this one after BOTH "sabotaging agents" and "consequentialism" are understood.

comment by velisar · 2012-03-24T11:31:18.010Z · LW(p) · GW(p)

Kahneman suggests such an exercise for groups, after pointing out that organizations generally act more rationally than individuals: the devil's advocate role, and thinking about the worst possible outcome. We don't always have the luxury of having others near us to check our thoughts, but we often have imaginary conversations with friends or parents. So it shouldn't be very difficult to assign the devil's advocate position to an imaginary voice. That should put in perspective the way we feel about the subject. It is a basic means of delaying the strong coherence of the first good narrative.

Maybe it would be great to have an imaginary Bayesian friend...

comment by Eneasz · 2012-03-30T18:10:57.213Z · LW(p) · GW(p)

This wouldn't work as the only exercise, but could be useful if paired with another.

Presumably all students have other things they could be doing with their time, some of it possibly fun. Near the end of the lesson, perhaps just before the last exercise, with maybe 15-20 minutes left in the session, present this option: You can leave right now and get on with your day.

Obviously this is always an option - no one is required to stay against their will - but it's usually considered bad form, and it's never given as an explicit option. Tell everyone to think about it for at least a full by-the-clock minute: what they could learn in the next 20 minutes vs. what else they could do, and the consequences of both options. Then let them make their decision.

Afterwards both those who stayed and those who left may rethink their choice at least once and wonder if their choice was the best one. If nothing else, it could be memorable.

Replies from: handoflixue
comment by handoflixue · 2012-04-03T00:16:41.848Z · LW(p) · GW(p)

That is an AWESOME way to grade the success of the lesson!! If half the audience leaves, then either the audience still isn't making good choices (in which case you clearly didn't teach them well), or they made the correct choice and your lesson genuinely isn't worth their time.

(Or you're teaching an audience that isn't ready for it, but that's still a failing in the teaching, just on a more administrative "select your audience better" level.)

comment by jimrandomh · 2012-03-24T06:13:14.399Z · LW(p) · GW(p)

I think of this as "looking effectward", one of the basic directions in concept space (opposite causeward, making it the inverse operation of asking "why").

comment by Vaniver · 2012-03-24T05:40:41.887Z · LW(p) · GW(p)

Another inferential path: it may be valuable to differentiate them as attitudes and events. If my motivation for getting a PhD is "I will feel terrible if I do not get a PhD", that's an attitude motivation, which in theory I have internal control over. If my motivation for getting a PhD is that I will get hired for a different sort of job, that's an event motivation, for which control is primarily external. I don't have control over whether or not a variety of job requires a PhD, but I do have control over whether or not my attitude will be negative if I don't have a PhD.

The obvious social mirror of the Really Getting Bayes game is to have people pair up and dissect motivations for something, trying to identify what's event-based and what's attitude-based (or however you're presenting it). It will help for them to be motivations they're feeling, rather than descriptions they're reading.

Asking them to provide examples from their own lives is potentially useful and will promote bonding between participants, but requires heavy investment by participants. Another approach is to offer them a choice between gambles, where all the potential consequences are as apples-and-oranges as you can make them. Some more trickery could be used to provide non-consequentialist reasons to pick one gamble over another - perhaps give participants' names to the gambles, but don't require them to pick the gamble with their name.

comment by daenerys · 2012-03-28T14:49:42.716Z · LW(p) · GW(p)

I think an important pre-skill is to decide what your goals are BEFORE checking the consequences. Otherwise, you can easily retroactively change your goals to fit whatever action you want.

For example, in the sunk costs PhD scenario, it would be easy for someone to say something like "If I pursue my PhD, it will make my family proud. This is very important to me, so I will pursue my PhD." However, if you asked them what their goals were BEFORE they listed the consequences of each action, they probably would not list "Make family proud", but would list things like "Job I enjoy" and "Make more money in my career".

The trap is that it can be very difficult to admit to your ACTUAL goals. For example, it took a while for me to realize that I don't ACTUALLY care about making money once I have enough to survive (unless phrased in a way such as "If I do Job B, I only have to work 12 hours/week" or somesuch).

So I think figuring out what your actual goals are (not the cached thoughts of your goals) is a set of exercises in itself, but those exercises should probably be done before the Consequentialism exercises.

Replies from: handoflixue
comment by handoflixue · 2012-03-29T22:42:34.435Z · LW(p) · GW(p)

Conversely, you might not recognize your true rejection until you go "well, but wait, I don't want to abandon my PhD - my family wouldn't be proud of me." If you get stuck on that, it's possible that "make your family proud" really IS important to you.

I do agree that it's a useful idea, but new insights aren't necessarily ex post facto rationalizations.

comment by Daniel_Burfoot · 2012-03-26T21:46:49.887Z · LW(p) · GW(p)

an audience of non-Sequence-reading experimental subjects, who were mostly either programmers or in other technical subjects, so I could go through the math fairly fast

I don't have suggestions on the main question, but I strongly recommend that you design the curricula to be consumable by corporate executives. If you can convince companies that they need to send their execs to week-long rationality seminars, to help them improve their decision-making skills, you are going to make megabucks.

comment by fubarobfusco · 2012-03-24T04:11:58.209Z · LW(p) · GW(p)

We did the Really Getting Bayes game at the Mountain View meetup this week. My impression of it was that the explanation was at first a little unclear, but that once we had gotten the sense of it, it was worthwhile.

One thing that I realized during the game was the degree to which I was using the availability heuristic to provide my likelihood ratios. For instance, one object I was given was either an electrical extension cord or an audio cable. In coming up with RGB likelihoods, I thought, "Electrical extension cords are usually black or white, hence not RGB; whereas audio cables are often RGB" — by using instances I could bring readily to mind to come up with this thought. My partner then told me the item was RGB, and I correctly predicted it as an audio cable. However, the electrical cord in his other hand was in fact orange, not black or white. While my availability heuristic worked for the purpose at hand, it clearly would have been wrong for other purposes.
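
For concreteness, here is that update as odds-form arithmetic; every number is an invented stand-in for the availability-based judgments described above:

```python
# Odds-form Bayes update for the cable anecdote. All probabilities are
# invented to match the qualitative judgments in the comment above.
prior_odds = 1.0                 # extension cord : audio cable, even odds
p_rgb_given_cord = 0.05          # extension cords "usually black or white"
p_rgb_given_audio = 0.60         # audio cables "often RGB"

# Learning "the item is RGB" multiplies the odds by the likelihood ratio.
posterior_odds = prior_odds * (p_rgb_given_cord / p_rgb_given_audio)
p_cord = posterior_odds / (1 + posterior_odds)
print(f"P(extension cord | RGB) = {p_cord:.2f}")  # ~0.08, so predict audio cable
```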

Replies from: Academian
comment by Academian · 2012-03-24T20:44:26.848Z · LW(p) · GW(p)

In case this wasn't done, a physical demonstration of a game like this at first is important, with a concurrent verbal description to tag it for indexing: "Step 1: we do this", "Step 2: we do this." Showing beats telling alone. Verbal or written instructions are a low-bandwidth form of communication that is better used for tagging/clarification of demonstration data (i.e. naming the steps while you do them) or error-correcting after a demonstration (i.e. people can look stuff up if they get confused).

comment by duckduckMOO · 2012-03-28T20:23:12.936Z · LW(p) · GW(p)

Why is teaching people to think like consequentialists a good idea again? Serious question.

If they're (relatively successful) mathematicians and programmers, I don't see how it could go wrong, but I'm awfully worried about some of the rest of the population - specifically, about people not being able to sustain it without letting other things slip.

second edit: I should clarify. It's teaching the habit that I'm thinking about. Everyone should be able to think like a consequentialist, but is instilling these reflexes gonna be a net positive?

Replies from: daenerys, JoachimSchipper, army1987
comment by daenerys · 2012-03-29T19:54:48.309Z · LW(p) · GW(p)

Why is teaching people to think like consequentialists a good idea again? Serious question.

Devil's Advocating Here:

I do think we need to not forget that most people's minds do NOT operate like the typical LWian!

I know this personally, in that I tend to make intuitive-based decisions (and by intuitive, I mean things like waking up one morning thinking "I should eat less meat", and so becoming a vegetarian for the next 8 (so far) years.)

Decisions I have made intuitively like this include: atheism, vegetarianism, not drinking alcohol (that one only lasted 7 years), quitting grad school, not having children, polyamory, pretty much every career decision, liking rationality.

The social situations were different enough for each of these for me to think that social concerns were not the main trigger, but I somehow feel like I've ended up making rational-style choices by following my intuition (and I recognize that I probably just lost all credibility I might have had here by writing this post :P).

The upside of this is that since I am always doing what I feel like, I rarely feel like I am having to fight myself. For example, giving up meat was amazingly easy for me, because it was like the decision had already been made.

As someone who really enjoys learning about rationality, I can still see how it wouldn't mesh with many people's methods of living without a complete lifestyle overhaul (which is an unlikely result of a single class).

But I do not AT ALL think that this means we shouldn't teach people about consequentialism or other rationality topics (I am all about spreading rationality); I just think we need to make sure that we do so in a way that can encompass a wide range of people. First figure out what percentage of the overall population you want to be accessible to (say, the top 60% intellectually, minus the 20% most intuitive types), and make sure that your presentations and materials are able to reach whatever your target is.

Replies from: JoachimSchipper
comment by JoachimSchipper · 2012-03-30T09:16:02.852Z · LW(p) · GW(p)

Your intuition appears to like LW-approved things, and you are on LW. Don't you think that learning about consequentialism might beneficially rewire one's intuition?

Replies from: daenerys
comment by daenerys · 2012-03-30T14:29:45.129Z · LW(p) · GW(p)

If you are implying that learning about rationality on LW made my intuitions more rational, then you should know that I made all those decisions long before joining LW about 5 months ago.

However, I wouldn't be surprised if the fact that LW conforms to most of my intuitions (except the whole anti-deathism and singularity stuff) was one of the reasons I joined the site. I remember thinking "OMGWTF there are people who are OPENLY POLY on here, outside of a poly-specific group!!!" I'm sure it doesn't hurt that I find rationality and psychology to be amazingly interesting.

Replies from: JoachimSchipper
comment by JoachimSchipper · 2012-04-02T13:48:41.985Z · LW(p) · GW(p)

Honestly, I'm not sure what I was thinking when I wrote that comment. I'm aware you joined rather recently...

Perhaps "intellectual beliefs and intuitions seem to be correlated, which suggests that one can rewire one by tinkering with the other."

comment by JoachimSchipper · 2012-03-29T07:27:56.793Z · LW(p) · GW(p)

Umm, are you under the impression that (the non-mathematical-ish part of the population/anyone) is constantly operating near their sustainable cognitive maxima? So near that adding a nearly-automatic reflex would push them over?

Neither looking around nor introspection suggests that that is true.

Replies from: scav
comment by scav · 2012-03-29T08:19:58.198Z · LW(p) · GW(p)

Indeed. That would imply that our shared goal of raising the sanity waterline would cause most of the population to drown :)

Mind you, I like that the OP is asking what the consequences would be. However, my guess is: more people making slightly better decisions some of the time; and with no obvious mechanism for "letting other things slip", I don't see a downside.

Replies from: AspiringKnitter, handoflixue
comment by AspiringKnitter · 2012-04-04T03:39:18.681Z · LW(p) · GW(p)

What if the problem isn't that it's too cognitively taxing, but that, applied in the sloppy way most people apply their heuristics, it could lead to irrational choices or selfish behavior?

Replies from: scav
comment by scav · 2012-04-04T08:19:39.090Z · LW(p) · GW(p)

People already make irrational choices. I don't think teaching them one way to mitigate that could make things worse. What's the opposite of status quo bias? I might have some of that, whatever it is :)

comment by handoflixue · 2012-04-03T00:22:38.712Z · LW(p) · GW(p)

That would imply that our shared goal of raising the sanity waterline would cause most of the population to drown :)

Upvoted because I rather like that phrasing :)

comment by A1987dM (army1987) · 2012-03-30T09:29:49.517Z · LW(p) · GW(p)

Well, when teaching non-perfect people about consequentialism you should teach them about ethical injunctions as well. I don't think teaching both will be a net negative.

comment by twanvl · 2012-03-29T10:23:29.559Z · LW(p) · GW(p)

"You shouldn't kill because it's the wrong thing to do" can be rescued as "Because then a person will transition from 'alive' to 'dead' in the future, and this is a bad event" or "Because the interval between Outcome A and Outcome B includes the interval from Fred alive to Fred dead."

Why the fancy words? This just seems like a complicated way of saying: "Because the person would then be dead. And that is bad".

Replies from: MixedNuts, HonoreDB
comment by MixedNuts · 2012-03-29T14:17:39.027Z · LW(p) · GW(p)

People being dead is a bad outcome. Killing people is a bad action. Consequentialism does not recognize bad actions, only actions that lead to bad outcomes.

comment by HonoreDB · 2012-03-30T16:15:16.486Z · LW(p) · GW(p)

A human corpse poofing into existence from nowhere wouldn't be in itself a bad outcome. So we need to specify that the human was once alive.

An alternate phrasing might be "Because this would cause the person to die." But the word "die" is historically imprecise. Open-heart surgery stops a beating heart. Destructive uploading would cause brain death.

Replies from: wedrifid, TheOtherDave, FGonzalez
comment by wedrifid · 2012-03-30T16:17:58.195Z · LW(p) · GW(p)

A human corpse poofing into existence from nowhere wouldn't be in itself a bad outcome.

Free food! (It doesn't count as cannibalism if the corpse has never been a member of your species!)

Replies from: AspiringKnitter, TheOtherDave, Eliezer_Yudkowsky
comment by AspiringKnitter · 2012-04-04T03:21:32.205Z · LW(p) · GW(p)

Does it have kuru? I'm only open to eating healthy human flesh in this scenario.

Also, if it poofs into existence from nowhere, is it creating matter out of nothing? It's creating something that still has usable energy in it, out of nothing? That could not only end world hunger and veganism, you might be able to use the newly-created corpses for fuel in some kind of power plant. Sure, you might have to go back to steam power to make it work, and sure, human bodies might not be the optimal fuel source, but if you're getting them from nowhere, that solves all our energy woes.

It also might make the planet gain mass, eventually, if you did enough of it for long enough. Hmm. Oh, well, you can use that to make spacecraft. Maybe. Or something.

That and blood pudding. And fertilizer.

I think actually, being able to poof human corpses into existence would be an improvement over the current state of affairs. It might still be sub-optimal, but it would be better.

Now I want to be able to poof human corpses into existence from nowhere. I also think maybe I should start a list of things I've said that I wouldn't have been able to predict that I would say if asked the day before.

Replies from: Eliezer_Yudkowsky, Armok_GoB
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-04-05T20:46:41.279Z · LW(p) · GW(p)

Less Wrong: Rationality, polyamory, cannibalism.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-05T20:56:52.316Z · LW(p) · GW(p)

...though the other order would be more challenging.

comment by Armok_GoB · 2012-04-05T21:27:22.830Z · LW(p) · GW(p)

Someone needs to make an SCP based on this.

Replies from: Strange7
comment by Strange7 · 2012-06-23T22:03:10.178Z · LW(p) · GW(p)

There already is, if you're willing to combine two: http://www.scp-wiki.net/scp-871 http://www.scp-wiki.net/scp-604

comment by TheOtherDave · 2012-03-30T16:36:13.865Z · LW(p) · GW(p)

Nobody said "free." The operational costs of corpse-poofing might be prohibitive.

Replies from: Nornagest
comment by Nornagest · 2012-04-05T20:59:38.543Z · LW(p) · GW(p)

Well, there ain't no such thing as a free lunch.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-04-05T20:45:46.924Z · LW(p) · GW(p)

Less Wrong: Rationality, polyamory, cannibalism.

comment by TheOtherDave · 2012-03-30T16:34:54.432Z · LW(p) · GW(p)

"the person would then be dead" seems to pretty clearly imply that there was a person involved. In the case where a corpse poofs into existence from nowhere, there doesn't seem to have ever been a person involved. I conclude that "Because the person would then be dead" doesn't apply to the case where a corpse poofs into existence from nowhere. So I'm not sure why we would need to further specify anything here.

All of that said, the whole approach of counting deaths as negative utility seems to me to be rescuing the wrong part of the original nonconsequentialist claim in the first place.

It's clear that one consequence of increasing the human population from 1 billion people to 7 billion people is that many more people die per unit time, but it doesn't follow from that fact that we should reject increasing human population on consequentialist grounds. (It might be true that we should so reject it, but even if true it doesn't follow from that fact.)

It seems that the part we would want to rescue from a consequentialist POV is the idea that more life-years is good, so any act that reduces expected net life-years is bad... and also, perhaps, the idea that more life-years/person is good, so any act that reduces expected net life-years/person is bad.

This would also render all concerns about how we define "death" moot.
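
A minimal sketch of that accounting, with invented numbers: score worlds by expected life-years instead of deaths per unit time, and the population increase stops looking bad by construction:

```python
# Score outcomes by expected life-years, not deaths per unit time.
# Population sizes and lifespans are invented for illustration.

def life_years(population, mean_lifespan):
    return population * mean_lifespan

world_small = life_years(population=1e9, mean_lifespan=70.0)
world_large = life_years(population=7e9, mean_lifespan=70.0)

# The larger world has more deaths per year, but far more life-years,
# so this metric does not count the population increase as bad:
print(world_large > world_small)  # True
```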

comment by FGonzalez · 2012-04-03T04:28:43.622Z · LW(p) · GW(p)

You still need to weigh emotional trauma caused by corpse-poofing.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-04-05T21:01:31.835Z · LW(p) · GW(p)

Eh, people would get used to it.

comment by thescoundrel · 2012-03-26T14:49:30.822Z · LW(p) · GW(p)

To me, this comes down to what I am trying to learn as my anti-akrasia front kick: I cache the question "Why am I doing what I am doing?". While I lose some amount of focus to the question itself, I have gained key insights into many of my worst habits. For instance, my employer provides free soft drinks - I found that I would end up with multiple open drinks at my desk. The cached question revealed I was using the action of getting a drink whenever I felt the need to stretch and leave my desk. Browsing reddit too much at work - the cached question can catch it. Eventually, when I have affirmative answers for the question, it no longer even draws focus away from the task at hand - it is simply an itch that is easily scratched, as I know I am doing something that accomplishes a larger goal.

comment by johnswentworth · 2012-03-29T04:10:03.420Z · LW(p) · GW(p)

Many of the pain points listed have a common trait: the decision would seem easier with less information. For example, the PhD decision is easier if you don't know about the costs which have been sunk, the identity decisions are easier if you're not sure of your own identity, cached-thought problems are easier without having that thought cached, etc.

But we know that information should never have negative value. So why not highlight that dissonance? Imagine the following exercise:

Handout: "You spent the last 3 years working toward a PhD. You passed up a $90k job to stay in the program. Now you have 2 years left, and another $90k job offer has come your way. Do you take it?" (I don't know much about PhD programs, so feel free to imagine more plausible numbers here and add narrative).

Exercise 1: Is there any information you would prefer to not know?
Exercise 2: How much would you pay to not know it?

If you really want to have fun, give people monopoly money and let them bid to remove information from a range of scenarios. Note that we're not offering to change the facts, just to not know them.

Personally, I think this would be a lot easier if I could just forget about all that time spent in the PhD program.

At least in this case, the exercise highlights the difference between consequentialist and non-consequentialist reasons/excuses for doing things. The "how much would you pay to not know it" is especially handy, since it puts a number on that mental pain. Then we can ask whether the mental pain is worth all the money you'd lose by, in this example, staying in the PhD program.
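
To make the "information should never have negative value" point concrete, here is a minimal sketch for an ideal expected-utility maximizer. The payoffs and probabilities are invented, but the inequality at the end holds for any choice of them; a participant who would nonetheless pay to forget is exhibiting exactly the dissonance the exercise targets:

```python
# Value of (perfect) information in a toy PhD-vs-job decision.
p_good = 0.5  # invented probability that the academic job market is good
payoff = {
    ("take_job", "good"): 90, ("take_job", "bad"): 90,
    ("finish_phd", "good"): 150, ("finish_phd", "bad"): 40,
}
actions = ["take_job", "finish_phd"]

def eu(action):
    return p_good * payoff[(action, "good")] + (1 - p_good) * payoff[(action, "bad")]

# Decide without learning the state:
eu_uninformed = max(eu(a) for a in actions)  # 95

# Learn the state first, then pick the best action in each state:
eu_informed = (p_good * max(payoff[(a, "good")] for a in actions)
               + (1 - p_good) * max(payoff[(a, "bad")] for a in actions))  # 120

# "Max of averages <= average of maxes": true for any payoffs/probabilities.
assert eu_informed >= eu_uninformed
```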

Replies from: JoachimSchipper, handoflixue
comment by JoachimSchipper · 2012-03-29T07:31:11.303Z · LW(p) · GW(p)

I'm not sure this works well - last time "I" made a decision, "I" preferred five years of work for a PhD title to a $90k job now. It would seem unlikely that I'd prefer a $90k job now over two years of work for a PhD title, especially given that I'm now more sure that there are good jobs waiting for me.

Replies from: johnswentworth
comment by johnswentworth · 2012-03-31T03:24:07.527Z · LW(p) · GW(p)

Thanks, Joachim. Like I said, I don't know much about PhD programs. What would be some better numbers to make the point?

Replies from: JoachimSchipper
comment by JoachimSchipper · 2012-04-02T13:45:58.130Z · LW(p) · GW(p)

I'm sorry, but I have no idea - I'm in the Netherlands, which has a different academic/economic structure than the US.

comment by handoflixue · 2012-04-03T00:20:32.940Z · LW(p) · GW(p)

The problem is, that hypothetical doesn't really have any weight, unless you specify that having a PhD will still only produce a job worth $90K, at which point the audience has to wonder why this hypothetical fool started on their degree in the first place.

I do like the point about paying to remove information - there are times I would happily have paid to remove information from my awareness, because I was aware it was biasing me in very annoying ways. I think learning to deal with that separately would be very useful, and would probably help a lot with Consequentialism. (Maybe they're even the same issue? My intuition tells me they feel different internally, but I don't have a lot of good examples available right now.)

Replies from: Bluehawk, pnrjulius
comment by Bluehawk · 2012-04-05T17:27:02.876Z · LW(p) · GW(p)

The money isn't necessarily the only factor. Don't forget about location, working hours, stress levels, and job satisfaction. I'd take a $70k job that's intrinsically rewarding over a $100k job that "isn't really my type of environment" any day.

Of course, I'd have to KNOW that the $70k job was intrinsically rewarding and that the $100k job wouldn't be. But the hypothetical fool might know this about his PhD job prospects: for example, he wants to be an academic and the job offers so far are in unintellectual labor, or in the family business, or in a city he/she would like to avoid settling down in, or involve 50% more hours than the target job of the same wage.

I don't know if that's useful or not, but I'll err on the side of opening my mouth.

Replies from: handoflixue
comment by handoflixue · 2012-04-05T18:55:32.668Z · LW(p) · GW(p)

Research suggests that once you have sufficient income to meet your basic needs, travel time is one of the biggest factors in job satisfaction. I think we tend to focus on income because it's much easier to evaluate the actual pay rate of a job - if you're promised $100K, you can expect $100K. If you're promised 40 hours and no overtime, then you'll often find that tested. If you're promised low stress and high job satisfaction, well, good luck suing for breach of contract on that.

Replies from: TheOtherDave, Bluehawk
comment by TheOtherDave · 2012-04-05T19:17:24.515Z · LW(p) · GW(p)

I read the first sentence of this comment three times, with increasing incredulity, before my brain finally parsed "travel time" in the correct order.

I think perhaps my expectations of LW discourse are being unduly skewed by all the HPMOR discussion.

comment by Bluehawk · 2012-04-05T21:12:18.881Z · LW(p) · GW(p)

Being promised low stress/high satisfaction and having a rough idea of what kind of work or work environment is (more or less) enjoyable to you are quite different things. A given idea of which work is enjoyable won't be 100% accurate; there are always going to be surprises from both inside the mind and out. But most people have a rough idea what kind of work they prefer to do. That's where the low stress/high satisfaction predictions come from in this scenario.

Obviously one can only expect so much "enjoyment" in a work environment (and no "work" is fun and enjoyable 100% of the time), but if one type of work feels worthwhile to a given person, and the other doesn't, even if this is on the basis of inference, then for some people this is going to be a significant factor in how good/bad they feel about passing up those $90k jobs for the PhD program that might now be in question.

Replies from: handoflixue
comment by handoflixue · 2012-04-05T23:03:23.790Z · LW(p) · GW(p)

Fair point. I'm fairly young, so most of my social group is still trying to figure out what sort of work environment they want, and how to actually identify it - a lot of entry-level jobs outright lie about the work environment ("we value employee feedback, overtime only when necessary" -> "we are going to be doing another 80-hour death march this week because of an arbitrary release deadline").

comment by pnrjulius · 2012-04-04T20:44:06.891Z · LW(p) · GW(p)

In game theory, there are a number of situations where it is rational to handicap your own rationality: Reduce your number of choices, take away information, etc.

Now, in game theory you're competing against someone else, whereas in this case you're only competing against (time-indexed versions of?) yourself; but it could be that the same rules apply. Maybe it really is rational to pay to not know something.

Or maybe it's rational for a bounded agent to pay to be counter-biased: Knowing that I have this bias toward sunk costs, make me ignorant of all sunk costs.
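
A minimal sketch of the standard example, the game of Chicken, with the usual illustrative payoffs (nothing canonical about the numbers): visibly removing your own option to swerve improves your expected outcome against a best-responding opponent:

```python
# Chicken: payoffs[(my_move, their_move)] = my payoff. Invented numbers.
payoffs = {
    ("swerve", "swerve"): 0,   ("swerve", "straight"): -1,
    ("straight", "swerve"): 1, ("straight", "straight"): -10,
}

# Symmetric mixed equilibrium: each drives straight with probability q,
# chosen so the opponent is indifferent: -q = 1 - 11q  =>  q = 0.1.
q = 0.1
eu_without_commitment = -q  # each player expects -0.1

# If I visibly destroy my ability to swerve, my opponent's best response
# is to swerve, and I get:
eu_with_commitment = payoffs[("straight", "swerve")]  # +1

print(eu_with_commitment > eu_without_commitment)  # True: fewer options won
```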

Replies from: Eliezer_Yudkowsky, handoflixue
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-04-05T20:43:56.088Z · LW(p) · GW(p)

In game theory, there are a number of situations where it is rational to handicap your own rationality: Reduce your number of choices, take away information, etc.

TDT is intended to eliminate this. A TDT-agent - one that's correctly modeled by the environment, not that some other agent thinks is a CDT-agent - is supposed to never benefit from having any option taken away from it, and will never pay to avoid learning a piece of information.

Replies from: jeremysalwen
comment by jeremysalwen · 2012-04-05T20:49:11.590Z · LW(p) · GW(p)

Er, this is assuming that the information revealed is not intentionally misleading, correct? Because certainly you could give a TDT agent an extra option which would be rational to take on the basis of the information available to the agent, but which would still be rigged to be worse than all other options.

Or in other words, the TDT agent can never be aware of such a situation.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-04-05T21:22:28.740Z · LW(p) · GW(p)

Amendment accepted.

comment by handoflixue · 2012-04-04T23:09:23.654Z · LW(p) · GW(p)

Agreed. I think one could assert "Given a perfect decision theory AND a perfect implementation, additional information is never a negative", but it's silly to live as though that were true. If you know your decision theory doesn't handle X information correctly (say, sunk costs) then it's in your best interests to either eliminate the information or fix the decision theory.

Of course, eliminating information seems to be by far the easier option...

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-04T23:25:36.412Z · LW(p) · GW(p)

If I know the class of errors my decision theory tends to make given the kinds of Xes I most commonly run into, I can also adopt a third option... for want of a better term, I can patch my decision theory. E.g., "Well, I want to finish this project, but I suspect that part of that desire stems from an invalid weighting of sunk costs, so I won't take that desire at face value... I'll apply some kind of rough-and-ready discounting factor to it." This is clearly not as good as actually fixing my decision theory, but isn't as hard either, and is sometimes more practical than eliminating the information.
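
As a minimal sketch of such a patch (the discount factor and the inputs are invented; calibrating them per person is the hard part):

```python
# Keep the sunk-cost information, but don't take the inflated desire
# at face value: discount the component you suspect it contributes.
SUNK_COST_DISCOUNT = 0.5  # crude, invented; tune against your own track record

def patched_desirability(raw_desirability, suspected_sunk_cost_component):
    legitimate = raw_desirability - suspected_sunk_cost_component
    return legitimate + SUNK_COST_DISCOUNT * suspected_sunk_cost_component

# "Finish the project" feels like an 8/10, but maybe 4 points of that are
# "I don't want the past effort to have been wasted":
print(patched_desirability(8, 4))  # 6.0, a tempered estimate
```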

Replies from: handoflixue
comment by handoflixue · 2012-04-05T18:45:39.418Z · LW(p) · GW(p)

Very true. However, "avoid X information, since it biases me" is actually an example of such a patch. Especially if the information doesn't otherwise have any useful value. How often does knowledge of sunk costs actually move you towards ideal action, rather than biasing you away from it?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-05T19:14:40.751Z · LW(p) · GW(p)

Sure, avoiding information is an example of patching a decision theory, agreed.

So I guess what I'm saying is that "either eliminate the information, or fix the decision theory" is a misleading way to phrase the choice. My real choice is between fixing it and patching it, where eliminating the information is one of several ways to patch it, and not always the best.

Making choices about future investments in ignorance of the existing data I have about previous investments and their ROI is probably less ideal than taking those data into consideration and applying some other patch to compensate for sunk-costing.

Replies from: handoflixue
comment by handoflixue · 2012-04-05T22:56:52.166Z · LW(p) · GW(p)

I like the idea of phrasing it as "patching vs long-term fixes" :)

comment by HonoreDB · 2012-03-25T04:45:51.087Z · LW(p) · GW(p)

If the world were going to end right after I took an action, which action would I choose? (Alt: If everybody saw what choice I was about to make, but then circumstances changed and my decision turned out not to matter, what choice would I want to have made?)

Did answering that question feel the same as answering the actual question? If so, I'm not really thinking about consequences.

Replies from: jmmcd
comment by jmmcd · 2012-03-26T21:34:58.440Z · LW(p) · GW(p)

Did answering that question feel the same as answering the actual question?

I think you're onto something good here. Given any question, there are probably lots of hypothetical variations, like the world-ending or the exposure to everyone's judgement which you mention, which shouldn't make a difference but do, or should make a difference but don't. Maybe list a few more such circumstances and get the class to decide whether and why the variations make a difference.

comment by Drahflow · 2012-03-24T17:23:26.479Z · LW(p) · GW(p)

So... how would I design an exercise to teach Checking Consequentialism?

Divide the group into pairs. One is the decider, the other is the environment. Let them play some game repeatedly; the prisoner's dilemma might be appropriate, but maybe it should be a little more complex. The algorithm of the environment is predetermined by the teacher and known to both players.

The decider tries to maximize utility over the repeated rounds; the environment tries to minimize the winnings of the decider by using social interaction between the evaluated game rounds, e.g. by trying to invoke all the fancy fallacies you outlined in the post, or by convincing the decider that the environment algorithm actually results in a different decision. By incorporating randomness into the environment algorithm, this might even be used to train consequentialism under uncertainty.
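
A minimal sketch of the mechanics, using an iterated prisoner's dilemma against a tit-for-tat environment as the predetermined, publicly known algorithm (the payoffs are the usual illustrative ones; the social-pressure layer happens between rounds, outside the code):

```python
# Decider vs. a fixed, publicly known environment algorithm.
PAYOFF = {  # (decider_move, environment_move) -> decider's payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """The environment's predetermined algorithm, known to both players."""
    return "C" if not history else history[-1][0]  # copy decider's last move

def play(decider_policy, rounds=10):
    history, total = [], 0
    for _ in range(rounds):
        env_move = tit_for_tat(history)
        my_move = decider_policy(history)
        total += PAYOFF[(my_move, env_move)]
        history.append((my_move, env_move))
    return total

print(play(lambda h: "C"))  # always cooperate: 30
print(play(lambda h: "D"))  # always defect: 5 + 9*1 = 14
```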

comment by novalis · 2012-03-24T17:04:50.002Z · LW(p) · GW(p)

Maybe the easiest way to teach it is to teach how it applies to others. That is, train people to catch non-consequentialist reasoning in arguments that others make, and then hope that they apply that to themselves. The easiest way to do that is by reflexively asking, "so what?"

Replies from: jmmcd
comment by jmmcd · 2012-03-26T21:36:52.071Z · LW(p) · GW(p)

Nice. Many people are much better at criticising others than finding the same flaws in themselves.

comment by TheOtherDave · 2012-03-24T06:54:59.517Z · LW(p) · GW(p)

Here's a long-form exercise:

  1. Break up into small groups (2-5 people)
  2. Someone in each group, picked at random, presents an upcoming problem or decision.
    The other participant(s) ask questions to clarify the problem/decision.
    The problem/decision can be either real or imaginary, but if imaginary the presenter must come up with appropriately detailed answers to questions.
  3. Everyone collaborates in generating a list of possible solutions. Five is plenty.
  4. Everyone privately notes their preferred solution.
  5. Everyone collaborates on a list of expected consequences of each solution (with confidence intervals, if desired) and publicly announces which consequence-list they consider preferable.
  6. Everyone reveals their earlier preference from step 4, and if it's different from their preference from step 5 explains what changed.

This will probably take a fair amount of time.
The shorter-form version starts with steps 1-3 already done as example problems.
It probably helps if step 6 is a surprise.

Incidentally, I prefer "Compare Likely Consequences" to "Check Consequentialism" as a label for the skill.

comment by banana · 2012-03-28T00:05:34.772Z · LW(p) · GW(p)

Your Check Consequentialism sounds a lot like risk management. Risk is the effect of uncertainty on objectives (ISO 31000). The risk management process involves identifying risks, analysing how significant they are, and then treating the big ones so that they don't prevent you from attaining your objective. This is fairly straightforward to do. The difficult part is building a risk management culture where the risks are considered before making a decision, embarking on a project, etc. Just identifying the risks is often the big deal. Once you are aware that a risk exists, you will probably deal with it. Sorry that I have not given you an activity, but perhaps I have given you a useful keyword to help your search.
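
For what it's worth, the identify/analyse/treat loop is easy to sketch; the risks, probabilities, and impact scores below are invented placeholders:

```python
# Toy risk register: exposure = probability x impact; treat biggest first.
risks = [
    {"name": "key person quits", "p": 0.10, "impact": 9},
    {"name": "schedule slips",   "p": 0.50, "impact": 4},
    {"name": "supplier fails",   "p": 0.05, "impact": 7},
]

for r in risks:
    r["exposure"] = r["p"] * r["impact"]

for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(r["name"], round(r["exposure"], 2))
# schedule slips 2.0 / key person quits 0.9 / supplier fails 0.35
```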

comment by John_Maxwell (John_Maxwell_IV) · 2012-03-24T04:55:05.843Z · LW(p) · GW(p)

By the way, if you don't mind my asking, once you've come up with your rationality curriculum what do you plan to do with it? Are you making inroads with whoever you would need to talk to to get this in a school curriculum, for instance?

Replies from: shokwave
comment by shokwave · 2012-03-24T12:44:48.513Z · LW(p) · GW(p)

I think they plan on running workshops or seminars, likely targeted at startup founders or business/consultant-type people handling large decisions (from both a capability-to-pay and a convinced-of-the-value point of view, this makes far more sense than school curricula).

Replies from: John_Maxwell_IV, Eliezer_Yudkowsky
comment by John_Maxwell (John_Maxwell_IV) · 2012-03-24T19:53:28.022Z · LW(p) · GW(p)

What is the closest existing thing to this? How can we make friends with someone who is good at it? Are there any books about them doing it that we could read?

Brainstorm:

I would guess that large organizations are more willing to pay for live instruction than startup founders are. On the other hand, you wouldn't be able to suggest that they do anything that wasn't in the best interest of their employer, like quitting their job.

If organizational seminars are going to be a goal, it might not be a bad idea to start talking to relevant organizational folks to make sure you're making a product they actually want to buy. Jane Street could be an ideal first client, since they've got prestige you can use to sell other clients, EY has a pre-existing relationship with them, and they seem genuinely interested in improving their rationality. On the other hand, their rationality may be at a level where they don't think they could benefit from this sort of workshop, or targeting the workshop at them would mean developing a different set of materials. (But these "advanced" materials might appeal to clients who had already purchased and enjoyed the "basic" materials.)

The standard way to do this sort of B2B sale is to graduate to more and more important clients, since a lot of businesses will not buy a novel product unless some other businesses bought it and were happy with it. That's why getting and pleasing the first few clients is so important.

Robin Hanson has had a hard time selling prediction markets to businesses. Should we expect this to be more successful? I would guess yes, since it's not as explicitly targeted at replacing the people who might choose to implement it.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-24T18:01:01.241Z · LW(p) · GW(p)

Initially. School curriculum would be harder to develop, so the plan is for that to happen later.

comment by Nominull · 2012-03-24T03:05:19.385Z · LW(p) · GW(p)

What, we're not even allowed to have identities now?

Replies from: Vladimir_Nesov, katydee, Wei_Dai, Incorrect, Manfred, David_Gerard
comment by Vladimir_Nesov · 2012-03-24T12:12:43.267Z · LW(p) · GW(p)

Identity shouldn't act as a normative consideration. "He's going to do X because he belongs to a reference class Y" may be a valid outside view observation, a way of predicting behavior based on identity. On the other hand, "I'm going to do X because I belong to a reference class Y" is an antipattern, it's a descriptive explanation, fatalist decision rule, one that may be used to predict, but not to decide. An exception is where you might want to preserve your descriptive identity, but then the reason you do that is not identity-based.

So you can have an identity, the way you can have a pair of gloves or a Quirrell, just don't consider it part of morality.

Replies from: Will_Newsome, taryneast
comment by Will_Newsome · 2012-03-24T12:19:05.190Z · LW(p) · GW(p)

Identity shouldn't act as a normative consideration for an angel, maybe. For a human, "identity" is a pragmatic reification of cached complexes of moral conclusions that aren't immediately accessible for individual analysis. "Normative" is a misleading word here.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-24T13:01:20.330Z · LW(p) · GW(p)

Identity shouldn't act as a normative consideration for an angel, maybe.

Still shouldn't for a human, even if does. It's a normative consideration, not a descriptive one.

Replies from: Will_Newsome, TheOtherDave
comment by Will_Newsome · 2012-03-24T13:33:28.485Z · LW(p) · GW(p)

...Is there a word for "normative given bounded rationality"?

Replies from: Vaniver, Vladimir_Nesov
comment by Vaniver · 2012-03-24T17:12:26.331Z · LW(p) · GW(p)

Prescriptive.

comment by Vladimir_Nesov · 2012-03-24T13:39:12.602Z · LW(p) · GW(p)

Bounded rationality is like the mass of the Sun: a difficulty of the problem, not a kind of goal.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-24T14:25:25.953Z · LW(p) · GW(p)

I don't understand.

If you're trying to dam a river, and you only have 100,000 bricks, then there is a normative solution, i.e., the solution that has the greatest chance of successfully damming the river. Talking about solutions that require one million bricks is talking about a different problem that is only relevant to people with millions of bricks.

So when you say, "identity shouldn't act as a normative consideration", that sounds to me like, "you should already have one million bricks; there is no normative solution if you only have 100,000 bricks". Using 100,000 bricks to dam a river isn't using an approximation of the solution you would use if you had a million bricks. That's why I say "normative" is a misleading word here. It implies that you should try to approximate the million-brick solution even when you know you don't have enough bricks to do that: a tenth of a great million-brick dam is one millionth as useful as a complete 100,000-brick dam.

Why not just renormalize such that your constraints are part of your environment and thus part of the problem, and find a normative solution given your constraints? Otherwise the normative solution is always to have already solved the problem. "What would Jesus do? Jesus would have had the foresight not to get into this situation in the first place."

"Normative" is always relative to some set of constraints, so I don't see why normative-given-boundedness isn't a useful concept. I'm reminded of Nick Tarleton's intuition that decision theory needs to at some point start taking boundedness into account.
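
A minimal sketch of what renormalizing the constraints into the problem looks like (designs, brick counts, and success probabilities all invented): optimize over the feasible set instead of approximating the unaffordable ideal:

```python
BRICK_BUDGET = 100_000

designs = [  # invented options
    {"name": "ideal million-brick dam",    "bricks": 1_000_000, "p_holds": 0.99},
    {"name": "compact 100k-brick dam",     "bricks": 100_000,   "p_holds": 0.80},
    {"name": "one tenth of the ideal dam", "bricks": 100_000,   "p_holds": 0.01},
]

feasible = [d for d in designs if d["bricks"] <= BRICK_BUDGET]
best = max(feasible, key=lambda d: d["p_holds"])
print(best["name"])  # the compact dam, not a fraction of the ideal one
```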

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-24T14:36:28.053Z · LW(p) · GW(p)

It's useful to take the limitations of the decision-making setup into account, but that is not fundamentally different from taking the number of bricks into account. The idealized criteria for comparing the desirability of alternatives don't normally depend on which alternatives are available. People shouldn't die even if it's impossible to keep them from dying.

comment by TheOtherDave · 2012-03-24T14:28:06.657Z · LW(p) · GW(p)

I'm not sure this is responsive to Will's point... at least, it seems plausible that the moral considerations he considers identity to imperfectly encapsulate are also normative, which is why he refers to them as moral in the first place. That is, I think he means to challenge the idea that identity shouldn't be/isn't a normative consideration.

comment by taryneast · 2012-03-25T08:49:33.595Z · LW(p) · GW(p)

I agree, but... purposely self-identifying with a reference class that has the supposed skills you are trying to acquire does seem to have benefits in actually becoming more likely to have those skills. E.g. "I'm a hard-working person and hard-working people wouldn't just give up" is a way of convincing (/tricking) yourself into actually being a hard-working person.

EDIT: that being said - it certainly wouldn't be consequentialist. :)

Replies from: jschulter
comment by jschulter · 2012-04-04T05:02:19.056Z · LW(p) · GW(p)

But it is near-consequentialist: "I'm a hard-working person and hard-working people wouldn't just give up" --> "the act of giving up will make me feel less like a hard-working person and therefore make me less likely to work hard in the future"

Replies from: taryneast
comment by taryneast · 2012-04-04T05:28:52.970Z · LW(p) · GW(p)

Yes - it can definitely be re-phrased in consequentialist ways...

comment by Wei Dai (Wei_Dai) · 2012-03-25T02:40:29.868Z · LW(p) · GW(p)

I previously wrote a comment that seems relevant here:

How to translate identity-based decision making into values and/or beliefs seems non-trivial, and can perhaps be compared to the problem of translating anticipated-reward type decision making into preferences over states of the world or over math.

An agent that lets identity influence its decisions probably deviates from ideal rationality, but how to fix that? If we just excise the identity-based parts of its decision procedure without any compensation, that could easily make it worse off if, for example, its CEV depends on its identity.

comment by Incorrect · 2012-03-24T04:58:51.918Z · LW(p) · GW(p)

To become a true rationalist one must shed the trappings of personhood. The rationalist's mind has no goal except rationality itself; no thought except the Bayesian update. Only once you are free of worldly concerns and the concept of autonomy may you see the light of Bayes.

edit: Sorry, I was joking. I thought I was being ridiculous enough for it to be obvious.

Replies from: army1987, orthonormal, handoflixue, fubarobfusco
comment by A1987dM (army1987) · 2012-03-24T13:20:26.496Z · LW(p) · GW(p)

The rationalist's mind has no goal except rationality itself

I thought it had the goal of maximizing expected utility.

comment by orthonormal · 2012-03-24T16:17:12.329Z · LW(p) · GW(p)

Um, no.

Replies from: Incorrect
comment by Incorrect · 2012-03-24T19:01:33.890Z · LW(p) · GW(p)

Sorry, I was joking. I thought I was being ridiculous enough for it to be obvious.

Replies from: orthonormal, army1987, Will_Newsome
comment by orthonormal · 2012-03-24T19:05:04.152Z · LW(p) · GW(p)

I should have remembered that you've been around for a while, but bear in mind that the joke is just the sort of Straw Vulcan reasoning that some new people think Less Wrong obviously must subscribe to.

comment by Will_Newsome · 2012-03-24T21:12:27.866Z · LW(p) · GW(p)

'Twas completely obvious to me. I mean seriously, "light of Bayes".

comment by handoflixue · 2012-03-29T21:27:17.790Z · LW(p) · GW(p)

laughs The username was a pretty obvious give-away, IMO :)

comment by fubarobfusco · 2012-03-24T07:07:06.000Z · LW(p) · GW(p)

Incorrect indeed.

comment by Manfred · 2012-03-24T04:32:24.574Z · LW(p) · GW(p)

We are the Borg. Lower your shields and surrender your ships.

comment by David_Gerard · 2012-03-24T12:12:33.086Z · LW(p) · GW(p)

Depends what the consequences of asserting one to yourself are.

comment by Matt_Simpson · 2012-03-24T01:40:48.156Z · LW(p) · GW(p)

It occurs to me that games with some significant strategic component might be useful for priming the "but what consequences does it have?" response. I'm thinking of games like Magic: the Gathering, Settlers of Catan, Risk, etc. (I'm sure the board game aficionados will have better examples than I). I say this because of personal experience with Magic players - as they get better at Magic, they tend to get better at life. Well, some of them do. The others perhaps compartmentalize too much, so maybe this won't help with everyone.

In any case, my model for what would work is a relatively easy social game that allows a non-trivial number of actions with unclear consequences... unless you stop to think about them. Magic would be perfect... if it wasn't so complicated and if fantasy tropes didn't turn off a large segment of the population. Ideally the game would be something you create instead of something your subjects/clients may have played before.

I have no ideas for the actual game, but maybe this sparks someone else's imagination.

Replies from: Desrtopa, Desrtopa, JoachimSchipper, John_Maxwell_IV
comment by Desrtopa · 2012-03-24T02:09:10.574Z · LW(p) · GW(p)

If I'm thinking of games to reinforce consequentialism, my first thought is to use games with an actual story involved; you don't lose points or regions, you lose the lives of characters you're attached to, or their trust, or maybe you fail to prevent a genocide, etc. - things which people will be more likely to associate "this is a bad game outcome" with "this would have been a bad choice in real life."

The first solution that comes to mind for this is a video game, perhaps some kind of visual novel that features a large number of choices and forces the players to choose consequentially on pain of causing Bad Things to happen in the game. But I don't think this is actually a very good solution considering how much effort it takes to make a visual novel, which can be played in its entirety and will no longer offer a single new choice afterwards, and how many people are simply not interested in playing visual novels.

Maybe some sort of roleplay would be more feasible, at least you wouldn't be designing a whole video game for each scenario, but it still sounds like an awful lot of work.

Replies from: John_Maxwell_IV, Matt_Simpson
comment by John_Maxwell (John_Maxwell_IV) · 2012-03-24T04:34:39.900Z · LW(p) · GW(p)

I sometimes try to get myself to make better decisions by pretending I'm a character in a Choose Your Own Adventure book. (E.g. "If you decide to stay on the couch because you're too lazy to work, turn to page 30.") Unfortunately, in the real books it's rare that enough information is given for you to make a really good decision, and the authors also appear to like messing with you by having good decisions blow up in your face.

So, maybe a similar book that actually gave you enough information to make a good decision and rewarded good decisions and punished bad ones?

Replies from: Eliezer_Yudkowsky, Armok_GoB
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-24T18:04:57.135Z · LW(p) · GW(p)

I sometimes try to get myself to make better decisions by pretending I'm a character in a Choose Your Own Adventure book.

This sounds like a more useful, more intuitive, much more widely applicable reification of my own method of "What Would Your TV Tropes Page Say?"

Replies from: sketerpot, Eugine_Nier, CronoDAS
comment by sketerpot · 2012-03-25T06:36:00.612Z · LW(p) · GW(p)

I don't know how many people have this issue, but I can't read Choose Your Own Adventure books without marking several past pages so I can rewind time, or try multiple branches, or safely find out what was hidden behind the venomous Venusian potted plant. Really, the only bound on it is that I eventually run out of fingers to mark my place, which constrains my time travel abilities to about four save-states. (In visual novels it's even worse, since there are enough actual save states that saving at anything that looks like a potentially significant branching point becomes viable. I've actually started using walkthroughs from GameFAQs to find out where I don't need to save, so I can stop fretting about making an irreversible decision. Trivial time travel is surprisingly addictive! What would the world be like if everyone could do it, I wonder?)

I really, really wish that this were a useful approach to life, but if it's possible to save and restore universe states, I have not been made aware of this. And obviously I haven't noticed anybody else doing it.

Replies from: Nornagest, tkadlubo, handoflixue
comment by Nornagest · 2012-03-30T22:53:48.150Z · LW(p) · GW(p)

At least visual novels (well, the two or three of them that I've played) are pretty good about giving your decisions reasonable consequences based on what you know or should be able to infer. If I'm remembering my childhood well, Choose Your Own Adventure books have a nasty habit of dropping you into unwinnable states based on trite moral dilemmas, when they aren't dropping you into unwinnable states for no good reason at all. Not that life's fair in that regard either, but CYOA doesn't even give you the option of taking the steps that could ameliorate it.

So I've got to doubt the usefulness of this as a general decision procedure. Seems to me that it'd lead to overweighting conventional social mores and social risks in general, and underweighting the sort of fact-finding and risk minimization that actually works. Which, while not as immediately suboptimal as ignoring the "Beware of Yeti" sign or playing patty-cake with the toaster in the bathtub, is probably a lot more salient for a halfway sane decision-maker.

Replies from: daenerys
comment by daenerys · 2012-03-30T23:05:42.456Z · LW(p) · GW(p)

This is one of the things I originally found disconcerting about the board game Arabian Nights. It's like anti-consequentialism: you would have options of things to do, and the option that seemed the most logical ("I'll give change to the beggar" or "I'll ignore the beggar") never gave results as good as the craziest options ("I'll worship the beggar" or "I'll steal from the beggar", etc). I ended up getting the best results by choosing the weirdest option available.

Replies from: Dolores1984
comment by Dolores1984 · 2012-05-26T06:05:59.196Z · LW(p) · GW(p)

That strategy doesn't ALWAYS work out poorly in real life. If you go through life looking for opportunities to make your life weirder, it WILL be interesting, if nothing else. Of course, you might also get shot.

comment by tkadlubo · 2012-03-26T09:08:31.689Z · LW(p) · GW(p)

IMHO that's a really important point. You get a better grasp of the consequences of your choice after trying several options and seeing how the consequences of different actions differ.

The best laboratory example of this is playing go on a computer. Typical go software records your games, and then lets you replay, play different variants, analyze when things went really bad after a silly move, etc. After a while you get a tree of diverging game records. In some you won, in others you lost. It's a good learning experience.

(disclaimer: I'm not sure how to un-compartmentalize this learning to be applicable in real life, not just in a game of go)

comment by handoflixue · 2012-03-29T21:19:34.133Z · LW(p) · GW(p)

I do the same thing. I found that I needed far fewer save states when I routinely took the BAD choice first, since they usually lead to the shortest further decision tree. I'd also occasionally use physical bookmarks, for the few rare books that just would NOT kill you off until the very end (even though you were quite possibly stuck on a guaranteed-negative branch of the decision tree)

As to applying it to real life, I will sometimes think about the decision tree involved. Playing Chess is a good example of this: If I make THIS move, my opponent could do X, Y, or Z. If she goes with X, I can do X-a, X-b, or X-c... and then weighing all this based on probability ("She hates doing X!" "Y is her best move!") and expected value (if she does X, I'll lose. If she does Y, I go up a pawn.) Fortunate for me that she hates doing X :)
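
That chess reasoning is just an expected-value computation over the opponent's reply distribution. A minimal sketch, with invented moves, probabilities, and values:

```python
# For one candidate move of mine, weigh the opponent's likely replies.
replies = [  # invented probabilities and values
    {"move": "X", "prob": 0.1, "value": -1.0},  # her best reply, but she hates it
    {"move": "Y", "prob": 0.7, "value": +1.0},  # I go up a pawn
    {"move": "Z", "prob": 0.2, "value": 0.0},
]

expected_value = sum(r["prob"] * r["value"] for r in replies)
print(expected_value)  # 0.6: worth playing despite the small chance of X
```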

comment by Eugine_Nier · 2012-03-24T20:12:42.151Z · LW(p) · GW(p)

"What Would Your TV Tropes Page Say?"

The problem with TV Tropes is that they've been heavily primed with fictional evidence.

Replies from: JGWeissman
comment by JGWeissman · 2012-03-25T18:55:17.115Z · LW(p) · GW(p)

If you are influenced by the fictional evidence, your TV Tropes page will say Wrong Genre Savvy.

Replies from: Ezekiel
comment by Ezekiel · 2012-03-30T22:24:23.030Z · LW(p) · GW(p)

"Real Life" isn't a genre. Or if it is, it has only one trope, and that is Reality Ensues.

comment by CronoDAS · 2012-05-26T05:44:34.666Z · LW(p) · GW(p)

Incidentally, Eliezer actually does have a TV Tropes page.

comment by Armok_GoB · 2012-03-24T18:12:09.124Z · LW(p) · GW(p)

Reminds me of http://www.epicsplosion.com/epicsploitation/adventures - maybe you'll be able to find something there?

comment by Matt_Simpson · 2012-03-24T03:30:00.500Z · LW(p) · GW(p)

I have two interpretations of your idea, so I'll just say what I think of both.

1) Underlying, known, game mechanics with a story behind them involving role playing.

I like this because it gives the players something they can easily point to and say "look, consequences!" in the game mechanics while making the situation feel closer to reality. However, reality doesn't give you the mechanics by which it works, so this may not translate into real-life decision making as well. On the upside, this is easy to make into a social game - think DnD but with less magic and dice.

2) No game mechanics, just a "choose your own adventure" game.

The consequences are more nebulous in this version, which is both a positive and a negative. It's a positive because it forces more brainstorming of actual consequences, but it's a negative because that makes it harder to initially start thinking about the consequences of actions. It's also difficult to make this type of game vary from playthrough to playthrough.

Starting with a type 1) game and then moving to a type 2) game seems like it might take advantage of both types' strengths. Alternatively, there's really a continuum between the two types, so maybe somewhere closer to the middle is best.

comment by Desrtopa · 2012-03-24T01:50:39.285Z · LW(p) · GW(p)

I say this because of personal experience with Magic players - as they get better at Magic, they tend to get better at life. Well, some of them do. The others perhaps compartmentalize too much, so maybe this won't help with everyone.

Really? I sure haven't noticed this. If anything, from my own circle of acquaintances, it looks like those who got better at life were the ones who stopped putting so much of their time and attention into card games.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2012-03-24T01:57:13.564Z · LW(p) · GW(p)

Roughly, there are two populations - those who apply what they learned in Magic (microeconomics, essentially) to life, and those who don't. The latter tend to spend way too much time on card games. The former start saying things like "this event is pretty low EV for me, I think I'd better study/write that paper/work on that project/etc. instead."

In any case, as people get better at Magic, they get better at thinking about the consequences of their actions within the game. This seems like a natural stepping stone to thinking about consequences in all situations, though the trick is getting people to generalize it.
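
A minimal sketch of the EV comparison such a player might run; the entry fee, prize structure, and value of an hour are all invented:

```python
entry_fee = 25.0
hours = 5
value_of_hour = 15.0  # what the study/paper/project alternative is worth
prize_table = [(0.05, 200.0), (0.15, 50.0), (0.80, 0.0)]  # (prob, prize)

expected_prize = sum(p * v for p, v in prize_table)            # 17.5
event_ev = expected_prize - entry_fee - hours * value_of_hour
print(event_ev)  # -82.5: "pretty low EV for me, I'd better write that paper"
```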

Replies from: gwern, CronoDAS
comment by gwern · 2012-03-25T21:33:36.871Z · LW(p) · GW(p)

The latter tend to spend way too much time on card games. The former start saying things like "this event is pretty low EV for me, I think I'd better study/write that paper/work on that project/etc. instead."

How would one distinguish between the scenario in which they begin to apply Magic-like thinking to their regular life and begin optimizing there, and the scenario in which ordinary diminishing marginal returns to playing Magic cause them to switch to other activities?

Replies from: Matt_Simpson
comment by Matt_Simpson · 2012-03-26T02:16:54.368Z · LW(p) · GW(p)

If they're actually optimizing, you should be able to see the results, though measuring them is another problem in itself.

comment by CronoDAS · 2012-03-24T02:03:54.453Z · LW(p) · GW(p)

If I was sensible, I probably should stop playing Magic, or at least paying money for cards... but I have too much of my self-esteem wrapped up in that stupid game. It's like trying to quit smoking. :P

Replies from: NancyLebovitz, Desrtopa
comment by NancyLebovitz · 2012-03-24T11:56:25.660Z · LW(p) · GW(p)

I don't know if it's a consequentialism issue, but "if I was sensible" seems like a way of locking a problem in place.

Maybe there should be a separate category for noticing identity issues.

comment by Desrtopa · 2012-03-24T02:32:18.852Z · LW(p) · GW(p)

This is why I tend to have an immediate aversion to using Magic as a rationality teacher. The whole game is set up on a business model that incentivizes constantly shelling out money for new cards to keep your deck from becoming obsolete. Wizards Of The Coast's goal is to make sure that their players cannot continue to be competitive without providing a constant revenue flow. If you want to teach people good rationality skills, don't start by encouraging them to get into something like that.

Replies from: DSimon
comment by DSimon · 2012-03-24T06:43:55.000Z · LW(p) · GW(p)

I've always been turned off by MtG on the grounds that I should just be able to print up any cards I like and use them as long as they form a valid deck, rather than having to follow WotC's anti-"counterfeiting" policy. Do any Magic players actually do this?

Replies from: Matt_Simpson, Desrtopa
comment by Matt_Simpson · 2012-03-24T20:10:46.045Z · LW(p) · GW(p)

People create "proxy" decks all the time. It's one of the dominant ways of testing for big tournaments (when you don't know what cards you'll need until you settle on a decklist, but you don't want to buy every potential card). However, for some reason the casual community doesn't seem to do this as much. This is somewhat ironic because sanctioned tournaments are the only place you have to use real cards.

comment by Desrtopa · 2012-03-24T06:52:27.125Z · LW(p) · GW(p)

I have friends who did so, but they only used them to compose special print decks to play with the few other friends who were also using print decks, and I think they used their "real" decks more even among each other than the print decks.

comment by JoachimSchipper · 2012-03-29T07:23:21.115Z · LW(p) · GW(p)

Careful, there: some vindictiveness ("if you attack me in Africa despite our pact, I will go totally apeshit on you for the rest of the game") is an essential part of playing e.g. Risk well (in our group) - naive consequentialism ("looks like I lost Africa, taking Australia from (unrelated player) seems best now") does not work very well on intelligent and adversarial agents.

Of course, most of the world is not an intelligent and adversarial agent - pre-committing to going totally apeshit on an unthinking animal is just stupid. The easiest and biggest wins for consequentialism are there, not in games of Risk.

(Non-naive consequentialism works fine. Naive consequentialism probably works fine in many games, e.g. two-player games like Magic.)

Replies from: wedrifid
comment by wedrifid · 2012-03-29T12:18:07.240Z · LW(p) · GW(p)

Careful, there: some vindictiveness ("if you attack me in Africa despite our pact, I will go totally apeshit on you for the rest of the game") is an essential part of playing e.g. Risk well (in our group) - naive consequentialism ("looks like I lost Africa, taking Australia from (unrelated player) seems best now") does not work very well on intelligent and adversarial agents.

Totally agree. I'm ruthlessly vindictive but perfectly trustworthy (meaning I refrain from making promises I do not keep) when it comes to strategic situations like that. It looks superficially like being completely unsophisticated but it works.

comment by John_Maxwell (John_Maxwell_IV) · 2012-03-24T04:41:22.406Z · LW(p) · GW(p)

Lost Cities might work, if you took your time and tried to make the optimal play every move. I think you can make it work with playing cards.

comment by Elithrion · 2012-04-09T19:11:16.351Z · LW(p) · GW(p)

As far as I can tell, most people find it fairly easy to think about others' decisions in consequentialist terms, but have a lot more trouble thinking about their own that way. So a good technique for switching to consequentialist thinking is to imagine that, instead of thinking about your own decision, you're thinking about a decision that someone in your exact situation is making. Consider what advice you'd want to give this person, and what choice he or she should make. Dissociating yourself from the decision like this should remove the influence of most things that would normally lead to you making bad decisions.

I'm not entirely sure what would be a good exercise to teach this. One thing that comes to mind is that it might be useful to ask participants to think of advice to give their selves from five years ago (for example), and then to imagine themselves five years from now giving advice to their current selves.

comment by KateGladstone · 2012-04-04T03:39:10.549Z · LW(p) · GW(p)

Possible exercise: Assume that you have no source of income except what you can beg, steal, or find ownerless/abandoned. Assume that you have a friend in similar straits (we'll call this person Paul Poor), and that both you and Paul know of a very wealthy person (whom we'll call Richard Rich). One day, you find — apparently abandoned in the street — a loaded gun.

  1. Think of various reasons for you to use the loaded gun to force Richard to give money to Paul. Which of these reasons are non-consequentialist, and why?
  2. Now think of various reasons for you to NOT use the loaded gun to force Richard to give money to Paul. Which of these reasons are non-consequentialist, and why?
  3. Next, think of various reasons to use the gun to force Richard to give money to you. Which of these reasons are non-consequentialist, and why?
  4. Finally, think of various reasons NOT to use the gun to force Richard to give money to you. Which of these reasons are non-consequentialist, and why?

In case you use this as an exercise, and/or wish to contact me for any reason, my name is Kate Gladstone and my e-mail address is handwritingrepair@gmail.com

comment by thomblake · 2012-04-03T18:17:50.745Z · LW(p) · GW(p)

Exercise: Notice results.

Example: (participants A and B)

A: What would be the consequences if you told everyone you were not wearing any underwear?
B: writing it down: I would be really embarrassed for the rest of the day, and everyone would laugh at me so I wouldn't want to show my face.
A: Go do that.
B: gives paper to A, does it
(some time later) A: What were the consequences of telling people...?
B: not reading the earlier paper I was a little embarrassed at the time, but people laughed so it was okay. Also, one person hit on me.
A: shows paper to B
They talk about the difference between the predictions and what actually happened.
Instructor (optional): See, this and this were not physical consequences of the action, so that's why your predictions were off-base.

Description:

2 participants. A asks B to do some action, ideally somewhere in the range that is emotionally-loaded but not completely inappropriate. B has to write down the consequences of that action (perhaps out to the end of the day / session), give the paper to A for safekeeping, and perform the action. Later, A asks B to describe what the consequences actually were and checks them against the written prediction. Ideally, an instructor points out any listed consequences that aren't 'legitimate', and explains how they could have been improved.

Rationale:

We feel differently about consequences after the fact, so independently checking our predictions about consequences against our memories of consequences should highlight any 'illegitimate' consequences listed.

Caveats:

This might just be an exercise to combat Impact Bias, or other biases related to affective forecasting.

comment by Benquo · 2012-04-03T03:23:30.239Z · LW(p) · GW(p)

On a high level, practice asking: If I do X, what does the world look like 5 minutes from now? An hour from now? A day from now? etc.

If I don't do X, what does the world look like 5 minutes from now? An hour from now? A day from now? etc.

So, let's take the PhD example. Try talking about it without using the word "because".

If I decide to finish my PhD, 5 minutes from now I feel OK. An hour from now I'm eating dinner. A day from now I'm grinding away at my dissertation. A month from now, I'm grinding away at my dissertation. A year from now, maybe I'll have finished my PhD, and I'll be on the job market. A decade from now, I'll likely have a research job in my field, or I'll be teaching, or I'll have abandoned my field and done something else.

If I don't decide to finish my PhD, 5 minutes from now I feel like a total failure, I wasted so much time. An hour from now I'm stuffing my face with junk food to deal with the stress of figuring out what to do with my life. A month from now, I've done something totally different with my life. (got a job? started a business? backpacked through Europe?) A year from now, something similar. A decade from now, who knows?

To make it vivid, ask people to talk about a choice they themselves face, and weigh the pros and cons in terms of likely future world-states. Each time you talk about a consequence, you have to mention what is going on if they chose differently, at the same distance in the future. Everyone else's job is to call them out if they go off track.

comment by handoflixue · 2012-04-03T00:00:12.065Z · LW(p) · GW(p)

It seems to me that since it's easier to notice this in other people, starting with illustrative examples would be good. This allows you to establish the basic idea and build a model in the audience's head. I'd suggest fictional examples would be easier to come up with, but using real examples might add authenticity and help the audience engage. You could possibly even invite a few audience members up to discuss things where they might be stuck, but that runs into the usual risks of audience examples and would probably take a fair amount of time.

Once you've established the basic idea, I think you then need to transform the idea by getting them to apply it to themselves. Relying on the audience to all have a situation where this skill applies, and one which is a good learning example, seems foolish. Instead, I'd suggest a game where you lead someone to get stuck in a commitment or trend, and then throw them at a situation where they have to break that trend to succeed.

A simple example would be to pick an audience member, and tell them to answer each question you ask "yes or no". You ask a bunch of questions that all produce an immediate, unthinking 'yes' response, and then ask them one where a 'yes' would be humorously inappropriate.

"Do you understand the idea?" "Of course" "It needs to be a yes or no. Understand?" "Yes" "Good. Is the sky blue?" "Yes" "Are dogs a mammal?" "Yes" [...] "Have you mastered Consequentialism?" "Yes" pause, cue audience laughter "Oh. No."

I'm entirely certain one could come up with both better and more complex examples, but I think that serves to illustrate the basic idea. Consequentialism suggests that putting more work into examples probably isn't wise unless someone suggests this is a good idea and would like to see me flesh it out more :)

comment by Dmytry · 2012-04-02T09:45:56.454Z · LW(p) · GW(p)

> This is the sort of in-person, hands-on, real-life, and social exercise that didn't occur to me, or Anna, or anyone else helping, while we were trying to design the Bayes's Theorem unit. Our brains just didn't go in that direction, though we recognized it as embarrassingly obvious in retrospect.

There's something important here: problem solving. That's the use of intelligence that got us to the Moon; that's the use of intelligence which gave us Bayes' theorem. And the best way you can spend your time is focusing on this. It does not help you a whole lot if you can very rationally pick between two ways of teaching the students if you can't generate those ways. For success, one needs, first and foremost, problem-solving ability. There may be a general intelligence factor, but there are also a lot of very-high-IQ people who are comparatively bad at free-form problem solving, and even at fairly basic combinatorial things like fitting the most items into a box carefully so they won't break when transported. Rationality may help you de-bias yourself with regard to which items you consider more and less 'important' - you may be able to rid yourself of bias about how dear an item is to you - but it won't do much to help you process the immense number of combinations and generate the best one, and your packing will still be much less effective than the packing of someone irrational who puts a nearly indestructible object that is dear to them on top, if they are just a bit better at processing the huge solution space.

Replies from: epigeios, handoflixue
comment by epigeios · 2012-04-05T11:09:54.019Z · LW(p) · GW(p)

Simple left-brain vs. right-brain. The problem you refer to isn't that hard to fix; it's just that very few people know about it. Reading through the sequences will, in most cases, make people want to exercise their minds in daily life. Eventually, the right brain will activate despite the left-brain dominance of English-speaking culture.

To put it simply: The left brain's job is to process individual points of data in series as a pattern. The right brain's job is to process all points of data in parallel as a chaotic fractal flow.

Granted, most of the sequences on here are about how to use the left brain more efficiently. And in scientific society as a whole, right-brain concepts are generally shunned except by the few people who already know about them.

However, at the very least, Eliezer himself is capable of using his right brain, even if he thinks that the general problem of society is solvable by increasing efficiency of left-brain usage. The result of this is that right-brain concepts are hidden in the sequences. Anyone who reads through deeply enough will start to be influenced by this.

But yes, I also partially agree. The fact that Eliezer tried to explain wisdom as modified pattern recognition from left-brain intelligence in HPMOR shows that either Dumbledore is hiding his wisdom, or Eliezer doesn't know what the right brain is capable of.

-

I'm looking at the long term here. This website is a good stepping stone into right-brain usage by left-brained people (it is MUCH more right-brained than standard education), and hopefully also has the ability to help right-brained people learn how to use their left brain. If nothing else, Eliezer is seriously trying to improve the functionality of the world. That means that some time in the future, he will have to learn about how the right brain works. And until then, I'm gonna keep trying to plant the seeds for this.

When I have a full, concrete understanding with the ability to really explain it in-depth to a left-brain dominant person, I will post my solutions on this website. Until then, the game you seem to be trying to play is impossible to win.

Replies from: Dmytry
comment by Dmytry · 2012-04-05T11:40:35.942Z · LW(p) · GW(p)

I do think there's truth to there being two ways of using thought, but I don't think it's simply one side vs. the other in humans. The left side (of right-handed individuals) has the speech centre, and is thus more involved in the process of making sequences of chirps that achieve a particular social purpose, and correspondingly less involved in decision making or reasoning.

In split-brain patients, when the left side is presented with a chicken and the right side is presented with snow, and the right side picks a shovel as related, the left side explains that the shovel is for cleaning chicken shit. The left side doesn't have the slightest clue why the shovel was chosen, nor does it have any need-to-know whatsoever (even when the corpus callosum is present), as the optimum chirps are entirely dependent on the listener and independent of motivation. The left side still has to employ a massively parallel process to generate the chirps for the specific purpose - that's the only way the brain can do it - clearly there's a lot of parallel processing required for coming up with an explanation of how the shovel is related to the chicken - but the chirps themselves are sequential in nature, and so it appears as if there is some sort of serial process going on. It even looks like some sequences of chirps are consequences of other sequences of chirps, when the chirp-making rule requires them to be produced in that order.

Then the people here have trouble with 'procrastination', 'akrasia', and the like, which is an inevitable outcome of the disconnect between decision making (which decides not to do something) and speech synthesis (which talks of wanting something), and are generally a case of the pirate ship's parrot complaining about the weather. Letting the part-brained parrot take over the pirate ship is generally a bad idea, even if the parrot is very extensively trained. For one thing, the part-brained parrot doesn't know a thing about navigation and can't read the maps or charts, which are non-verbal in nature. I would guess the parrot take-over corresponds to psychiatric disorders.

Arguably one of the best scientists, Albert Einstein, lacked the parrot portion of the brain entirely.

Furthermore, an unusually high fraction of accomplished people (e.g. presidents) are left-handed, which is a proxy for unusual brain architecture that doesn't implement the standard clueless parrot. Evolution may easily have over-fitted us to some very specific situation where speech is just noise, entirely unrelated to reasoning (which is the case for all smaller-brained animals).

Replies from: epigeios
comment by epigeios · 2012-04-10T15:17:01.398Z · LW(p) · GW(p)

> The left side still has to employ a massively parallel process to generate the chirps for the specific purpose.

What makes you say that? In your example, the left brain has 2 inputs, and only needs to find a plausible connection between the two.

Although, in hindsight, you're right. The brain uses many neurons in parallel no matter what or where it is processing.

I will now proceed to twist my words to attempt to better communicate what I mean. In reality, I spoke too hastily, generalized too greatly, and still obviously don't know the correct words to use to communicate my partial, incomplete theory to a left-brain-dominant culture.

If we take what I stated for the two "jobs" of the two brains:

> The left brain's job is to process individual points of data in series as a pattern. The right brain's job is to process all points of data in parallel as a chaotic fractal flow.

Then, take "individual points of data in series as a pattern" and "all points of data in parallel as a chaotic fractal flow", and call each of those 2 quotes a complete concept or set, labeled A and B respectively. Then, as if putting grammar in the correct/different location, say that the left brain processes set A, and the right brain processes set B; where "processes" specifies neither parallel nor sequential, but implies "however the brain does it". If what I stated is grammatically edited to mean this, then it fits more closely with what I intended and satisfies your examples (as far as I can tell).

To describe in a different, probably better way, I consider the right brain as being used to build interacting, interweaving probability clouds of all data even remotely related to the subject (more neuron connections = more remote). The result of this is sections and points of higher or lower concentration. I then consider the left brain to take this information, and determine the direct connections between the important pieces, especially how they directly relate to an initial goal (more neuron connections = more and farther-reaching direct connections). The combination of the two thus gives the person the decision on the "best" course of action. And of course, this process can be iterated, as well as be initiated by the left brain's direct connections instead of the right brain's probability clouds.

-

I just noticed an interesting difference between my concepts and your concepts.

> decision making (which decides not to do something) and speech synthesis (which talks of wanting something)

And I just further (after quoting) figured out a way it relates to the left-right brain difference.

I had thought of decision making as being positive (deciding "to do" instead of "not to do"). I think, however, that this is once again the difference between right brain and left brain (respectively). What I mean by this can be summarized and generalized (or analogized) as the difference between the concept of "syntropy" (a receiving antenna) and entropy (a projecting antenna).

Likewise, I thought of speech synthesis as, instead of "wanting something", "choosing something", as in "cutting out everything else". Negative instead of positive. This obviously relates to what I think of right vs. left, but I'm not sure exactly how; especially since you mentioned that the left brain has the speech center (I didn't know that).

comment by handoflixue · 2012-04-03T00:03:18.708Z · LW(p) · GW(p)

> rationality [...] won't [...] help you process the immense number of combinations and generate the best one

Intelligence is the lens which sees its own flaws. This is a flaw. See that clearly, and you should be able to fix it.

In fact, when I see intelligent people fail at such situations, I immediately want to drag them on to LessWrong and have them read all the sequences, because somewhere in there (and I'm not quite sure where), I figured out all sorts of incredible techniques for actually dealing with exactly that.

Replies from: Dmytry
comment by Dmytry · 2012-04-03T07:31:21.711Z · LW(p) · GW(p)

Did you magically transform your life to 10x the awesome? There are solutions that make it so. They are incredibly hard to arrive at, but they exist.

Look at what people do here: spending a very non-trivial fraction of their time thinking about problems with a very narrowly defined range of solutions, usually below 10. I have a suspicion that this trains you to fail in the real-world situations where you deal with far more possible solutions. People do love familiar approaches, meaning that in those cases they'll latch onto the <10 most obvious solutions that come up instantly or were chosen by others, then rationally choose among those, because that's what the methods here deal with - that's what they tried to improve. Of course it is better to choose the best of the easily available solutions than not the best one, but that doesn't get anyone any heaps of utility. There are some cases where it looks like it does (market speculation), but it still does not, as the system is multiplicative and follows a specific sort of power-law distribution, and one of the fools with coin tosses is still expected on top - and still, coming up with methods for trading is a problem with an enormous number of solutions.

Replies from: handoflixue
comment by handoflixue · 2012-04-03T20:49:06.764Z · LW(p) · GW(p)

> Did you magically transform your life to 10x the awesome?

I have trouble imagining what an entire magnitude of awesomeness would even look like. I tend to intuitively model the question as "what percentage of your life are you satisfied with?" and the answer has almost always been "more than 10% of it", so you can't multiply by ten in this context. I'm not really sure of a way to phrase the question where a 10x multiplier is meaningful.

> Look at what people do here: spending a very non-trivial fraction of their time thinking about problems with a very narrowly defined range of solutions, usually below 10.

My area of greatest gain is self-awareness, dealing with various mental illnesses/abnormalities, and dealing with relationships (friends, work, romantic). One of my friends recently commented "I run into the issue when meeting new people - there's thousands of things I could say, and I can't figure out where to start!" and my immediate thought was "Oh! I learned how to fix that problem from reading the sequences."

In general, before LessWrong, I could handle basic "shut up and multiply" without any trouble - a problem with only a few solutions was generally trivial. Where I ran into issues was exactly that "huge solution space", and that is where LessWrong has really helped me.

I have definitely noticed that the sequences seem to be surprisingly well written for a wide range of rationality levels - they seem to help you build skills whether you have a little bit or a lot of rationality coming in to this. A lot of what I've personally gained from the sequences is simply that "aha!" moment of the final missing piece of the puzzle clicking in to place, because a lot of this is stuff I've spent years thinking about.

The other big thing I've gained from LessWrong is having very coherent explanations that I can share with others. It makes it very easy to quickly get one of my friends trained up sufficiently to help me bounce around ideas and come up with solutions to problems that are stumping me.

comment by Richard_Kennaway · 2012-03-25T13:25:03.429Z · LW(p) · GW(p)

Beware of motivated stopping. If someone wants to do A, because B will happen, that is only the beginning. There are several directions it's worth exploring further, with one person exploring and another prompting them with questions such as these:

Will B actually happen (or be more likely to), given A?

What makes B a desired consequence? Some further consequence that it leads to? Some larger purpose for which B is a means? Or is B terminally desirable?

At some point one has to stop, but the very first consequences one thinks of may not be that point.

comment by twentythree · 2012-03-30T18:36:26.081Z · LW(p) · GW(p)

Split the class into groups and get each group working on something they all will easily become invested in. I'm thinking have them spend 10 minutes creating/building something as a group, and make it a competition (bragging-rights only) to solidify the investment.

Before anyone has enough time to finish, offer $100 to the first person to destroy their group's creation. (Obviously, it would be best if doing so could be done in one quick motion: like if they were building a large tower out of Jenga blocks or something.)

After 5 seconds, pause and have each person self-evaluate: did they Check Consequentialism? Then, group up again to rescue consequentialist reasons to act/not act from non-consequentialist reasons for doing so, e.g. "I don't want to do selfish things" -> "I don't want to look selfish in front of my peers".

Replies from: handoflixue
comment by handoflixue · 2012-04-03T00:12:57.330Z · LW(p) · GW(p)

"Doing this would be rude and harm my social standing" is a perfectly reasonable criteria.

Consequentialism should point out that if they offer to split the $100 evenly, and everyone else in the group is somewhat rational, then they've avoided that consequence (and, as a bonus, prevented some jerk from knocking it over and keeping the $100).

Does remind me of an example from my childhood, though: http://lesswrong.com/lw/b4f/sotw_check_consequentialism/67uf

comment by maia · 2012-03-28T14:47:01.080Z · LW(p) · GW(p)

The kinds of exercises you use to teach a skill like "checking consequentialism" should probably be chosen within the greater context of a rationality curriculum. You have to know where the students are coming from at each step.

That said-- making the assumption that the students are already familiar with the theory of heuristics and biases, and just need to learn how to apply them-- I think most of these can be taught with similar kinds of hypotheticals and problems.

For checking consequentialism, you might want to focus on problems involving sunk costs. To illustrate how sunk costs can affect how someone automatically approaches a problem, split students up into groups (they will probably produce worse results as a group, which is good for this part) and give them all similar problems, with the initial conditions modified slightly. Example: "John has been working on his PhD for X years, and expects to finish in Y. He knows W and Z facts about how his degree will benefit him." Modify parts of the problem, X and Y especially, to try and prime their System 1 for a different result. Have them make a decision quickly. Reconvene, discuss the problem, point out the issues with sunk costs and why the groups did or didn't reach a different result.

This is just a starting activity; it could be followed by having students do more hypotheticals individually. The instructor needs to give a lot of feedback on the problems as they go, asking students key questions that they might not have thought of, so they'd ideally be well trained in a) rationality and b) drawing out student thinking.

Ideally the exercises wouldn't require too much instructor skill, though, so I'll think about this some more.

comment by Zaine · 2012-03-24T08:00:17.181Z · LW(p) · GW(p)

Practicing this could be fun in pairs, dissecting an acted-out scenario. Two instructors act out previously conceived scenarios, with an Influencer and a Reactor. At some point, 'twill be implied the Reactor wishes to act on the scenario itself or the knowledge presented therein; the scenario will then halt, and the students will be put in pairs to brainstorm the beneficence and maleficence of possible actions. Each student will take turns (which can be timed) being the brainstormer and the consequentialist (utilitarian?); of course the pairs can have different functions, as suggested. These just serve to outline the general idea.

For example:
INFLUENCER: Good day, sir! On your way to the place we are going?
REACTOR: Why yes, I am! However odd you too shall be going there; I wish we shall fall upon their fancy!
INFLUENCER: Oh dear! The Gods are weeping once more!
REACTOR: Dear me! I prefer not to be wet, and so I always carry an umbrella upon my person!
INFLUENCER: Indeed, I see it now grasped in your hand! Whatever shall you do? Poor me, if only I were so prepared....
Halt


BRAINSTORMER: He opens his umbrella, and uses it himself.
CONSEQUENTIALIST: The umbrella protects him, but not his companion to any significant degree. (The companion must dodge the edges so as not to be poked in the eye, and may be offended.) The umbrella may wet things once inside (and earn him the ire of some people by whom he'd rather not be thought ill of).
Both would then consider the merits and, as outlined by the parentheticals, disadvantages of these outcomes, moving on to the brainstormer's next suggestion afterward.

After each has taken a turn, the instructors would go around the room asking each pair their brainstormed actions, their potential consequences, and the positive and negative aspects of each; as Vladimir suggests, these aspects can ("should") be relativized against each other - if they do relativize, they would state the pair's preferred action and its predicted consequence. The instructors could reinforce correct applications, and constructively criticize incorrect applications, with care taken to not put any pairs down too much (using softeners, etc.: "That's quite creative! We're glad you thought of that, this is an excellent example of how even the best consequentialists can go wrong...").

comment by Incorrect · 2012-03-24T02:26:31.541Z · LW(p) · GW(p)

Identity - "I'm the sort of person who belongs in academia."

This could also be caused by confusing correlation with causation.

comment by Adam Zerner (adamzerner) · 2013-12-02T04:14:58.928Z · LW(p) · GW(p)

The best idea I have for teaching rationality (in the general sense) is to:
1) explain the concepts to people (i.e. explain the idea of consequentialist thinking, and the rationale behind it).
2) have people write essays about thoughts/ideas they have (they should be excited to write these essays), and then peer-review the essays, pointing out errors in rationality, like not supporting claims with evidence. Then have an instructor go over the essays and the evaluations to make sure they did a good job.

Also, I think what you're doing right now - crowd sourcing - is probably the best thing for idea generation.

comment by [deleted] · 2012-04-11T16:37:40.741Z · LW(p) · GW(p)

A simple 4-step process:

Step 1) Write down a list of the consequences.

Step 2) Take this list and eliminate all descriptions of actions.

Step 3) Eliminate indirect descriptions of what someone has done: honor, merit, guilt, virtue (and all their derivations); this includes personality descriptions like being a good friend, winner, loser, hero, asshole, slut, murderer.

Step 4) Eliminate all "consequences" that are defined as the fulfillment or non-fulfillment of plans/goals.
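
To make the filtering concrete, here is one way the four steps could be mechanized (a minimal sketch; the keyword lists are hypothetical stand-ins for the judgment calls a person would actually be making):

```python
# Hypothetical sketch of the 4-step filter: keep only items that describe
# concrete future world-states. Keyword sets are illustrative, not a real
# classifier.

ACTION_WORDS = {"do", "finish", "quit", "tell"}            # Step 2: actions
VIRTUE_WORDS = {"honor", "merit", "guilt", "virtue",
                "good friend", "winner", "loser", "hero"}  # Step 3: labels
GOAL_WORDS   = {"wasted", "goal", "plan", "succeed"}       # Step 4: plan-talk

def is_consequence(item: str) -> bool:
    text = item.lower()
    return not any(word in text
                   for words in (ACTION_WORDS, VIRTUE_WORDS, GOAL_WORDS)
                   for word in words)

stated = [
    "My years of work won't have been wasted",   # cut by Step 4
    "I would be seen as a winner",               # cut by Step 3
    "I earn a 50% higher salary starting 2014",  # a concrete future outcome
]
print([s for s in stated if is_consequence(s)])
# -> ['I earn a 50% higher salary starting 2014']
```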

comment by buybuydandavis · 2012-04-08T21:47:54.949Z · LW(p) · GW(p)

"What positive future events does this action cause?"

When reading this, Thomas Sowell's 3 questions came to mind.

  • Compared to what?
  • At what cost?
  • What data do we have?

Without identifying the other options, even identifying something as "the cause" becomes problematic. And it is always problematic because "the cause" is generally used as "that event which I assign the credit or blame to".

If you're only looking for the positive events, you're obviously biasing your search. You should be looking for all consequences first, good, bad, and neutral. Further, by encouraging people to start looking for either the good or bad, they're starting off with a position and attitude first, then the reality of events second.

Finally, you have to consider the data you have to back up your claims of the likely consequences. What's your confidence in the model?

Checking consequentialism should answer Sowell's three basic questions -

  • Compared to what?
    • What are my options?
  • What data do you have?
    • What probabilities do I assign to the possible states of the universe for each option?
  • At what cost?
    • What value do I assign to these different states?

All these things come out automatically if you follow the mathematical formalism for decision theory. That would be my suggestion - have people work out an everyday problem in the formalism of probabilistic decision theory. Show them that the problem has already been solved, and they just have to turn the crank.
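
For instance, turning the crank on the sunk-cost PhD example might look like this (a bare-bones sketch; every probability and utility below is invented purely for illustration):

```python
# Options -> possible states -> (probability, utility). All numbers are
# made up; assigning them honestly is the actual hard part.
options = {
    "finish PhD": {
        "research job": (0.4, 80),
        "teaching job": (0.4, 60),
        "leave field":  (0.2, 30),
    },
    "quit now": {
        "good industry job": (0.6, 70),
        "long job search":   (0.4, 20),
    },
}

def expected_utility(outcomes):
    # Sum of probability * utility over all possible states.
    return sum(p * u for p, u in outcomes.values())

for option, outcomes in options.items():
    print(option, expected_utility(outcomes))   # finish PhD 62.0, quit now 50.0

print("best:", max(options, key=lambda o: expected_utility(options[o])))
```

Sowell's three questions map directly onto the three ingredients here: the option set, the probabilities, and the utilities.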

comment by Yuu · 2012-04-08T12:46:08.677Z · LW(p) · GW(p)

One possible exercise:

  1. In pairs or in groups, one person is asked by the instructor what he or she wants to buy in the near future. For example, the person wants a new digital camera.

  2. Then the group calculates the full cost of this camera, including all accessories and expendables.

  3. After that, people in the group suggest alternative activities and expenses, based on the full cost of the digital camera, that the person could pursue instead of buying the camera. For example, the person could buy a bike and ride around, instead of buying a camera and taking pictures around.

  4. Then the person who wants the camera can check all the alternatives and find the best one. Maybe the group can also discuss why one alternative is better and another worse. For example, would a digital camera give more pleasure than a skateboard? Would using some of the proposed items give the owner more physical exercise and better health than using others?

  5. Whenever one person in the group offers an explanation or answer to the question the group is discussing, the others should check the rationality of that explanation: is it rational thinking, or is it bias? Any person can also check another person's reasoning at any time.

Another exercise: the group watches or reads some news proposed by the instructor, then analyses the news. The group should discuss the following questions:

  • Is it possible that the event in the news really happened?
  • What details of the event argue against its having happened? What details suggest it is genuine?
  • If we trace this event into the future and into the past, can we find confirmations or refutations of it?

comment by Lycia · 2012-04-07T11:34:41.883Z · LW(p) · GW(p)

Possible exercise: Take one decision, two groups. The first group works out all the details of what would happen in either case, best-case scenario. The second group works out all the details of what would happen in either case, worst-case scenario. Don't be afraid to get creative or exaggerate; have fun with it. Then write down the key points, and both groups make their decision.

Then discuss both options between groups, being more realistic.

Is there a difference in approach? Reflect as a group, what have you learned? Will you use this in future decisions? If you have time, try this again but reverse group roles, with a different decision.

Bigger decisions work better, as they have larger consequences. Try investing as a multinational, or use a current political topic. Controversy works well if you wish to teach critical thinking without judgement.

comment by epigeios · 2012-04-05T10:29:56.535Z · LW(p) · GW(p)

So, first of all:

The easiest way to help people learn this skill, I think, would be to teach people:

  • Good posture

  • How to relax and open their muscles and joints

  • How to breathe properly

And the easiest way to teach people this skill, I think, is instead to teach them about this skill. This means that exercises should be somewhat indirect. Exercises should definitely get people to experience the problem rather than learn the solution, and should only make this solution available as an option. Partly because the proposed solution is not the only solution - it is not an absolute solution - it does not work in all instances.

Lastly, teach people how to make the connection between their awareness of themselves (real situations) and the method of Checking Consequentialism. Get them to realize that it is applicable and usable almost all of the time, in good or bad situations, not just when something seems wrong or off, not just to improve their situations or make something better. If this isn't done, they won't use it when it's important.

-

Now on to reasons why most attempts at teaching this will fail, and most exercises will not reach their audience in the desired way.

This one is REALLY HARD to teach

Seriously.

The reason this is really hard is because of the concept of habits and defense mechanisms and internal realities (such as the should reality).

As you already stated, when trying to check consequentialism, most people will come up with excuses or other defense mechanisms to maintain their reality, instead of actually checking. You mentioned many ways to overcome that, but they all require recognizing that the person is trying to maintain his/her reality.

When people have a habit, they will often go to extreme lengths to maintain that habit. This is especially true for addictions.

When people have a habit of retreating to a reality (such as the should reality), the teacher has to be very careful not to give them an opportunity to retreat further (don't say "this method should work" or "you should try this" - just plain don't say "should" unless you know what you're doing).

So, we have a problem of habits and defense mechanisms and realities. This is on the level of karma. This means that there are infinite reasons why people will continue doing this, and zero reasons why they will stop. This means that logic will not work. Trying to logic people into learning this skill will not work. There are infinite barriers in the way of teaching someone this skill directly, especially of when to use it.

-

Now, with that said, it should still be possible to teach this as a skill. It's obvious that people won't use it all the time, but if they learn how to use it, they might just use it little by little.

The thing that I do instead of checking consequentialism is that whenever I notice that I am in a cycle, I exit the cycle; I stop participating in the cycle. This requires willpower that most people don't have, and an awareness of the present that most people don't have (and sometimes requires a safety net that many people don't have); however it does not require analytical skills.

The two concepts could be combined, though. It is much easier to discern a cycle than it is to determine whether an idea has gone unchecked or ducked under the radar. If someone finds themselves thinking the same thing multiple times, that person is in a cycle. If someone finds themselves compelled to do something that person has a habit of, that person is in a cycle (a minor addiction). If someone finds themselves emotionally reacting to a trigger word, that person is in a deep cycle related to that word. The thing about unchecked consequentialism is that it's really hard to catch the first time it happens (for each subject), but easy to catch the second time (and if it happens once without being fixed, it WILL happen again in a similar way).

If you want people to learn how to catch it the first time it happens (for each subject before it impacts their lives), you have to teach them how to meditate (the Taoist way, which essentially just means teach people to become aware of themselves and their surroundings). Otherwise, instead teach them how to recognize when it has happened in the past, and how to recognize when it happens again. If you do not teach them meditation, then forget about trying to get them to recognize it the first time it happens.

Man, even after writing all that, I still don't have any good ideas for exercises.

comment by John_Maxwell (John_Maxwell_IV) · 2012-03-24T04:47:23.302Z · LW(p) · GW(p)

I remember reading about an experiment performed by behavioral economists where person A divides some cash and person B either accepts their division and gets their allocated share of the money or rejects it and neither party gets their allocated share. You could say the consequentialist solution is to always accept the division of money, which most folks don't do, so this could make a good trial exercise. On the other hand, if person A is someone person B is going to have repeated interactions with, one could argue that the social capital of training person A to divide things fairly might be worth forgoing cash... So maybe it wouldn't work in a scenario where the class meets again and again? (Unless things were anonymized somehow...)

There is also the Newcomb's Problem aspect to this, where having taught the class about consequentialism will make it appear as though you have made everyone who is Person B worse off.

Reading up on experiments behavioral economists have done in general seems like it could be a good source of ideas.
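
(The game described is the Ultimatum Game.) For concreteness, a toy version is easy to write down - a minimal sketch, where the responder's "fairness threshold" is an invented parameter rather than an empirical estimate:

```python
# Toy one-shot Ultimatum Game: the proposer offers part of a pot; the
# responder accepts (both keep their shares) or rejects (both get nothing).
POT = 10

def play(offer, fairness_threshold):
    """Return (proposer payoff, responder payoff) for a given offer."""
    if offer >= fairness_threshold:
        return POT - offer, offer
    return 0, 0

# A "pure consequentialist" responder has threshold 0: any positive offer
# beats nothing. Real subjects often reject low offers anyway.
print(play(1, fairness_threshold=0))  # (9, 1): offer accepted
print(play(1, fairness_threshold=3))  # (0, 0): "unfair" offer rejected
```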

Replies from: fubarobfusco, orthonormal, ciphergoth, John_Maxwell_IV, Vaniver
comment by fubarobfusco · 2012-03-24T07:06:46.379Z · LW(p) · GW(p)

> I remember reading about an experiment performed by behavioral economists

http://en.wikipedia.org/wiki/Ultimatum_game

comment by orthonormal · 2012-03-24T16:19:58.283Z · LW(p) · GW(p)

I predict that if a stranger tried a one-shot Ultimatum game against Eliezer with a 99-1 split in the stranger's favor, EY would refuse it on TDT grounds. Thus any person who knows Eliezer subscribes to TDT wouldn't offer a manifestly unfair split to him.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-03-26T00:15:00.283Z · LW(p) · GW(p)

You could structure the game so that the person making the offer and the person receiving it were paired randomly after the offer was specified.

comment by Paul Crowley (ciphergoth) · 2012-03-24T11:18:02.798Z · LW(p) · GW(p)

Right, this article appears on the surface to endorse causal decision theory, which we know Eliezer doesn't in fact endorse. Mostly that's fine, but there are occasions where CDT will make the wrong call, such as the examples you point out.

comment by John_Maxwell (John_Maxwell_IV) · 2012-03-24T05:18:30.834Z · LW(p) · GW(p)

I can't help but think that the best way to actually get people to be consequentialist is similar to the way to actually get people to be atheists: convince them that all the cool kids are consequentialist. This probably contributed to me becoming more consequentialist, in the form of reading about behavioral economics studies where people did silly and irrational things and wanting to not be one of the silly and irrational ones.

comment by Vaniver · 2012-03-24T05:33:58.863Z · LW(p) · GW(p)

> You could say the consequentialist solution is to always accept the division of money, which most folks don't do, so this could make a good trial exercise.

I would strongly recommend against going this direction. Consequentialism is about methodology, not particular results. As soon as you say "the consequentialist always accepts" the clever students will get a funny look on their face, as they try to cost out and compare the immediate gain and long-term loss.

Consider Kohlberg's stages of moral development, which don't care about the conclusion drawn but do care about the stated justification for the conclusion.

comment by beoShaffer · 2012-03-24T02:00:14.877Z · LW(p) · GW(p)

I haven't had a chance to test it much, but I think I have an idea. Frame the question as: what will happen if I do X that won't happen if I do Y? At this point it seems like it should be possible to reuse the mental processes for making beliefs pay rent. Basically, typecast "I do X" and "I do Y" to beliefs. Then see what experiences I anticipate as a result of the beliefs "I do X" and "I do Y". Then determine which set of anticipated experiences has higher utility.
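
One loose way to picture that typecast (a sketch only; the anticipated experiences and utilities are invented for illustration):

```python
# Treat "I do X" and "I do Y" as beliefs, diff their anticipated
# experiences, and compare the utility of the differences.
anticipate = {
    "finish PhD": {"two more years of grinding", "PhD on resume"},
    "quit now":   {"immediate job search", "free evenings"},
}
utility = {
    "two more years of grinding": -20, "PhD on resume": 50,
    "immediate job search": -10, "free evenings": 15,
}

x, y = "finish PhD", "quit now"
only_x = anticipate[x] - anticipate[y]  # anticipated under X but not Y
only_y = anticipate[y] - anticipate[x]  # anticipated under Y but not X

score = lambda exps: sum(utility[e] for e in exps)
print(x if score(only_x) > score(only_y) else y)  # 30 vs 5 -> "finish PhD"
```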

comment by Osmium_Penguin · 2012-03-24T06:58:20.880Z · LW(p) · GW(p)

The day-to-day cognitive skills I've mastered most completely (I will not say "rationalist skills," because this is true of my countless irrational skills too) are the ones which I learned during a moment of strong emotion — any emotion, excitement or curiosity or joy or surprise or depression or fear or betrayal.

In the case of this particular skill, it was betrayal. I brought it on myself — the details aren't important; suffice it that I spent two weeks living in the "should-universe" (I like this term) before a rude reminder of reality — but the emotion, the physical neuroendocrine experience of betrayal, was quite real. And I've been able to return to it ever since, and if I'm ever in a situation where I might be working from a cached plan, I can relive a hint of it and ask myself, "Now, you don't want to feel that again, do you?"

Unfortunately, this experience strongly ties the five-second skill of "check consequentialism" to the emotion of betrayal in my mind. It is very easy for me to construct social experiments in which the teacher radically betrays her students, and then turns around and says, "Don't let anyone do that to you again!" But that is horrible teaching. It's a lot more difficult for me to imagine what "check consequentialism" would feel like if it carried a strictly positive emotional association, and then extrapolate outward to what kind of social situation would provide that emotional/cognitive link.

Students must abandon a cached plan, and evaluate the real-world consequences of their actions instead, at precisely the moment they get a strong positive emotional charge. Preferably "fun." Preferably in the sense of a party game, not a strategy game: both because people who have learned to win without disrupting social bonds (or who care more about winning than about socialization) have often already learned this skill, and because the moment I construct "winning" as a state which disrupts social bonds, I've set up a false dilemma which misleads my students about what rational thought actually is.

But what's the chain of causation? A dispassionate experimenter times the payoff to correlate with the decision? That seems awfully Pavlovian. Leaving the plan causes a reward which provides an emotional payoff? Maybe, but if a student only leaves the plan in expectation of reward, they haven't actually learned anything beyond the latest professorial password. The excitement of getting the right answer to a puzzle inspires leaving the plan? I suspect this is the way to go. But then what sort of puzzle?

I'm going to press the "Comment" button now, even though I don't think I've contributed much beyond a restatement of your original dilemma. Perhaps having done so, I'll think of some specific scenarios overnight.

comment by Sum1 · 2012-03-24T06:08:38.717Z · LW(p) · GW(p)

Well, this might be a completely trivial suggestion, but if the point is to get people using consequentialist thinking in their lives, why not have them each pick some big, important decision in their lives (either an upcoming one or an ongoing one), preferably one they aren't entirely set on already or are uneasy with, so they'd be more open to changing their minds. Then they get into groups and take turns discussing their decisions and options (hopefully without applying any judgement at this stage), with the other members trying to come up with as many new options as possible. Then they discuss the outcomes and benefits of each option and try to find the best one, not moving on until the entire group feels that the last person's problem was solved adequately. That way, not only do they practice consequentialism in a practical way, but they might also get immediate help with a current problem, which could end up being useful on its own and make consequentialist thinking seem more useful.

comment by [deleted] · 2012-04-05T08:17:54.972Z · LW(p) · GW(p)

A fun and social exercise could involve Facebook timeline!