Misleading the witness

post by Bo102010 · 2009-08-09T20:13:52.895Z · LW · GW · Legacy · 116 comments

Related: Trust in Math

I was reading John Allen Paulos' A Mathematician Plays the Stock Market, in which Paulos relates a version of the well-known "missing dollar" riddle. I had heard it once before, but only vaguely remembered it. If you don't remember it, here it is:

Three people stay in a hotel overnight. The innkeeper tells them that the price for three rooms is $30, so each pays $10.

After the guests go to their rooms, the innkeeper realizes that there is a special discount for groups, and that the guests' total should have only been $25.

The innkeeper gives a bellhop $5 with the instructions to return it to the guests.

The bellhop, not wanting to get change, gives each guest $1 and keeps $2.

Later, the bellhop thinks "Wait - something isn't right. Each guest paid $10. I gave them each back $1, so they each paid $9. $9 times 3 is $27. I kept $2. $27 + $2 is $29. Where did the missing dollar go?"

I remembered that the solution involves trickery, but it still took me a minute or two to figure out where the trick lies. At first, I started mentally tracking the dollars in the riddle, trying to see where one got dropped so that their sum would come to $30.

Then I figured it out. The story should end:

Later, the bellhop thinks "Wait - something isn't right. Each guest paid $10. I gave them each back $1, so they each paid $9. $9 times 3 is $27. The cost for their rooms was $25. $27 - $25 = $2, so they collectively overpaid by $2, which is the amount I kept. Why am I such a jerk?"

I told my fiancée the riddle, and asked her where the missing dollar went. She went through the same process as I did, looking for a place in the story where $1 could go missing.

It's remarkable to me how blatantly deceptive the riddle is. The riddler states or implies at the end of the story that the dollars paid by the guests and the dollars kept by the bellhop should be summed, and that that sum should be $30. In fact, there's no reason to sum the dollars paid by the guests and the dollars kept by the bellhop, and no reason for any accounting we do to end up with $30.
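The correct accounting can be sketched in a few lines (a minimal illustration; variable names are mine):

```python
# Track where each dollar actually ends up.
paid_by_guests = 3 * 10   # $30 handed over initially
refunded = 3 * 1          # $1 returned to each guest
kept_by_bellhop = 2
room_cost = 25

net_paid = paid_by_guests - refunded  # $27: what the guests are out

# The $27 the guests paid splits exactly into the room cost
# and the bellhop's take -- nothing is missing:
assert net_paid == room_cost + kept_by_bellhop  # 27 == 25 + 2

# The riddle's bogus sum adds the bellhop's $2 *on top of* the $27,
# double-counting money already inside the $27:
bogus_sum = net_paid + kept_by_bellhop  # 29 -- a meaningless total
```

The $29 is not a dollar short of anything; it is simply not a quantity with any meaning in the story.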

This contrasts somewhat with the various proofs that 1 = 2, in which the misstep is hidden somewhere within a chain of reasoning, not boldly announced at the end of the narrative.

Both Paulos and Wikipedia give examples with different numbers that make the deception in the missing dollar riddle more obvious (and less effective). In the case of the missing dollar riddle, the fact that $25, $27, and $30 are close to each other makes following the incorrect path very seductive.

This riddle made me remember reading about how beginning magicians are very nervous in their first public performances, since some of their tricks involve misdirecting the audience by openly lying (e.g., casually pick up a stack of cards shuffled by a volunteer, say "Hmm, good shuffle" while adding a known card to the top of the stack, hand the deck back to the volunteer, and then boldly announce "notice that I have not touched or manipulated the cards!"1). However, they learn to be more comfortable once they find out how easily the audience will pretty much accept whatever false statements they make.

Thinking about these things makes me wonder about how to think rationally given the tendency for human minds to accept some deceptive statements at face value. Can anyone think of good ways to notice when outright deception is being used? How could a rationalist practice her skills at a magic show?

How about other examples of flagrant misdirection? I suspect that political debates might be able to make use of such techniques (I think that there might be some examples in the recent debates over health care reform accounting and the costs of obesity to the health care system, but I haven't been able to find any yet.)

Footnote 1: I remember reading this example very recently, maybe at this site. Please let me know whom to credit for it.

116 comments

Comments sorted by top scores.

comment by billswift · 2009-08-10T13:04:00.068Z · LW(p) · GW(p)

These sorts of brain-teasers are of limited help in developing your critical thinking skills for dealing with real world problems. Here the problem is presented to you and you just have to figure out what went wrong with a train of thought. In the real world, the BIG problem is noticing that there is a difficulty in the first place.

Replies from: Bo102010
comment by Bo102010 · 2009-08-10T13:48:07.623Z · LW(p) · GW(p)

I think what's striking about this example is that it's not just a misstep in a train of thought; it's the riddler flat out lying to the riddled, forcing them into a certain pattern of thought.

comment by D_Alex · 2009-08-10T03:40:13.550Z · LW(p) · GW(p)

From advertising, via a friend: apparently a specific technique for advertising a product with a known weakness is to promote the weakness as a strength. E.g., when early consumer feedback shows that the taste of a particular toothpaste is disliked, the response may be to put a prominent "Great New Taste!!!" label on the pack.

Replies from: PhilGoetz, dclayh
comment by PhilGoetz · 2009-08-10T18:52:58.079Z · LW(p) · GW(p)

This was made famous by Heinz Ketchup in the 1970s. They surveyed consumers and found they were losing market share because their ketchup was so hard to get out of the bottle because it was so thick. So they made a series of "It's Slow Good!" commercials implying that pouring slower was, for some reason, a good thing. And it worked.

comment by dclayh · 2009-08-10T06:57:23.774Z · LW(p) · GW(p)

Otherwise known as "it's not a bug, it's a feature".

comment by SilasBarta · 2009-08-10T00:17:16.396Z · LW(p) · GW(p)

While I'd seen the missing dollar problem before, I think I have a new appreciation for it now. I seem to recall puzzle books presenting this problem, but even when they present the solution, they present it in terms of "here's where the missing dollar is". But as you, Wikipedia, and Paulos point out, the whole problem is that the dollar is only "missing" relative to an invalid comparison.

So, to solve the problem by finding a missing dollar is to fail to learn from it.

This riddle made me remember reading about how beginning magicians are very nervous in their first public performances, since some of their tricks involve misdirecting the audience by openly lying... they learn to be more comfortable once they find out how easily the audience will pretty much accept whatever false statements they make.

It makes me wonder how dangerous magicians can become in their regular lives.

comment by anonym · 2009-08-09T21:24:53.070Z · LW(p) · GW(p)

One technique is to look carefully for fallacies and/or gaps in the reasoning by summarizing the key theses of the argument and then considering what assumptions (and definitions) have to be made for the theses to be accepted and what has to be true for each to follow deductively from what has already been given. A book on "critical thinking" (e.g., this one) will have lots of exercises to develop this kind of skill. They typically have lots of examples drawn from politics just because political discussion is so frequently chock full of fallacies and bad arguments.

When you're trying to be critical of your own arguments and to identify cognitive biases at work, there are many simple and practical techniques mentioned in The Psychology of Judgment and Decision Making. To give just one example, poor calibration can be improved and overconfidence can be attenuated by considering the reasons why what you believe isn't so might actually be so. The mere act of trying to find reasons for the hypotheses you don't believe will make you less overconfident in the hypothesis you do believe. In general, iterating through the biases discussed in that book (and many others), and considering how each bias might apply in the particular circumstance, is a widely applicable and very useful technique.

comment by Vladimir_Nesov · 2009-08-09T22:33:31.433Z · LW(p) · GW(p)

I don't know, I felt the correct sign the first time I read it. I also didn't get confused by the cognitive reflection test (in the sense that there is no confusion, the correct way of seeing the problem is all there is). It's really hard to imagine how a person with math training can miss that.

But from what I've heard, a sizable portion of math students still manage to get confused by these. Tracing the analogy to cognitive biases, there may be a qualitative difference between a person who just knows about them "in theory", even having done a lot of reading on the topic (the "typical" math student), and a person who has thought about the techniques at every opportunity for a number of years.

Replies from: PhilGoetz, thomblake, MichaelVassar, Jonathan_Graehl
comment by PhilGoetz · 2009-08-10T02:51:59.417Z · LW(p) · GW(p)

Re: the linked article about the cognitive reflection test:

80 percent of high-scoring men would pick a 15 percent chance of $1 million over a sure $500, compared with only 38 percent of high-scoring women, 40 percent of low-scoring men and 25 percent of low-scoring women.

Wow. That's the most mind-blowing thing I've read in a while. I can't think of a good explanation for anyone picking the $500, let alone the male-female difference. Maybe they assume that someone might actually give them $500, but the $1M is a scam. And how does this square with the idea that poor people play the lottery more?

Princeton students scored a mean of 1.63. Heh.

Replies from: Vladimir_Golovin, jimrandomh, MattFisher, thomblake, None, Nubulous, Alicorn, timtyler
comment by Vladimir_Golovin · 2009-08-10T05:54:00.514Z · LW(p) · GW(p)

I can't think of a good explanation for anyone picking the $500

Say you're starving, and if you don't get a meal today, you'll die. In such situations, the choice between a 15% chance of $1 million and a sure $500 boils down to a choice between a 15% chance of surviving today and a 100% chance of surviving today (assuming that the meal costs less than $500).

Perhaps the people who chose $500 operate in this 'starvation mode' by default.

Replies from: randallsquared, ShardPhoenix, orthonormal
comment by randallsquared · 2009-08-11T03:07:56.983Z · LW(p) · GW(p)

The general term for "people who operate in starvation mode" is "the poor".

comment by ShardPhoenix · 2009-08-15T16:17:54.658Z · LW(p) · GW(p)

I doubt this is the case for most of the people who would take the $500 - I'd assume it's more that most of them couldn't or just didn't think to calculate the expectation value of a 15% chance at a million.

Replies from: Alicorn
comment by Alicorn · 2009-08-15T17:41:50.258Z · LW(p) · GW(p)

I think people are flipping the offer in their minds and comparing a sure $500 to an 85% chance of zilch.

comment by orthonormal · 2009-08-10T20:23:28.852Z · LW(p) · GW(p)

If that's actually the way you think, I've got some food to sell you.

comment by jimrandomh · 2009-08-10T12:27:33.901Z · LW(p) · GW(p)

Just because someone tells you that something has a 15% chance does not make it so. If someone offers you a 15% chance at $1M for anything less than $150k, then you should be 95% confident that they will try to cheat somehow.

Replies from: PhilGoetz, orthonormal, anonym
comment by PhilGoetz · 2009-08-10T15:20:08.305Z · LW(p) · GW(p)

Sure; but it's posed as a hypothetical. The participants know there's no real money involved. Are their conscious selves unable to prevent a subconscious defense against being scammed?

Replies from: thomblake, Nick_Tarleton, MichaelVassar, Nick_Tarleton
comment by thomblake · 2009-08-10T22:17:06.275Z · LW(p) · GW(p)

If it's the right answer in reality, then it's the right answer in a hypothetical. People use their actual cognitive faculties for pondering hypotheticals, not imaginary ones.

Replies from: jajvirta, Jonni
comment by jajvirta · 2009-08-11T06:42:45.897Z · LW(p) · GW(p)

This sounds like something that could be tested in a real experiment. We don't have surplus millions floating around to give away, but surely there has to be a way to arrange things so that different subjects believe the offer is either just hypothetical or a real proposition.

comment by Jonni · 2011-09-03T14:51:01.666Z · LW(p) · GW(p)

They may do, but they are still missing many of the physical reactions one might have to genuinely being offered large sums of money - excitement, adrenalin etc - and these are bound to have some effect on people's decision making processes.

Perhaps a way around this would be to conduct several thought experiments with a subject in one sitting, and tell them beforehand that 'one of the offers in these thought experiments is real and you will receive what you choose - although you will not know which one until the end of the experiment'.

This would be a good way to induce their visceral reactions to the situation, and of course, disappointingly perhaps, a more modest-sum-involving thought experiment at the end could provide them with their dividend.

Also worth noting: Deal or No Deal (UK version) demonstrates the variety of reactions and strategies people have to this sort of proposition. The US version is just silly though :)

comment by Nick_Tarleton · 2009-08-11T00:20:03.429Z · LW(p) · GW(p)

Sure; but it's posed as a hypothetical.

Maybe they don't have the same concept as we do of a "hypothetical".

Are their conscious selves unable to prevent a subconscious defense against being scammed?

If their conscious selves could shut down the defense, scammers could convince it to. This kind of sphexish paranoia is adaptive, if you're the sort of person who scores low on the cognitive reflection test. Maybe.

comment by MichaelVassar · 2009-08-11T05:47:43.143Z · LW(p) · GW(p)

Most people don't really get hypotheticals. Even most high IQ people seem not to.

comment by Nick_Tarleton · 2009-08-10T23:47:15.538Z · LW(p) · GW(p)

Are their conscious selves unable to prevent a subconscious defense against being scammed?

If anything could let down the defense, scammers could exploit it – so the semiconscious reasoning might go. Epistemic paranoia is adaptive when you're bad at evaluating arguments.

comment by orthonormal · 2009-08-10T20:28:44.130Z · LW(p) · GW(p)

We've had this argument before, and it still looks to me like this couldn't account for the full effect of risk aversion. The fact that scammers regularly succeed means that people don't usually base their reasoning on that sort of suspicion.

Replies from: randallsquared
comment by randallsquared · 2009-08-11T03:15:44.790Z · LW(p) · GW(p)

People who feel secure do not, and people who do not feel secure do. Unfortunately, to someone in the latter camp, genuine opportunity really looks like a scam; it's "too good to be true".

comment by anonym · 2009-08-15T18:00:54.661Z · LW(p) · GW(p)

This is a really good point. I'd feel much more confident in these sorts of results if the questions were prefaced by disclaimers stating that there is no chance whatsoever of getting ripped off, that the random decision that determines the win or loss is absolutely guaranteed to be secure and accurate, that the $1M is tax-free, will be given in secret, doesn't need to be reported to the government, etc.

comment by MattFisher · 2009-08-22T09:50:13.356Z · LW(p) · GW(p)

One reason people might pick the $500 is that they'll come out better off than the 85% of people who take the more rational course and lose. It is little comfort to be able to claim to have made the right decision when everyone who made the less rational decision is waving a stack of money in your face and laughing at the silly rationalist. People don't want to be rich - they just want to be richer than their next door neighbour.

comment by thomblake · 2009-08-10T22:26:39.443Z · LW(p) · GW(p)

I can't think of a good explanation for anyone picking the $500

I agree with Vladimir Golovin. I definitely think this way - I can think immediately of how useful $500 would be to me, but cannot think of many ways to use a 15% chance of $1M.

Well, I can think of one way - I would take the 15% chance of $1M and then sell it to one of you suckers for $100,000.

Replies from: Alicorn, barrkel
comment by Alicorn · 2009-08-10T23:12:28.295Z · LW(p) · GW(p)

Show of hands - who really has $100,000 that they could free up to buy this from thomblake? Personally, I don't think I actually know what 100k is worth to me because I have never had my hands on that much money.

Replies from: MichaelVassar, CronoDAS
comment by MichaelVassar · 2009-08-11T05:45:00.341Z · LW(p) · GW(p)

Actually, if he has the $1M, I'm in. I don't have $100K liquid in any normal sense but I could certainly raise it for such a deal and divide the risk and return up among a few people.

Replies from: Alicorn
comment by Alicorn · 2009-08-11T16:00:39.548Z · LW(p) · GW(p)

He does not have (even in this hypothetical situation) a million bucks. Hypothetically, he's being offered a 15% chance of winning a million bucks.

Incidentally, in a staggering display of risk aversion, I asked a friend how much she'd pay for a 15% chance of a million dollars and she said maybe twenty bucks because those did not seem like "very good odds" to her. -.-

comment by CronoDAS · 2009-08-10T23:39:58.033Z · LW(p) · GW(p)

I don't have $100,000. I only have about $30,000.

Just how much is a one in seven chance of a million dollars worth to everyone here, anyway? (Offer me a near certainty of $70,000 and I'd start to have second thoughts about taking the gamble.)

comment by barrkel · 2009-08-11T05:30:12.739Z · LW(p) · GW(p)

Exactly. If someone has $1.1M, spending $0.1M on a 15% chance of $1M is a good deal. Someone who has $0.05M and has to go into debt to buy the 15% chance is very probably insane.

comment by [deleted] · 2009-08-10T03:31:13.756Z · LW(p) · GW(p)

Wow. That's the most mind-blowing thing I've read in a while.

Agreed. That made my eyes water quite a bit. Alicorn's large-amounts-of-money-can-have-negative-utility explanation snapped me out of it.

Just wait until Eliezer sees this...

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-10T05:02:38.055Z · LW(p) · GW(p)

AAIIIIIEEEEEAAARRRRRGGGHHH.

Just when you think your species can't possibly get any more embarrassing.

Replies from: None, Eliezer_Yudkowsky, MichaelVassar, dtgmrk
comment by [deleted] · 2009-08-10T05:24:28.041Z · LW(p) · GW(p)

Precisely the reaction I expected! This model of despair produced by the Singularity Institute for Eliezer Yudkowsky is matching quite well. A rigorous theory of Eliezer Yudkowsky can't be far off.

--Delta, your friendly neighborhood Friendly AI

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-11T07:12:54.089Z · LW(p) · GW(p)

Okay, I tested this on a couple of uninvolved bystanders and yes, they would take the $500 over the 15% chance of $1m. Guess it's true. Staggers the mind.

Replies from: orangecat, Hans
comment by orangecat · 2009-08-13T01:20:28.120Z · LW(p) · GW(p)

I tried it on two women at work and they both went for the million, one with no hesitation and the other after maybe 10 seconds. Although they both have some background in finance and are probably 1 to 2 standard deviations above average IQ.

Replies from: Eliezer_Yudkowsky, Bo102010
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-13T15:09:02.281Z · LW(p) · GW(p)

That's not very surprising. You could see if they passed all three questions on the reflection test.

comment by Bo102010 · 2009-08-13T02:25:52.262Z · LW(p) · GW(p)

My fiancée (who has a more advanced degree than I) thought I was trying to trick her and made me restate the problem several times.

comment by Hans · 2009-08-11T13:29:13.176Z · LW(p) · GW(p)

As previous comments have said, it would be possible to sell the 15% chance for anything up to $150k. Once people realise that the 15% chance is a liquid asset, I'm sure many will change their mind and take that instead of the $500.

What does this mean? If the 15% chance is made liquid, that removes nearly all of the risk of taking that chance. This leads me to believe that people pick the $500 because they are, quite simply, (extremely) risk-averse. Other explanations (diminishing marginal utility of money, the $1 million actually having negative utility, etc.) are either wrong, or they are not a large factor in the decision-making process.

Replies from: conchis
comment by conchis · 2009-08-11T13:59:02.050Z · LW(p) · GW(p)

Note that the standard explanation for risk-aversion just is diminishing marginal utility (where utility is defined in the decision-theoretic sense, rather than the hedonic sense). However, Matt Rabin pretty convincingly demolishes this in his paper Diminishing marginal utility of wealth cannot explain risk aversion.

comment by MichaelVassar · 2009-08-11T05:49:16.636Z · LW(p) · GW(p)

OK, you have to think like reality too. How many times am I going to post this same sentence on one thread?

comment by dtgmrk · 2009-08-22T00:49:34.048Z · LW(p) · GW(p)

Off topic, but I just wanted to draw your attention to a comment made about you here:

http://www.kurzweilai.net/mindx/frame.html

Over halfway down the page is a topic with your name. In this topic one commenter says unkind things about you. Your response?

comment by Nubulous · 2009-08-24T22:27:12.863Z · LW(p) · GW(p)

I can't think of a good explanation for anyone picking the $500

For a person who doesn't expect to get many more similar betting chances, the expectation value of the big win is unphysical.

Replies from: Christian_Szegedy
comment by Christian_Szegedy · 2009-08-24T23:03:33.432Z · LW(p) · GW(p)

Yes. That's probably one of the reasons.

I also read somewhere that humans generally don't distinguish much between probabilities lower than 5%. That is, everything below 5% is treated as a low-probability event.

Even I, with good mathematical training, guess that I would prefer $100K at 100% to $1B at 1%.

Although the second alternative has 100X the "expected" payoff, I don't "expect" myself to be lucky enough to get it. :)

And although I'd definitely prefer $1M@15% to $500@100%, if you multiplied both by a thousand I think I'd take $500K@100% rather than $1B@15% (in that case, of course, Bill Gates would laugh at me... :) )

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-08-25T00:04:51.220Z · LW(p) · GW(p)

Well, also part of it is that for most people, utility isn't linear in money.

Imagine someone starving, on the verge of death or such. This offer is more or less their very last chance at this particular moment to be able to survive.

500$ with certainty means high probability of immediate survival. 1 million dollars with 15% chance means ~15% chance of survival.

500$ can potentially get enough meals and so on to buy enough time to get more help.

Again, not everyone is in this situation, obviously. But this is a simple construct to demo that utility isn't linear in money and that picking the 500$ can, at least in some cases, be rather more rational than the initial naive computation may suggest at first. (Shut up and multiply UTILITIES and probabilities, rather than money and probability. :))

Having said all that, probably for most people in that study, picking 500$ was the wrong choice. :)
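Psy-Kosh's "multiply utilities, not money" point can be sketched concretely. Assuming log utility of total wealth (a standard diminishing-returns assumption; nothing in the comment specifies the curve), whether the sure $500 beats the 15% shot at $1M depends on how much you already have:

```python
import math

def eu_of_choice(wealth, sure=500, prize=1_000_000, p=0.15):
    """Expected log-utility of the sure $500 vs. the 15% gamble,
    starting from the given wealth (must be > 0)."""
    eu_sure = math.log(wealth + sure)
    eu_gamble = p * math.log(wealth + prize) + (1 - p) * math.log(wealth)
    return eu_sure, eu_gamble

# Near-broke ($100 to your name): the sure $500 has higher expected utility.
s, g = eu_of_choice(100)
assert s > g

# Comfortable ($100K in the bank): the gamble wins.
s, g = eu_of_choice(100_000)
assert g > s
```

Under this (assumed) utility curve, taking the $500 is the expected-utility-maximizing choice precisely for people near "starvation mode", matching Vladimir Golovin's suggestion above.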

Replies from: MarkusRamikin, Christian_Szegedy
comment by MarkusRamikin · 2011-07-03T08:44:31.145Z · LW(p) · GW(p)

Well, also part of it is that for most people, utility isn't linear in money.

Yeah. This assumption of linearity is annoyingly common; I wish more people were aware of the problems with it when contructing their various thought experiments. Not just with money, either.

comment by Christian_Szegedy · 2009-08-25T00:21:20.365Z · LW(p) · GW(p)

I don't think you can model my preferences with expected value computation based on a money -> utility mapping.

E.g., I'd definitely prefer 100M@100% to any amount of money at less than 100%. Still, I'd prefer 101M@100% to 100M@100%.

I think that my preference is quite defensible from a rational point of view, still there is no real valued money to utility mapping that could make it fit into an expected utility-maximization framework.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-08-25T00:37:50.837Z · LW(p) · GW(p)

Well, you can use money to do stuff that does have value to you. So while there isn't a simple utility(money) computation, in principle one might have utility(money|current state of the world)

ie, there's a sufficiently broad set of things you can do with money such that more money will, for a very wide range of amounts of money, give you more opportunity to bring reality into states higher in your preference ranking.

and wait... are you saying you'd prefer 100 million dollars at probability =1 to, say, 100 billion dollars at probability = .99999?

Replies from: Christian_Szegedy
comment by Christian_Szegedy · 2009-08-25T00:54:06.125Z · LW(p) · GW(p)

Call me a chicken, but yes: I would not risk going out empty handed even in 1 out of 100000 if I could have left with $100M.

This kind of super-cautious mindset can't be modeled with any real valued money X (current state of the world) -> utility type of mapping.

Replies from: Alicorn, Eliezer_Yudkowsky, Vladimir_Nesov, rhollerith_dot_com
comment by Alicorn · 2009-08-25T01:08:43.327Z · LW(p) · GW(p)

The technical term is "risk-averse", not "chicken".

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-25T02:54:38.628Z · LW(p) · GW(p)

This kind of super-cautious mindset can't be modeled with any real valued money X (current state of the world) -> utility type of mapping.

If you would trade a .99999 probability of $100M for a .99997 probability of $100B, then you're correct - you have no consistent utility function, and hence you can be money-pumped by the Allais Paradox.

Replies from: SilasBarta, Christian_Szegedy
comment by SilasBarta · 2009-08-25T03:26:55.405Z · LW(p) · GW(p)

And as I've argued before, that only follows if a) the subject is given an arbitrarily large number of repeats of that choice, and b) their preference for one over the other is interpreted as them writing an arbitrarily large number of option contracts trading one for the other.

If -- as is the case when people actually answer the Allais problem as presented -- they merely show a one-shot preference for one over the other, it does not follow that they have an inconsistent utility function, or that they can be money-pumped. When you do the experiment again and again, you get the expected value. When you don't, you don't.

If making the "wrong" choice when presented with two high-probability, high-payoff lottery tickets is exploitation, I don't want to be empowered. (You can quote me on that.)

Replies from: MarkusRamikin, thomblake
comment by MarkusRamikin · 2011-09-18T10:52:16.992Z · LW(p) · GW(p)

This is what I'm thinking, too. Curious, since you say you've argued this before, did Eliezer ever address this argument anywhere?

Replies from: SilasBarta
comment by SilasBarta · 2011-09-20T20:22:59.404Z · LW(p) · GW(p)

Yes, but I can't find it at the moment -- it came up later, and apparently people do get money-pumped even on repeated versions. The point about what inferences you can draw from a one-shot stands though.

comment by thomblake · 2009-08-26T18:28:02.503Z · LW(p) · GW(p)

If making the "wrong" choice when presented with two high-probability, high-payoff lottery tickets is exploitation, I don't want to be empowered. (You can quote me on that.)

Not very quotable, but I may be tempted to do so anyway.

Replies from: SilasBarta
comment by SilasBarta · 2009-08-26T18:54:16.361Z · LW(p) · GW(p)

Aw, come on! Don't you see? "If X is wrong, I don't want to be right", but then using exploitation and empowerment as the opposites instead?

Anyway, do you get the general point about how the money pump only manifests in multiple trials over the same person, which weren't studied in the experiments, and how Eliezer_Yudkowsky's argument subtly equates a one-time preference with a many-time preference for writing lots of option contracts?

Replies from: thomblake
comment by thomblake · 2009-08-26T20:10:15.138Z · LW(p) · GW(p)

Yep.

Replies from: SilasBarta
comment by SilasBarta · 2009-08-26T21:59:46.356Z · LW(p) · GW(p)

Rockin.

comment by Christian_Szegedy · 2009-08-25T03:42:50.652Z · LW(p) · GW(p)

The above example had no consistent (real-valued) utility function regardless of my 100M@.99999 vs. 100B@.99997 preference.

BTW, whatever that preference would be (I am a bit unsure, but I think I'd still take the 100M, as not doing so would triple my chances of losing it), I did not really get the conclusion of the essay. At least I could not follow why being money-pumped (according to that definition of "money-pumped") is so undesirable from any rational point of view.

comment by Vladimir_Nesov · 2009-08-25T07:31:16.550Z · LW(p) · GW(p)

This kind of super-cautious mindset can't be modeled with any real valued money X (current state of the world) -> utility type of mapping.

Yes it can: use the mapping U:money->utils such that U(x) is increasing for x<$100M (probably concave) and U(x) = C = const for x>=$100M. Then expected utility EU($100M@100%) = C*1 = C, and also EU($100B@90%) = C*0.9 < EU($100M@100%). But one of the consequences of expected utility representation is that now you must be indifferent between 20% chance at $100M and 20% chance at $100B.

Replies from: Christian_Szegedy
comment by Christian_Szegedy · 2009-08-29T00:48:55.863Z · LW(p) · GW(p)

I also made the requirement that 101M@100% should be preferred to 100M@100%.

Your utility function of U(x)=C for x>100M can't satisfy that.

comment by RHollerith (rhollerith_dot_com) · 2009-08-28T19:26:18.706Z · LW(p) · GW(p)

Call me a chicken, but yes: I would not risk going out empty handed even in 1 out of 100000 if I could have left with $100M.

This kind of super-cautious mindset can't be modeled with any real valued money X (current state of the world) -> utility type of mapping.

Like Vladimir Nesov pointed out, that is false -- not the preference being expressed, of course, but the statement that the preference can't be modeled with the mapping.

Now first let me make it clear that I disapprove of the atmosphere you find in some academic science departments where making a false statement is taken to be a mortifying sin. That kind of attitude is a big barrier to teaching and to learning. Since teaching and learning is a big part of what we want to do here, we should not think poorly of a participant for making a false statement.

But I am a little worried that in 88 hours since the false statement was made, no one downvoted the false statement (or if they did, the vote was canceled out by an upvote). And I am a little worried that in the 81 hours since his reply, no one upvoted Nesov's reply in which he explains why the statement is false. (I have just cast my votes on these 2 comments.)

It is good to have an atmosphere of respect for people even if they make a mistake, but it is bad IMHO when most readers ignore a false statement like the one we have here, where there is no doubt about its falseness (it is not open to interpretation) and it involves knowledge central to the mission of the community (here, the most elementary decision theory). Note that elementary decision theory is central to the rationality mission of Less Wrong and to its improve-the-global-situation-with-AI mission.

Moreover, if you not only read a comment, but also decide to reply to it, well, then IMHO, you should take particular care to make sure you understand the comment, especially when the comment is as short and unnuanced as the one under discussion. But before Nesov's reply, two people replied to the comment under discussion without showing any sign that they recognise that the one statement of fact made in the comment is false. One reply (upvoted 3 times) reads, 'The technical term is "risk-averse", not "chicken"'. The other introduces the Allais paradox, which is irrelevant to why the statement is false.

I do not mean to single out this comment and these 2 replies or the people who wrote them: the only reason I am drawing attention to them is to illustrate something that happens regularly. And I definitely realize that it probably happens a lot less here on Less Wrong than it does in any other conversation on the internet that ranges over as many subjects relevant to the human condition as the conversation on Less Wrong does. And a significant reason for that is the hard work Eliezer and others put into the development of the software behind the site.

But I suspect that one of the best opportunities for creating a conversation that is even better than the conversation we are all in right now is to make the response by the community to false statements (the kind not open to interpretation) more salient and more consistent. Wikipedia's response to false statements gives me the impression of rising to the level of saliency and consistency I am talking about, but of course the software behind Wikipedia does not support conversation as well as the software behind Less Wrong does. (And more importantly but more subtly, Wikipedia is badly governed: much of the goodwill and reputation enjoyed by Wikipedia will probably be captured by the ideological and personal agendas of Wikipedia's insiders.)

Replies from: thomblake, Christian_Szegedy
comment by thomblake · 2009-08-28T19:41:53.119Z · LW(p) · GW(p)

I disagree that false statements are the sorts of things that should be downvoted. I'm all about this being a place where people can happily be false and get corrected, and that means the 'I want to see fewer comments like this' interpretation suggests that I should not downvote comments merely for containing falsehoods.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2009-08-28T19:59:14.134Z · LW(p) · GW(p)

"I'm all about this being a place where people can happily be false and get corrected."

I am, too, until the false statements start to drown out the relevant true information, so that the most rational readers decide to stop coming here, or until the volume of false statements overwhelms the community's ability to respond to them. But, yeah, I am with you.

And you make me realize that downvoting is probably not the right response to a false statement. I just think that there should be a response that is not as demanding of the reader's time and attention as reading the false statement, then reading the responses to the false statement. (Also, it would be nice to give a prospective responder a way to respond that is less demanding of their time than the only way currently available, namely, to compose a comment in reply to the false statement.)

comment by Christian_Szegedy · 2009-08-29T00:52:46.292Z · LW(p) · GW(p)

My original statement was mathematically true. Maybe Vladimir was sloppy reading it (his utility function satisfied only half of the requirements), but I would not downvote him for that.

comment by Alicorn · 2009-08-10T03:04:53.866Z · LW(p) · GW(p)

Another possible sane motivation for taking the $500 is a familiarity with how commonly lottery winners find their lives ruined by the sudden influx of cash.

Replies from: anonym, PhilGoetz, simpleton, CarlShulman, jajvirta
comment by anonym · 2009-08-10T03:53:13.523Z · LW(p) · GW(p)

It's possible but doesn't seem very likely, since given the choice between $1M outright or $500 outright, those same people would almost certainly take the $1M.

I think a more likely explanation is that they conceptualize the problem as having to choose between "probably getting $0" and "certainly getting $500".

Replies from: bbleeker, cousin_it
comment by Sabiola (bbleeker) · 2009-08-11T10:33:10.644Z · LW(p) · GW(p)

Of course that's it. $500 is a lot to pay for a lottery ticket, even one with as high a chance of winning as this. Change it to a certain $20 and a 15% chance of $40,000, and I bet (heh) that many more people will take the chance then.

comment by cousin_it · 2009-08-10T15:20:34.991Z · LW(p) · GW(p)

Alicorn can still rescue her justification by saying that winning a lot of money by luck can ruin your life :-)

comment by PhilGoetz · 2009-08-10T15:26:44.524Z · LW(p) · GW(p)

Well, I don't buy this in general, but I do know one person who would do this. I was talking with her recently about a lottery winner who won some huge sum, maybe $100M, and she said, "But it ruined his life." And I said, "Why? What happened?" And she said, "Oh, I don't know what happened. But I assume it ruined his life."

In her mind, I think she had already taken her high prior for B, assumed B, and converted the non-incident into further evidence for B. She did laugh at herself when she realized she'd done this, so there is hope. :)

comment by simpleton · 2009-08-10T03:34:10.834Z · LW(p) · GW(p)

Being aware of that tendency should make it possible to avoid ruination without forgoing the money entirely (e.g. by investing it wisely and not spending down the principal on any radical lifestyle changes, or even by giving all of it away to some worthy cause).

Replies from: Alicorn
comment by Alicorn · 2009-08-10T05:24:35.490Z · LW(p) · GW(p)

Unless there's akrasia involved. I can only imagine how tempting it would be to just outright buy a house if I were suddenly handed a million dollars, no matter how sternly I told myself not to just outright buy a house.

Replies from: simpleton, hirvinen, rwallace
comment by simpleton · 2009-08-10T06:02:23.082Z · LW(p) · GW(p)

And the best workaround you can come up with is to walk away from the money entirely? I don't buy it.

If you go through life acting as if your akrasia is so immutable that you have to walk away from huge wins like this, you're selling yourself short.

Even if you're right about yourself, you can just keep $1000 [edit: make that $3334, so as to have a higher expected value than a sure $500] and give the rest away before you have time to change your mind. Or put the whole million in an irrevocable trust. These aren't even the good ideas; they're just the trivial ones which are better than what you're suggesting.
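To see why $3334 rather than $1000 does the trick, here is a minimal sketch of the arithmetic (the numbers come from the comment above; variable names are my own):

```python
# Checking the edit above: if you keep $3334 of the $1M and give the
# rest away, the expected value of the 15% gamble still beats a sure $500.
p_win = 0.15        # probability the gamble pays off
kept = 3334         # dollars kept if it pays off
sure_thing = 500    # the guaranteed alternative

expected_value = p_win * kept          # roughly 500.10
assert expected_value > sure_thing     # the gamble (barely) wins
```

Keeping only $1000 would give an expected value of $150, which is why the edit bumps the figure up.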

comment by hirvinen · 2009-08-10T23:11:22.808Z · LW(p) · GW(p)

Ha! Buying a house and even more so moving is hard work, even with hired help. No way I'd do that right away.

comment by rwallace · 2009-08-10T15:09:24.964Z · LW(p) · GW(p)

Given a million-dollar windfall, buying a house at today's depressed prices would be one of the best investments you could make. (An additional benefit would be to make the money less liquid, thereby cutting down the temptation to spend it frivolously.)

Replies from: Alicorn, matt
comment by Alicorn · 2009-08-10T15:42:10.520Z · LW(p) · GW(p)

Perhaps, but owning a house would be a terrible time investment for me the way my life is working. I suppose I could hire a property manager and rent it out, though.

comment by matt · 2009-08-10T22:36:43.626Z · LW(p) · GW(p)

How sure are you that you know more than the market on this one? What information do you have that (still) rich property speculators don't have?

Replies from: MichaelVassar
comment by MichaelVassar · 2009-08-11T05:51:21.827Z · LW(p) · GW(p)

Housing markets aren't even theoretically efficient. Too big, diffuse, illiquid, etc.

Replies from: MichaelBishop, MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-08-15T21:36:49.075Z · LW(p) · GW(p)

ok, but are you arguing that Matt's skepticism is unwarranted? are you heavily invested in real estate?

comment by CarlShulman · 2009-08-10T04:10:32.629Z · LW(p) · GW(p)

So they should keep $10,000 if they win. Or just sell a stake in the winnings to a less risk-averse third party for $10,000, risk free.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-10T15:23:50.362Z · LW(p) · GW(p)

What $10,000?

EDIT: Never mind; I thought he said "the $10,000".

Replies from: Alicorn
comment by Alicorn · 2009-08-10T15:41:00.769Z · LW(p) · GW(p)

You could probably sell somebody a 15% chance to win a million bucks for 10k. It's worth fifteen times that to a risk-neutral agent.

Replies from: Technologos
comment by Technologos · 2009-08-11T05:41:43.522Z · LW(p) · GW(p)

I'm pretty confident that you could sell a true 15% chance to win a million bucks for a lot more than 10k... after all, investment banks make substantially greater gambles regularly.

I'd probably ask for 100k to start and go from there.

comment by jajvirta · 2009-08-22T11:04:56.726Z · LW(p) · GW(p)

It is my understanding that people are in general way too optimistic about how much winning $1M would increase their overall happiness. (Say if they are asked to imagine themselves winning the lottery.)

comment by timtyler · 2009-08-10T18:01:44.936Z · LW(p) · GW(p)

Re: Maybe they assume that someone might actually give them $500, but the $1M is a scam.

If you were offering this deal, wouldn't the $1M be based on a deceptive manipulation of the stated probabilities? Many participants can probably figure that one out.

comment by thomblake · 2009-08-10T22:14:30.150Z · LW(p) · GW(p)

kudos for linking to Virginia Postrel.

comment by MichaelVassar · 2009-08-11T05:34:35.926Z · LW(p) · GW(p)

Think like reality. If it's hard to imagine how something could happen, update your model.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-11T13:09:31.228Z · LW(p) · GW(p)

My model says that there is a big difference between formal education and deep understanding that can only be developed by extracurricular appreciation of the subject.

comment by Jonathan_Graehl · 2009-08-10T21:12:33.099Z · LW(p) · GW(p)

I was briefly tempted to answer 10 cents to the first problem.

"one study was done at University of Toledo" -- where the mean score on his test was 0.57 out of a possible 3 -- "and one study was done at Princeton," where the mean was 1.63

I'm just thrilled to think of how dumb our elite (top 10% and top 2% respectively?) are.

Almost a third of high scorers preferred a 1 percent chance of $5,000 to a sure $60.

Maybe those are just the high scorers who got 3/3 (avoiding the tempting surface error) almost by sheer chance.

Replies from: Eneasz, anonym
comment by Eneasz · 2009-08-10T21:44:30.889Z · LW(p) · GW(p)

I got 3/3 and I would take the chance. My rationale is that $60 is almost nothing. I can make that very quickly, and it won't buy much. I won't notice it in my monthly finances. $5000, on the other hand, is actually worth considering. That could change my month significantly (and impact the rest of my year as well). Would you rather have a 100% chance of getting a nickel, or a 0.01% chance of getting a small diamond?

Replies from: JGWeissman, orangecat, orthonormal, Alicorn
comment by JGWeissman · 2009-08-10T22:01:29.925Z · LW(p) · GW(p)

What if we transform the problem, so that you have the opportunity to pay $60 for a 1% chance to gain $5000?

Replies from: DonGeddis
comment by DonGeddis · 2009-08-10T22:29:16.332Z · LW(p) · GW(p)

Exactly! This is gambling, isn't it? A small expected loss, with a tiny chance of some huge gain.

If your utility for money really is so disproportionate to the actual dollar value, then you probably ought to take a trip to Las Vegas and lay down a few long-odds bets. You'll almost certainly lose your betting money (but you wouldn't "notice it in [your] monthly finances"), while there's some (small) chance that you get lucky and "change [your] month considerably".

It's not hypothetical! You can do this in the real world! Go to Vegas right now.

(If the plane flight is bothering you, I'm sure we could locate some similar online betting opportunities.)

comment by orangecat · 2009-08-11T23:56:46.787Z · LW(p) · GW(p)

That sort of attitude (among my opponents) is very helpful to my poker bankroll. You're giving up $60 for $50 of expected value. Even given your risk-seeking preference, surely you can find a better gamble. Putting it on a single number in roulette would be a better deal.

comment by orthonormal · 2009-08-11T02:25:25.651Z · LW(p) · GW(p)

By the way, welcome to Less Wrong (I notice you had some comments on Overcoming Bias as well); you should check out the welcome thread if you haven't already.

comment by Alicorn · 2009-08-10T22:05:43.120Z · LW(p) · GW(p)

I'd be almost guaranteed to lose the diamond before I could liquidate it if I won it. Should I factor that in or not? Diamonds are also notoriously difficult to liquidate unless you are in the relevant cartel...

comment by anonym · 2009-08-15T17:52:16.282Z · LW(p) · GW(p)

I would guess that most people who got the first problem correct also had "10 cents" as their initial thought, for about half a second or so before they had finished reading the question and before they had actually started deliberately thinking about the problem.

comment by Bo102010 · 2009-08-09T21:28:53.792Z · LW(p) · GW(p)

I remembered shortly after writing this that there was an example of outright lying in an attempt to get one to conform to a certain pattern of thought in Initiation Ceremony (hooray fictional evidence).

comment by teageegeepea · 2009-08-09T21:15:38.351Z · LW(p) · GW(p)

I had heard it a while back, but I decided to think through it myself and see what was going on. The first thought that occurred to me was that the bellhop's two dollars should be subtracted, just as each of the $1 bills given back to the guests was (to get 27 from 30). Then I imagined that the guests had not paid with $10 bills, but in 30 ones. The hotel has $30, then each of the three guests is given $1 and the bellhop takes $2. Here is where the money is:

  • Guest 1: $1
  • Guest 2: $1
  • Guest 3: $1
  • Bellhop: $2
  • Hotel: $25
  • Total: $30

It is basically following all the individual dollars to see where they are and make sure none are missing. Alternatively, you can imagine that the price really was $27 in the beginning, they all paid $9, and then the bellhop just stole $2 on the assumption that the owner wouldn't notice.
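The dollar-tracking above can be written as a small ledger check (a sketch only; the account names are mine):

```python
# Follow all 30 one-dollar bills and confirm none go missing.
ledger = {
    "guest_1": 1,
    "guest_2": 1,
    "guest_3": 1,
    "bellhop": 2,
    "hotel": 25,
}
assert sum(ledger.values()) == 30  # every dollar accounted for

# The riddle's bogus sum adds what the guests paid (27) to what the
# bellhop kept (2) -- but the bellhop's 2 is already inside the 27.
paid_by_guests = 3 * 9  # 27: the hotel's 25 plus the bellhop's 2
assert paid_by_guests == ledger["hotel"] + ledger["bellhop"]
```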

comment by Aurini · 2009-08-21T23:44:53.368Z · LW(p) · GW(p)

A while back on here somebody mentioned that they were at a College, and then later on somebody else mentioned MIT, so I drew the conclusion that they were at MIT. This is a power which must be used wisely...

comment by SforSingularity · 2009-08-11T22:16:30.441Z · LW(p) · GW(p)

Can anyone think of good ways to notice when outright deception is being used? How could a rationalist practice her skills at a magic show?

Most "rationalists" are quite smart people, so tricks that are designed by a trickster to fool the masses rarely work on us. For example, I doubt that many on this site would invest heavily in a pyramid scheme or get fooled by a used car salesman. This is because these tricks are targeted at the average idiot.

However, I have recently noticed that there is, for each of us, a stalker who stalks us and at each and every turn attempts to deceive us, and is just as smart as we are. That stalker/trickster is your own cognitive biases, and by far and away inflicts the greatest material losses on you. This is certainly true in my case.

I cannot even remember the last time I was fooled by someone else, but now that I am working on reducing my losses due to self deception, I realize that basically every day I engage in successful self-deception: I get into some emotional state, myopic, irrational algorithms take over, and I make up little excuses to myself for why they reached the right conclusion.

The real enemy is already inside your head.

Replies from: Annoyance, fburnaby
comment by Annoyance · 2009-08-13T15:36:08.310Z · LW(p) · GW(p)

Most "rationalists" are quite smart people, so tricks that are designed by a trickster to fool the masses rarely work on us.

Wrong. Tricksters rely on people making stupid assumptions and failing to check assertions. People with a lot of brainpower can do those things just as easily as people without.

Physicists asked to evaluate paranormal claims do very poorly, yet they are clearly very brainy. It takes more than just brains to be intelligent - you have to use the brains properly.

If I had a dollar for every brainy person who'd been gulled because they thought they were "too smart" to require being skeptical...

Replies from: SforSingularity
comment by SforSingularity · 2009-08-13T19:11:15.112Z · LW(p) · GW(p)

Physicists asked to evaluate paranormal claims do very poorly, yet they are clearly very brainy.

Reference, please. I defy the implied claim that "Physicists asked to evaluate paranormal claims do worse than the average person". I bet 6:1 against this.

If I had a dollar for every brainy person who'd been gulled because they thought they were "too smart" to require being skeptical...

and if I had a dollar for every average idiot who sleepwalked straight into an obvious scam I would make a lot more money.

Replies from: Aurini, Annoyance
comment by Aurini · 2009-08-22T00:08:09.951Z · LW(p) · GW(p)

Project Alpha by James Randi: http://en.wikipedia.org/wiki/Project_Alpha

Scientists tend to be trusting and naive, since neither nature nor their peers are prone to lying. That's why magicians make such great skeptics -- their profession is nothing but lying!

comment by Annoyance · 2009-09-01T13:48:48.928Z · LW(p) · GW(p)

If I had a dollar for every brainy person who'd been gulled because they thought they were "too smart" to require being skeptical...

and if I had a dollar for every average idiot who sleepwalked straight into an obvious scam I would make a lot more money.

Those sets are not disjoint.

Replies from: SforSingularity
comment by SforSingularity · 2009-09-01T14:27:29.964Z · LW(p) · GW(p)

I define "average idiot" to be disjoint from "brainy person". Does that sound reasonable?

Of course, I am sure that there are some very clever people who sleepwalked straight into a really obvious scam without even questioning it, but I am making the empirical claim that this doesn't happen as much as it does for people of below average intelligence.

comment by fburnaby · 2009-08-12T18:55:43.524Z · LW(p) · GW(p)

Get him out, please.

comment by Larks · 2009-08-09T20:36:53.706Z · LW(p) · GW(p)

Can anyone think of good ways to notice when outright deception is being used?

I suspect one of the best ways may be to try to re-create their reasoning from the beginning, so you engage the logical-inference part of your brain in trying to (re)reason, rather than the 'listening and believing' part. We now know that we first believe new ideas and only then consciously reject them, rather than the other way around.

Where money is concerned, I suppose you could check whether income matches expenditures.

other examples of flagrant misdirection

I can think of many examples of cases where candidates evade the most basic of economics, but as Politics is the Mind-Killer, I think it's probably best not to bring them up.

Replies from: Larks
comment by Larks · 2009-08-19T22:11:55.062Z · LW(p) · GW(p)

other examples of flagrant misdirection

The National Lottery!

comment by Mike Bishop (MichaelBishop) · 2009-08-15T21:38:42.174Z · LW(p) · GW(p)

Ask people to write/draw the problem and I bet you get far better responses.

comment by PhilGoetz · 2009-08-10T02:36:49.049Z · LW(p) · GW(p)

Restating the problem in simpler terms, without narrative, would help with this example.

Replies from: Bo102010
comment by Bo102010 · 2009-08-10T02:53:21.190Z · LW(p) · GW(p)

Which problem do you mean? The original riddle?

Actor A charges actors B1, B2, and B3 $10 each, for a total charge of $30. Next, A changes the total charge to $25. Next, Actor C gives $1 of the $5 difference to each of the Bs, and keeps $2. After having paid $10 and returned $1, each of the 3 Bs paid $9. $9 times 3 is $27, plus the $2 kept by C is $29. What happened to the extra $1 so that the sum is $30?

The flagrant lying occurs from "plus the $2 kept by C" to the end.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-10T18:55:52.005Z · LW(p) · GW(p)

You're still telling it as a narrative. If you wrote it out as an Excel spreadsheet, I think the difficulty would vanish.

Replies from: Richard_Kennaway, Bo102010
comment by Richard_Kennaway · 2009-08-10T19:28:38.797Z · LW(p) · GW(p)

Spreadsheet?? Just look at it this way: where is the money? The guests have paid $27, of which $25 is with the innkeeper and $2 is with the bellhop. Problem gone.

comment by Bo102010 · 2009-08-11T00:05:51.737Z · LW(p) · GW(p)

The aim of my post is to point out that there is no difficulty until the riddler leads you into thinking there is. Nonetheless, you could do:

1) A: $0, B1: $10, B2: $10, B3: $10, C: $0

2) A: $30, B1: $0, B2: $0, B3: $0, C: $0

3) A: $25, B1: $0, B2: $0, B3: $0, C: $5

4) A: $25, B1: $1, B2: $1, B3: $1, C: $2

And then misdirect by saying "But," then stating,

[B1(1) - B1(4)] + [B2(1) - B2(4)] + [B3(1) - B3(4)] + C = 29

and then asking "Where'd the missing dollar go?"
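The four ledger states above can be sketched in code to show both that money is conserved at every step and that the riddle's "sum to 30" check is meaningless (a minimal sketch; the dictionary layout is my own):

```python
# A = innkeeper, B1-B3 = guests, C = bellhop.
states = [
    {"A": 0,  "B1": 10, "B2": 10, "B3": 10, "C": 0},  # 1) before paying
    {"A": 30, "B1": 0,  "B2": 0,  "B3": 0,  "C": 0},  # 2) after paying
    {"A": 25, "B1": 0,  "B2": 0,  "B3": 0,  "C": 5},  # 3) discount applied
    {"A": 25, "B1": 1,  "B2": 1,  "B3": 1,  "C": 2},  # 4) refunds delivered
]

# Money is conserved in every state:
assert all(sum(s.values()) == 30 for s in states)

# The riddle's misleading sum: net paid by the guests plus C's take.
net_paid = sum(states[0][b] - states[3][b] for b in ("B1", "B2", "B3"))
assert net_paid == 27
assert net_paid + states[3]["C"] == 29  # meaningless: C's $2 is part of the $27
```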