Belief in Belief vs. Internalization

post by Desrtopa · 2010-11-29T03:12:10.614Z · LW · GW · Legacy · 59 comments


Related to Belief In Belief

Suppose that a neighbor comes to you one day and tells you “There’s a dragon in my garage!” Since all of us have been through this at some point or other, you may be inclined to save time and ask “Is the dragon by any chance invisible, inaudible, intangible, and does it convert oxygen to carbon dioxide when it breathes?”

The neighbor, however, is a scientific-minded fellow and responds “Yes, yes, no, and maybe, I haven’t checked. This is an idea with testable consequences. If I try to touch the dragon it gets out of the way, but it leaves footprints when I sprinkle flour on the garage floor, and whenever it gets hungry, it comes out of my garage and eats a nearby animal. It always chooses something weighing over thirty pounds, and you can see the animals get snatched up and mangled to a pulp in its invisible jaws. It’s actually pretty horrible. You may have noticed that there have been fewer dogs around the neighborhood lately.”

This triggers a tremendous number of your skepticism filters, and so the only thing you can think of to say is “I think I’m going to need to see this.”

“Of course,” replies the neighbor, and he sets off across the street, opens the garage door, and is promptly eaten by the invisible dragon.

Tragic though it is, his death provides a useful lesson. He clearly believed that there was an invisible dragon in his garage, and he was willing to stick his neck out and make predictions based on it. However, he hadn’t internalized the idea that there was a dragon in his garage; otherwise he would have stayed the hell away to avoid being eaten. Humans are generally weak at internalizing beliefs whose immediate consequences we don’t have to come face to face with on a regular basis.

You might believe, for example, that starvation is the single greatest burden on humanity, and that giving money to charities that aid starving children in underdeveloped countries has higher utility than any other use of your surplus funds. You might even be able to make predictions based on that belief. But if you see a shirt you really like that’s on sale, you’re almost certainly not going to think “How many people whom I could have fed will go hungry if I buy this?” It’s not a weakness of willpower that causes you to choose the shirt over the starving children; they simply don’t impinge on your consciousness at that level.

When you consider if you really, properly hold a belief, it’s worth asking not only how it controls your anticipations, but whether your actions make sense in light of a gut-level acceptance of its truth. Do you merely expect to see footprints in flour, or do you move out of the house to avoid being eaten?

59 comments


comment by AlanCrowe · 2010-11-29T12:25:50.497Z · LW(p) · GW(p)

This reminds me of the classic industrial accident involving a large, pressurised storage tank. There is a man-sized door to allow access for maintenance, with a pressure gauge beside it. The maintenance man is supposed to wait for the pressure to fall to zero before he undoes the heavy steel latches. It is a big tank and he gets bored with waiting for the pressure to vent. The gauge says one pound per square inch. One pound doesn't sound like much, so the man undoes the latches. But that one pound acts on every square inch of the door, so the total force is several hundred times larger than he expected. The heavy door flies open irresistibly and kills the man.
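
To make the arithmetic concrete, here is a minimal sketch in Python (the 24-inch circular manway is an assumed, illustrative size, not a figure from any actual accident report):

```python
import math

# Back-of-the-envelope: total force on a "man-sized" hatch at 1 psi.
# The 24-inch manway diameter is an assumption for illustration.
gauge_psi = 1.0                # what the gauge read
hatch_diameter_in = 24.0       # assumed manway diameter

area_sq_in = math.pi * (hatch_diameter_in / 2) ** 2  # ~452 square inches
force_lbf = gauge_psi * area_sq_in                   # ~452 pounds of force

print(f"Hatch area: {area_sq_in:.0f} sq in")
print(f"Total force at {gauge_psi} psi: {force_lbf:.0f} lbf")
# "One pound" on the gauge is really several hundred pounds on the latches.
```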

I'm not seeing how the parable helps one be less wrong in real life. In the parable the victim has seen a dog taken by the dragon. If the maintenance man had seen an apprentice crushed in an earlier, similar accident, the experience would have scarred him mentally and he would always be wary of pressure vessels. I worry that the parable is cruder than the problems we face in real life.

I don't know more than I've already said about pressure vessel accidents. Is there an underlying problem of crying wolf; too many warning messages obscure the ones that are really matters of life and death? Is it a matter of incentives; the manager gets a bonus if he encourages the maintenance team to work quickly, but doesn't go to jail when cutting corners leads to a fatal accident? Is it a matter of education; the maintenance man just didn't get pressure? Is it a matter of labeling; why not label the gauge by the door with the force per door area? Is it a matter of class; the safety officer is middle class, the maintenance man is working class, and the working class distrust the middle class and don't much believe what they say?

Replies from: shokwave, more_wrong, Kaj_Sotala, taw
comment by shokwave · 2010-11-29T13:22:00.267Z · LW(p) · GW(p)

Is there an underlying problem

I don't know either, but I do know that an internalised, correct understanding of pressure and one of its measures, 'pounds per square inch', would have been a sufficient condition to save his life. The parable of the pressure vessel seems to be a case of an incorrect belief (one pound isn't much), whereas the parable of the invisible dragon seems to be a case of a correct belief (invisible dragon in my garage) that hasn't been internalised, and so has not produced the beliefs it ought to (invisible DRAGON IN MY GARAGE!).

Replies from: SilasBarta, Desrtopa
comment by SilasBarta · 2010-11-29T21:49:18.751Z · LW(p) · GW(p)

I don't know either, but I do know that an internalised, correct understanding of pressure and one of its measures, 'pounds per square inch', would have been a sufficient condition to save his life.

I think this is a good opportunity to point out that many people haven't internalized what it means to say "the atmosphere's pressure is about 15 psi". It implies that if you were to lie face down and someone like me stood on your back, eliciting excruciating "GET OFF ME!" pain on your end, they would have increased the pressure on your back by maybe only 30% of what has been on it your entire life, even though it may seem like much more than that.
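
As a rough sanity check of that figure (the 150 lb body weight and 35 square inches of foot contact are guesses chosen for illustration, not numbers from the comment):

```python
# Rough check of the "maybe 30%" figure.
atmospheric_psi = 14.7     # sea-level atmospheric pressure
person_weight_lbf = 150    # assumed weight of the person standing on you
foot_contact_sq_in = 35    # assumed total contact area of both feet

added_psi = person_weight_lbf / foot_contact_sq_in  # ~4.3 psi under the feet
fraction_added = added_psi / atmospheric_psi        # ~0.29

print(f"Added pressure under the feet: {added_psi:.1f} psi")
print(f"Relative to atmospheric pressure: {fraction_added:.0%}")
```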

Indeed, when I visited the Boston LW meetup, a few people there initially refused to believe the implications of 15 psi atmospheric pressure, apparently never having connected that figure to everyday experience.

Replies from: gwern
comment by gwern · 2010-11-29T23:29:11.708Z · LW(p) · GW(p)

Fortunately, there are easy experiments to impress people with. As a kid, my favorite was to lay a ruler on a table so that half of it stuck out perpendicularly, put a few layers of newspaper over the other half, and then quickly hit the exposed half downwards - only to fail to knock it off the table, because atmospheric pressure helped hold it down.

(At least, I think this is how it went. It was a long time ago. I'm sure there are other nifty experiments.)
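
For a sense of why the trick works: struck quickly, air can't rush in under the paper fast enough, so something approaching the full atmospheric load pins it down. A rough upper bound (the sheet dimensions below are assumed; a broadsheet page is roughly this size):

```python
# Upper bound on the atmospheric force pinning down a sheet of newspaper.
atmospheric_pa = 101_325            # sea-level pressure in pascals (N/m^2)
sheet_w_m, sheet_h_m = 0.60, 0.75   # assumed broadsheet dimensions

area_m2 = sheet_w_m * sheet_h_m     # 0.45 m^2
max_force_n = atmospheric_pa * area_m2

print(f"Holding force, upper bound: {max_force_n:,.0f} N "
      f"(roughly {max_force_n / 9.81:,.0f} kgf)")
# The transient pressure difference is smaller than a full atmosphere,
# but even a small fraction of ~45,000 N beats a quick hand strike.
```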

comment by Desrtopa · 2010-11-29T18:54:24.610Z · LW(p) · GW(p)

I suppose that in doing it in the form of a parable (or this parable, anyway), I erred on the side of being memorable over being clear, but that was what I had in mind when I wrote it. A dragon in one's garage is something it's intuitively obvious you don't want to go near, once you internalize the fact that it's really there. That's the kind of mistake that we've had millions of years of evolution to prepare us against making. Opening up the garage door to investigate is the sort of behavior that only makes sense when you haven't internalized the idea that there's really something in there that's liable to eat you.

Realistically, the man would probably be terrified if he had already seen it eat other animals, but I threw that in to make the parable flow better. The invisibility and inaudibility probably wouldn't be sufficient in real life given that, but they're stand-in qualities for the sort of remove that might prevent one from internalizing a belief.

Replies from: shokwave
comment by shokwave · 2010-11-30T05:27:00.612Z · LW(p) · GW(p)

I upvoted because I immediately understood what you meant; I am humble enough to believe that is a fact about the post and not about my skill at understanding.

comment by more_wrong · 2014-05-31T01:36:58.218Z · LW(p) · GW(p)

Is there an underlying problem of crying wolf; too many warning messages obscure the ones that are really matters of life and death?

This is certainly an enormous problem for interface design in general for many systems where there is some element of danger. The classic "needle tipping into the red" is an old and brilliant solution for some kinds of gauges - an analogue meter where you can see the reading tipping toward a brightly marked "danger zone", usually with a 'safe' zone and an intermediate zone also marked, has surely prevented many accidents. If the pressure gauge on the door had such a meter where green meant "safe to open hatches" and red meant "terribly dangerous", that might have been a better design than just raw numbers.

I haven't worked with pressure doors but I have worked with large vacuum systems, cryogenic systems, labs with lasers that could blind you or x-ray machines that can be dangerously intense, and so on. I can attest that the designers of physics lab equipment do indeed put a good deal of thought and effort into various displays that indicate when the equipment is in a dangerous state.

However, when there are /many/ things that can go dangerously wrong, it becomes very difficult to avoid cluttering the sensorium of the operator with various warnings. The classic examples are the control panels of vehicles like airplanes or spaceships; you can see a beautiful illustration of the 'indicator clutter problem' in the movie "Airplane!".

comment by Kaj_Sotala · 2010-11-29T16:33:06.811Z · LW(p) · GW(p)

I'm not seeing how the parable helps one be less wrong in real life. In the parable the victim has seen a dog taken by the dragon.

I must admit I up-voted the post mostly because I thought the parable was funny.

comment by taw · 2010-12-05T15:34:18.299Z · LW(p) · GW(p)

The gauge says one pound per square inch. One pound doesn't sound like much so the man undoes the latches. [...] I'm not seeing how the parable helps one be less wrong in real life.

Easy - not using the metric system will kill you.

Replies from: shokwave
comment by shokwave · 2010-12-05T15:50:15.624Z · LW(p) · GW(p)

Seventy grams per square centimeter sounds like even less, though.
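
For what it's worth, the conversion checks out; a quick verification using standard unit definitions:

```python
# Verify: 1 psi expressed as grams-force per square centimeter.
PSI_TO_PA = 6894.757   # pascals per psi
G = 9.80665            # standard gravity, m/s^2

kg_per_m2 = PSI_TO_PA / G              # ~703 kg/m^2
g_per_cm2 = kg_per_m2 * 1000 / 10_000  # ~70.3 gf/cm^2

print(f"1 psi is about {g_per_cm2:.1f} gf/cm^2")
```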

comment by Nornagest · 2010-11-29T04:15:35.217Z · LW(p) · GW(p)

I'm not entirely sure how belief in belief fits in here. The dragon's unlucky host doesn't merely believe in belief: as you go out of your way to point out, he has excellent evidence of the creature's existence and can make predictions based on it. His fatal error is of a different category: rather than adopting a belief for signaling reasons and constructing a model which excuses him from providing empirical evidence for it, he's constructed a working empirical model and failed to note some of its likely consequences.

An imperfect model of an empirical reality can show fatal gaps when applied to the real world. But that's not the error of a tithing churchgoer whose concern for his immortal soul disappears in the face of a tempting Tag Heuer watch; it's the error of a novice pilot who fails to pull out of a tailspin, or of a novice chemist who mistakenly attempts to douse a sodium fire with a water-based foam. A level-one error, in other words, whereas belief in belief would be level zero or off the scale entirely.

Replies from: Desrtopa
comment by Desrtopa · 2010-11-29T04:24:47.243Z · LW(p) · GW(p)

The post was inspired by a comment which I felt confused lack of internalization with belief in belief. On reflection, I probably didn't establish the connection sufficiently.

Replies from: Nornagest
comment by Nornagest · 2010-11-29T04:39:54.537Z · LW(p) · GW(p)

Yeah, that clarifies some things. Reading over the OP, I note with some embarrassment that you never used the phrase "belief in belief" in the body text -- but I also note that Mass_Driver didn't, either.

"Understanding Your Understanding" does a pretty good job of illustrating the levels of belief, but now I'm starting to think that it might be a good idea to look at the same scale from the perspective of expected error types, not just the warning signs the article already includes.

comment by Eugine_Nier · 2010-11-30T01:16:28.214Z · LW(p) · GW(p)

See also Reason as Memetic Immune Disorder, which discusses failure to internalize as a way of protecting us from the consequences of false belief.

comment by Spurlock · 2010-11-29T19:49:06.823Z · LW(p) · GW(p)

In the LW entry on RationalWiki they make fun of Eliezer for the Roko incident.

I always felt like this was unfair, because it amounts to attacking him for actually believing the things he talks about. That is, it's okay to talk about an immensely powerful AI happening in the future; it's just not okay to act on that belief.

If you don't disagree with someone's beliefs, don't chastise that person for acting consistently with them.

Replies from: shokwave, Desrtopa, David_Gerard, wedrifid
comment by shokwave · 2010-11-30T05:50:12.770Z · LW(p) · GW(p)

This is because of spill-over from 'religious tolerance'. Most people feel uncomfortable mocking a ridiculous belief; they have the "everyone is entitled to their own opinion" meme in mind. This makes people disagree with others' beliefs much less than they ought to.

People are much more comfortable mocking ridiculous actions (because everyone is not entitled to their own facts) - which is why evangelists are scorned where the average religious person wouldn't be, despite evangelists acting consistently and the average religious person acting inconsistently on beliefs that the mocker doesn't disagree with.

comment by Desrtopa · 2010-11-29T20:48:20.536Z · LW(p) · GW(p)

I think the people who wrote the entry probably do disagree with Eliezer's beliefs in this regard. They seem to be mocking his beliefs, not just the actions he takes based on them.

That's not to say that there's any shortage of people who do take issue with, or even outright mock, people for acting on beliefs they do not disagree with.

Replies from: Spurlock
comment by Spurlock · 2010-11-29T21:17:26.513Z · LW(p) · GW(p)

Perhaps I should have said that I detect both: mock the belief, but additionally mock that it's acted on.

comment by David_Gerard · 2010-11-30T09:02:13.611Z · LW(p) · GW(p)

I suspect (*) the principle is: sincere stupidity is no less stupid than insincere stupidity.

It is important here to note that it's a silly wiki of no importance that doesn't pretend to be of any importance. (Many readers aren't happy with its obnoxiousness.) It just happens to be one of the few places on the Internet with a detailed article on LessWrong.

If this is considered a problem, the solution would be a publicity push for LessWrong to get it to sufficient third-party notability for a Wikipedia article or something. The question then is whether that would be good for the mission: "refining the art of human rationality." I'd suggest seeing how the influx of HP:MoR readers affects things. Small September before large one.

(*) "I suspect" as I'm not going to be so foolish as to claim the powers of a spokesman.

comment by wedrifid · 2010-11-30T01:06:27.115Z · LW(p) · GW(p)

I always felt like this was unfair, because it amounts to attacking him for actually believing the things he talks about. That is, it's okay to talk about an immensely powerful AI happening in the future; it's just not okay to act on that belief.

"That isn't" the belief being mocked.

comment by DSimon · 2010-11-29T20:08:04.741Z · LW(p) · GW(p)

Why should the guy have anticipated that the dragon would eat him? He's been poking around in the garage doing various experiments, and during all that time the dragon has merely stomped around and avoided him, showing no interest in eating him at all.

Also: why does he call it a dragon, and not just an "invisible creature"? Dragon is a pretty narrow category.

Replies from: Desrtopa
comment by Desrtopa · 2010-11-29T20:36:18.979Z · LW(p) · GW(p)

The dragon didn't eat him before because it wasn't hungry. If there's a tiger in your garage, it probably won't attack you as soon as it sees you, but the longer you spend in its vicinity, the greater your chances of being mauled.

The footprints were dragon-shaped, and it preferentially targeted the types of dogs that dragons most like to eat.

Replies from: DSimon
comment by DSimon · 2010-11-29T20:59:44.944Z · LW(p) · GW(p)

[...]it preferentially targeted the types of dogs that dragons most like to eat.

But that doesn't exclude other creatures. For example, the Giant Chupacabra is known to have similar preferences for the kinds of dogs it eats (when it can't find goats, its preferred meal).

comment by anon895 · 2010-11-29T03:44:50.160Z · LW(p) · GW(p)

Possibly related: Taking Ideas Seriously.

comment by kybernetikos · 2010-11-29T14:41:30.788Z · LW(p) · GW(p)

I've heard this contrasted as 'knowledge', where you intellectually assent to something and can make predictions from it, and 'belief', where you order your life according to that knowledge, but this distinction is certainly not made in normal speech.

A common illustration of this distinction (often told by preachers) is that Blondin the tightrope walker asked the crowd whether they believed he could safely carry someone across Niagara Falls on a tightrope, and almost the whole crowd shouted 'yes'. Then he asked for a volunteer to become the first man ever so carried, at which point the crowd shut up. In the end the only person he could find to accept was his manager.

Replies from: CronoDAS, wedrifid
comment by CronoDAS · 2010-11-29T16:19:05.511Z · LW(p) · GW(p)

Would they be safe? Probably. Would they enjoy the experience? Probably not...

Replies from: kybernetikos
comment by kybernetikos · 2010-11-29T16:59:49.670Z · LW(p) · GW(p)

Yeah, that is a problem with the illustration. However, I don't think it's completely devoid of use.

Taking a risk based on some knowledge is a very strong sign of having internalised that knowledge.

Replies from: steven0461
comment by steven0461 · 2010-11-29T19:26:14.843Z · LW(p) · GW(p)

Risking one's life to make a point requires not just belief but an extreme degree of belief, which the crowd was not asked to express.

comment by wedrifid · 2010-11-29T15:09:18.219Z · LW(p) · GW(p)

A common illustration of this distinction (often told by preachers) is that Blondin the tightrope walker asked the crowd whether they believed he could safely carry someone across Niagara Falls on a tightrope, and almost the whole crowd shouted 'yes'. Then he asked for a volunteer to become the first man ever so carried, at which point the crowd shut up. In the end the only person he could find to accept was his manager.

Which is, of course, followed by handing out buckets of stones and pointing out suitable targets of righteous retribution. Adulterers, people who eat beetles, anyone who missed the sermon...

Replies from: ndm25, kybernetikos
comment by ndm25 · 2010-11-29T19:14:50.535Z · LW(p) · GW(p)

Is that a knee-jerk insult pointed at religion? If so, you're the AI Professor who takes cheap shots at Republicans.

If not, apologies, I must have missed the point.

Replies from: wedrifid
comment by wedrifid · 2010-11-29T23:58:00.451Z · LW(p) · GW(p)

Is that a knee-jerk insult pointed at religion? If so, you're the AI Professor who takes cheap shots at Republicans.

Not remotely, and that labeling strikes me as decidedly out of place and mildly objectionable.

Replies from: Relsqui
comment by Relsqui · 2010-11-30T11:27:45.349Z · LW(p) · GW(p)

I've seen you delete comments that received objecting responses a few times now. Why do you do that?

comment by kybernetikos · 2010-11-29T15:13:32.252Z · LW(p) · GW(p)

Which is, of course, followed by handing out buckets of stones and pointing out suitable targets of righteous retribution

I sense much anger in you....

people who eat beetles

I've not heard that one. Why are they regarded as suitable targets for religious wrath?

comment by MichaelVassar · 2010-12-01T22:36:51.149Z · LW(p) · GW(p)

Back when I was around 17, I remember being in this situation. I had a not-very-close friend (I don't remember his name) who claimed to have been in a cave where he encountered a malign ghost or demon. I was uncomfortable explaining that while I thought his claim was almost certainly false, I wasn't going to go and check in any event: whether the claim was true or false, the best thing to do would be not to go to the cave and check.

comment by JamesPfeiffer · 2010-11-29T23:44:02.165Z · LW(p) · GW(p)

Along with the other physics-related examples, Richard Dawkins' pendulum video seems relevant here: http://www.youtube.com/watch?v=Bsk5yPFm5NM

comment by Vladimir_Nesov · 2010-11-29T14:05:19.290Z · LW(p) · GW(p)

The next step in improving the sanity of your decision algorithm. Observations control what you should expect, and expectations control how you should act.

comment by shokwave · 2010-11-29T10:22:24.921Z · LW(p) · GW(p)

Somewhat related: at least one person has internalized their belief about the Singularity in a way that appears at least as weird as our hypothetical neighbor boarding up his garage and moving house.

I wanted to add that because it is important to note that the answer to

if you really, properly hold a belief ... [do] your actions make sense in light of a gut-level acceptance of its truth[?].

is automatically going to be "of course they do!", and that link has a situation that ought to challenge you on this topic.

Replies from: Desrtopa, Risto_Saarelma
comment by Desrtopa · 2010-11-29T13:34:27.443Z · LW(p) · GW(p)

I'm quite prepared to admit that there are many cases where my actions do not make sense in light of a gut level acceptance of my beliefs. I may not think donating money to a charity to support starving children is the highest utility use of my money, but even in spite of my own experiences with starvation, starving children are very much an invisible dragon to me.

There's a big gap between being a strong enough rationalist to acknowledge that one's actions don't make sense, and being a strong enough rationalist to reverse the situation, but at least the understanding can't hurt in making a start.

Replies from: shokwave
comment by shokwave · 2010-11-30T05:32:09.712Z · LW(p) · GW(p)

I'm quite prepared to admit that there are many cases where my actions do not make sense in light of a gut level acceptance of my beliefs.

Excellent, you are ahead of me. My initial reaction to the post was to run through a list of my prominent beliefs to see if they all made sense. They all did, and I only barely caught myself in time to think "What a coincidence, every single one?". Then the "Singularity as retirement plan" quote occurred to me.

starving children are very much an invisible dragon to me.

I support this 'x is my invisible dragon' turn of phrase!

Replies from: Desrtopa, None, Relsqui
comment by Desrtopa · 2010-11-30T05:45:59.066Z · LW(p) · GW(p)

I support this 'x is my invisible dragon' turn of phrase!

I thought it would be a good figure of speech too, but I'm afraid if I used it outside the context of this thread, people would think of Sagan's dragon, not mine. This parable would have to become a lot more famous for people to start to get it.

Replies from: shokwave
comment by shokwave · 2010-11-30T08:11:38.967Z · LW(p) · GW(p)

This parable would have to become a lot more famous for people to start to get it.

This is the process I am trying to kickstart by throwing my support behind the phrase.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-11-30T20:46:37.542Z · LW(p) · GW(p)

The two concepts could serve as a rhetorical crowbar:

Is this the kind of invisible dragon that isn't really there but you're in denial? ...or the kind that IS really there but you're in denial?

This in turn makes me think that there are some kinds of evidence that affect our behavior, and other kinds that affect our beliefs, with only partial overlap. (E.g. you know the dragon is there, but you're not evolved to be as afraid as you should be, because you can't see, hear, or smell it.)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2010-12-01T05:12:15.775Z · LW(p) · GW(p)

This in turn makes me think that there are some kinds of evidence that affect our behavior, and other kinds that affect our beliefs, with only partial overlap. (E.g. you know the dragon is there, but you're not evolved to be as afraid as you should be, because you can't see, hear, or smell it.)

The standard LW terminology for this is near and far modes of thought.

comment by [deleted] · 2010-11-30T14:04:23.314Z · LW(p) · GW(p)

My invisible dragons:

Preventative medicine (sanitizing things, flu shots, drinking adequate water, etc.). Risk prevention in general (backing up files, locking my possessions, not going out after dark). I probably don't do enough of that stuff compared to how bad I'd feel if the risks actually occurred. Proper diet probably also belongs among the things I would do differently if I successfully internalized what I believe in principle.

comment by Relsqui · 2010-11-30T11:33:39.411Z · LW(p) · GW(p)

Upvoted for

"What a coincidence, every single one?"

but while I'm here,

I support this 'x is my invisible dragon' turn of phrase!

me too.

comment by Risto_Saarelma · 2010-11-30T09:07:15.825Z · LW(p) · GW(p)

Somewhat related: at least one person has internalized their belief about the Singularity in a way that appears at least as weird as our hypothetical neighbor boarding up his garage and moving house.

I'm not so sure. Retirement plans are far, boarding up the garage is near.

Replies from: shokwave
comment by shokwave · 2010-11-30T09:49:12.108Z · LW(p) · GW(p)

Yes, but "cancelling your 401k, not getting an IRA, minimum legal contributions to your pension, etc" seem like near-thinking reactions to the concept of the Singularity.

Replies from: wedrifid
comment by wedrifid · 2010-11-30T10:00:23.931Z · LW(p) · GW(p)

Yes, and slightly more specifically, an expected Singularity within one's own lifetime. That's not unusual among those who expect a Singularity at all, but it's not universal. People who expect a Singularity in, say, 200 years, and who also think that systems such as the 401k will maintain relevance throughout their lifetime, may still go with the 401k.

Replies from: shokwave
comment by shokwave · 2010-11-30T10:19:56.109Z · LW(p) · GW(p)

My apologies, I should have said "near-thinking reactions to his personal beliefs about the Singularity". The quote in that link is clearly from somebody who believes the Singularity will happen with high probability before he retires, so making it sound like it's true of any understanding of the Singularity is quite false.

Replies from: wedrifid
comment by wedrifid · 2010-11-30T10:29:55.696Z · LW(p) · GW(p)

My apologies, I should have said "near-thinking reactions to his personal beliefs about the Singularity". The quote in that link is clearly from somebody who believes the Singularity will happen with high probability before he retires

Thanks, I was looking at just the more local context so responded literally.

comment by Eugine_Nier · 2010-12-02T02:26:34.661Z · LW(p) · GW(p)

One way to think about this is that in both failure of internalization and belief in belief, the believer's brain is in the same state. The only difference between the two cases is whether the belief in question corresponds to the territory.

In particular, there is no way to tell these two cases apart by introspection.

comment by kybernetikos · 2010-11-29T20:54:47.162Z · LW(p) · GW(p)

Imagine a raffle where the winner is chosen by some quantum process. Presumably under the many worlds interpretation you can see it as a way of shifting money from lots of your potential selves to just one of them. If you have a goal you are absolutely determined to achieve and a large sum of money would help towards it, then it might make a lot of sense to take part, since the self that wins will also have that desire, and could be trusted to make good use of that money.

Now, I wonder if anyone would take part in such a raffle if all the entrants who didn't win were killed on the spot. That would mean that everyone would win in some universe, and cease to exist in the other universes where they entered. Could that be a kind of intellectual assent vs belief test for Many Worlds?

Replies from: wedrifid, ata
comment by wedrifid · 2010-11-30T00:58:43.601Z · LW(p) · GW(p)

Now, I wonder if anyone would take part in such a raffle if all the entrants who didn't win were killed on the spot. That would mean that everyone would win in some universe, and cease to exist in the other universes where they entered. Could that be a kind of intellectual assent vs belief test for Many Worlds?

No. No. No!

Quantum Sour-Grapes (i.e. what you described) could be the result of a technically coherent value system, but not a sane one. Unless there is some kind of physical or emotional torture involved, dying doesn't make things better, regardless of QM!

comment by ata · 2010-11-29T21:06:15.928Z · LW(p) · GW(p)

Could that be a kind of intellectual assent vs belief test for Many Worlds?

No, because it assumes you're indifferent to any effects you have on worlds that you don't personally get to experience.

Replies from: kybernetikos
comment by kybernetikos · 2010-11-29T21:12:50.764Z · LW(p) · GW(p)

I suppose the goal you were going to spend the money on would have to be of sufficient utility, if achieved, to offset that, in order to make the scenario work. Maybe saving the world, or creating lots of happy simulations of yourself, or finding a way to communicate between them.

Replies from: Bongo, Eugine_Nier
comment by Bongo · 2010-11-30T01:15:01.794Z · LW(p) · GW(p)

In that case it sounds like an obviously legit test. Someone disagree?

comment by Eugine_Nier · 2010-11-30T01:07:18.005Z · LW(p) · GW(p)

In that case what does 'Quantum' and/or many worlds have to do with this?