Pascal's wager

post by duckduckMOO · 2013-04-22T04:41:19.766Z · LW · GW · Legacy · 30 comments



I started this as a comment on "Being half wrong about Pascal's wager is even worse", but it's really long, so I'm posting it in discussion instead.

 

Also, I illustrate here using negative examples (hell and its equivalents) for the sake of clarity, and I'm a little worried about inciting some paranoia, so I'll remind you here that every negative example has an equal and opposite positive partner. For example, Pascal's wager has an opposite where accepting sends you to hell, and another where refusing sends you to heaven. I haven't mentioned any positive equivalents or opposites below. Also, all of these possibilities are effectively probability 0, so don't be worrying.

 

"For so long as I can remember, I have rejected Pascal's Wager in all its forms on sheerly practical grounds: anyone who tries to plan out their life by chasing a 1 in 10,000 chance of a huge pay-off is almost certainly doomed in practice.  This kind of clever reasoning never pays off in real life..."

 

Pascal's wager shouldn't be in the reference class of "real life". It is a unique situation that would never crop up in real life as the quote uses the term. Even in a world where Pascal's wager is correct, you would still see people who plan out their lives around a 1 in 10,000 chance of a huge pay-off fail 9,999 times out of 10,000. Also, this reasoning doesn't work for actually excluding Pascal's wager: if Pascal's wager starts off excluded from the category "real life", you've already made up your mind, so this cannot quite be the actual order of events.

 

In this case, 9,999 times out of 10,000 you waste your Christianity, and 1 time out of 10,000 you avoid going to hell for eternity. Eternal hell is, as a vast understatement, much more than 10,000 times as bad as worshipping God, even counting the sanity it costs to force a change in belief, the damage it does to your psyche to live as a victim of self-inflicted Stockholm syndrome, and any other non-obvious cost. With these premises, choosing to believe in God produces infinitely better consequences on average.
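A minimal sketch of that expected-value comparison, with hypothetical numbers standing in for the cost of worship and the disutility of eternal hell (both invented purely for illustration):

```python
# Toy expected-utility comparison for the wager as stated above.
# All numbers are hypothetical placeholders, not claims about real probabilities.
p_hell_if_refuse = 1 / 10_000        # the 1-in-10,000 chance discussed above
cost_of_worship = -1.0               # lifetime cost of forced belief (arbitrary units)
disutility_of_hell = -1e12           # stand-in for "much worse than 10,000x that cost"

ev_accept = cost_of_worship                        # accepting always costs the worship
ev_refuse = p_hell_if_refuse * disutility_of_hell  # refusing risks hell

print(ev_accept, ev_refuse)  # -1.0 vs -100000000.0: accepting wins *if* these premises held
```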

 

Luckily the premises are wrong: 1/10,000 is about 1/10,000 too high for the relevant probability, which is:

the probability that the wager or an equivalent (anything whose acceptance would prevent you going to hell is equivalent) is true

MINUS

the probability that its opposite or an equivalent (anything which would send you to hell for accepting it is equivalent) is true.

 

1/10000 is also way too high even if you're not accounting for opposite possibilities.
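A sketch of why that difference is the quantity that matters, again with made-up numbers: if the wager and its opposite are equally likely, the net expected effect of accepting is zero no matter how large the promised payoff.

```python
# The decision-relevant quantity is the *difference* between the wager's
# probability and its opposite's, not the wager's probability alone.
# Probabilities and payoff magnitude are illustrative placeholders.
p_wager_true = 1e-9       # accepting saves you from hell
p_opposite_true = 1e-9    # accepting sends you to hell instead
payoff_magnitude = 1e30   # however huge the promised stakes are

net_ev_of_accepting = (p_wager_true - p_opposite_true) * payoff_magnitude
print(net_ev_of_accepting)  # 0.0 when the two probabilities cancel exactly
```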

 

 

Equivalence here refers to what behaviours are punished or rewarded. I used hell because it features in the most popular wager, but the point applies to all wagers. To illustrate: if it's true that there is one god, ANTIPASCAL GOD, who sends you to hell for accepting any Pascal's wager, then that's equivalent to any Pascal's wager you hear having a true opposite (no more "or equivalent"s will be typed, but they still apply), because if you accept any Pascal's wager you go to hell. Conversely, if PASCAL GOD is the only god and he sends you to hell unless you accept any Pascal's wager, that's equivalent to any Pascal's wager you hear being true.

 

The real trick of Pascal's wagers is the idea that they're generally no more likely than their opposites. For example, there are lots of good, fun reasons to assign the Christian Pascal's wager a lower probability than its opposite, even engaging on a Christian level:

 

Hell is a medieval invention/translation error: the eternal torture thing isn't even in modern Bibles.

The belief-or-hell rule is hella evil, and it gains credibility from the same source (Christians, not the Bible) who also claim, as a more fundamental belief, that God is good, which directly contradicts the belief-or-hell rule.

The Bible claims that God hates people eating shellfish, taking his name in vain, and jealousy. Apparently taking his name in vain is the only unforgivable sin. So if they're right about the evil stuff, you're probably going to hell anyway.

It makes no sense that god would care enough about your belief and worship to consign people to eternal torture but not enough to show up once in a while.

It makes no sense to reward people for dishonesty.

The evilness really can't be overstated. Eternal torture as a response to a mistake which is, at worst, due to stupidity (but actually not even that: just a stacked-deck scenario) outdoes pretty much everyone in terms of evilness. It is worse than pretty much every fucked-up thing every other god is reputed to have done, put together. The psychopath in the Bible doesn't come close to coming close.

 

The problem with the general case of religious Pascal's wagers is that people make stuff up (usually unintentionally), and which made-up stuff gains traction has nothing to do with what is true. When both Christianity and Hinduism are taken seriously by millions (as were the Roman/Greek gods, Viking gods, Aztec gods, and all sorts of other gods at different times, by large percentages of people), mass religious belief is zero evidence. At most one religion set (e.g. Greek/Roman, Christian/Muslim/Jewish, etc.) is even close to right, so at least the rest are popular independently of truth.

 

The existence of a religion does not elevate the possibility that the god it describes exists above the possibility that the opposite exists, because there is no evidence that religion has any accuracy in determining the features of a god, should one exist.

 

You might intuitively lean towards religions having better-than-zero accuracy if a god exists, but remember there's a lot of fictional evidence out there to generalise from. It is a matter of judgement here: there's no logical proof of zero or worse accuracy (other than it being the default, and the lack of evidence), but negative accuracy is a possibility, and you've probably played priest classes in video games, or just seen how respected religions are, and been primed to overestimate religion's accuracy in that hypothetical. Also, if there is a god, it has not shown itself publicly in a very long time, or ever, so it seems to have a preference for not being revealed. Also, humans tend to be somewhat evil and read into others what they see in themselves, and I assume any high-tier god (one that had the power to create and maintain a hell, detect disbelief, preserve immortal souls and put people in hell) would not be evil. Being evil or totally unscrupulous has benefits among humans which a god would not get; I think that without bad peers or parents there's no reason to be evil, and that people are mostly evil in relation to other people. So I give religions a slight positive accuracy in the scenario where there is a god, but it does not exceed the priors against Pascal's wager (another one being that these wagers are pettily human), or perhaps even the god's desire to stay hidden.

 

Even if God itself whispered Pascal's wager in your ear, there is no incentive for it to actually carry out the threat:

 

There is only one iteration.

AND

These threats aren't being made in person by the deity. They are either second-hand or independently discovered, so:

The deity has no use for making the threat true in order to make it more believable, as it might if it were an imperfect liar (at a level detectable by humans) making the threats in person.

The deity has total plausible deniability.

Which adds up to all of the benefits of the threat having already been extracted by the time the punishment is due, and no possibility of a reputation hit (which wouldn't matter anyway).

 

So, all else being equal, i.e. unless the god is the god of threats or of Pascal's wagers (whose opposites are equally likely):

 

If God is good (positive expected value on human happiness, negative on human sadness, that sort of thing), actually carrying out the threats has negative value.

If god is scarily-doesn't-give-a-shit-neutral towards humans, it still has no incentive to actually carry out the threat, and doing so has a non-zero energy cost.

If god gives the tiniest, most infinitesimal shit about humans, its incentive to actually carry out the threat is negative.

 

If God is evil you're fucked anyway:

The threat gains no power by being true, so the only incentive a god can have for following through is that it values human suffering. If it does, why would it not send you to hell even if you believed in it? (Remember that the god of keeping commitments is as likely as the god of breaking commitments.)

 

Despite the increased complexity of a human mind, I think the most likely motivational system for a god which would make it honour the wager (not saying it's at all likely, just that all others are obviously wrong) is that the god thinks like a human and would therefore keep its commitment out of spite or gratitude or some other human reason. So here's why I think that one is wrong. It's generalizing from fictional evidence: humans aren't that homogeneous (and one without peers would be less so), and if a god gains likelihood of keeping a commitment from humanness, it also gains not-designed-to-be-evil-ness that would make it less likely to make evil wagers in the first place. It also has no source for spite or gratitude, having no peers. Finally, could you ever feel spite towards a bug? Or gratitude? We are not just ants compared to a god; we're ant-ant-ant-etc.-ants.

 

Also, there are reasons that accepting can still get you in trouble: bullies don't get nicer when their demands are met. It's often not the suffering they're after but the dominance, at which point the suffering becomes an enjoyable illustration of that dominance. As we are ant-ant-etc.-ants this probability is lower, but the fact that we aren't all already in hell suggests that if god is evil, it is not raw suffering that it values. Hostages are often executed even when the ransom is paid. Even if it is evil, it could be any kind of evil: its preferences cannot have been homogenised by memes and consensus.

 

There's also the rather cool possibility that if a human-like god is sending people to hell, maybe it's for lack of understanding. If it wants belief, it can take it more effectively than this. If it wants to hurt you, it will hurt you anyway. Perhaps, peerless, it was never prompted to think through the consequences of making others suffer. Maybe god, in the absence of peers, just needs someone to explain that it's not nice to let people burn in hell for eternity. I, for one, remember suddenly realising that those other fleshbags hosted people. I figured it out for myself, but if I had grown up alone as the master of the universe, maybe I would have needed someone to explain it to me.

 

30 comments


comment by Qiaochu_Yuan · 2013-04-22T05:03:33.475Z · LW(p) · GW(p)

and I think quite good

General writing comment: this stopped me from reading the rest of your post. It's a mild crackpot signal.

Also, I really, really hate posts with weird spacing between paragraphs. I don't have a good reason for this.

Replies from: Xachariah, None
comment by Xachariah · 2013-04-22T06:40:36.052Z · LW(p) · GW(p)

Regarding whitespace, I usually like this style of spacing in articles. It makes it much easier to see what's going on and identify clusters of ideas.
It's like punctuation for paragraphs!

In this case, however, the whitespace seemed to have been placed at random and did not separate ideas.

comment by [deleted] · 2013-04-22T05:36:15.194Z · LW(p) · GW(p)

It makes it really hard to focus on.

Same thing happens when people paste their articles from Word and the font ends up being different than the default.

comment by Shmi (shminux) · 2013-04-22T06:30:38.598Z · LW(p) · GW(p)

Downvoted without reading for user-hostile writing and formatting style. Show some respect to your readers, man.

comment by duckduckMOO · 2013-04-22T07:45:33.278Z · LW(p) · GW(p)

Is the spacing less annoying now? It wasn't at random: it had 4 gaps between topics, 2 between points, and one in a few minor places where I just wanted to break it up. The selection of that scheme was pretty much random though. I just spaced it like I would read it out loud. Which was kind of stupid. I can't expect people to read it in my voice. Anyway, is this any better?

Got rid of the "and I think quite good." I just meant I liked it enough to want to share it in a discussion post. I assume that's not the interpretation that was annoying people. How did people read it that made it a crackpot signal?

Replies from: Richard_Kennaway, wedrifid, orthonormal, ygert
comment by Richard_Kennaway · 2013-04-22T10:28:58.207Z · LW(p) · GW(p)

Is the spacing less annoying now?

No. The spacing is just as annoying. It still looks random. Use section titles, bulleted lists, etc. as appropriate, not more space between paragraphs.

But I don't think that will fix this article. The content is just as rambling and random.

Re "and I think quite good": this should -- literally -- go without saying. Anyone who posts something thinks it good enough to post.

Replies from: DaFranker
comment by DaFranker · 2013-04-22T18:05:34.873Z · LW(p) · GW(p)

Anyone who posts something thinks it good enough to post.

Or is running a controlled experiment.

Replies from: gwern
comment by gwern · 2013-04-22T20:06:26.475Z · LW(p) · GW(p)

To be fair, every post involved there was by someone who thought it good enough to post. The quality of posts wasn't being manipulated - the first comment was.

comment by wedrifid · 2013-04-22T08:01:57.966Z · LW(p) · GW(p)

Is the spacing less annoying now? It wasn't at random: it had 4 gaps between topics, 2 between points, and one in a few minor places where I just wanted to break it up.

Consider adopting the 'headings and subheadings' practice.

comment by orthonormal · 2013-04-22T17:42:05.756Z · LW(p) · GW(p)

Just wanted to positively reinforce you for reading the earlier criticism on spacing, and editing accordingly. It's great that you have the habit of listening and constructively responding to feedback!

Like the other people here have said, this still has a ways to go before it gets to the usual standard of readability for a post (where the reader should have an interesting reason at the start to keep reading, and know at each point where they are in the scheme of the argument), but that's something one learns how to do by practice.

(This also applies to the criticism about the content being meandering and confused: several times I've started writing a post, then realized that I didn't have a clear idea where I was going, and so I left it as a draft for the time being. Once I'd written a few substantive posts, I had a pretty good idea which drafts deserved to be posted and which ones needed further development. In the latter case, starting a conversation on an Open Thread is a good way to help shape one's thinking for a post.)

comment by ygert · 2013-04-22T08:08:09.909Z · LW(p) · GW(p)

Got rid of the "and I think quite good." I just meant I liked it enough to want to share it in a discussion post. I assume that's not the interpretation that was annoying people. How did people read it that made it a crackpot signal?

What people disliked was the bad grammar. If you want people to react positively to what you write, you need to make it easy to read. This includes using good spelling (which you do seem to have managed to do) and good grammar.

Replies from: evand
comment by evand · 2013-04-22T14:29:56.607Z · LW(p) · GW(p)

Exactly.

Spelling checks are mandatory. For starters: "followability" is not a word. Neither is "dun" (well, actually it is, but not one that means anything relevant to your article). "Pascal" should always be capitalized.

The bit about the article starting as a comment can go away -- who cares? And if we do, is that really the best thing to lead off with, to catch the reader's interest? The first sentence after that is an awkward, rambly, run-on sentence.

And so on, and so forth. This article needs a lot of editing, at a bare minimum. I'm also fairly sure the content isn't that interesting, but the lack of editing was sufficient to make me stop reading.

comment by DanielLC · 2013-04-22T04:55:14.899Z · LW(p) · GW(p)

The real trick of Pascal's wagers is the idea that they're generally no more likely than their opposites.

Technically true, but half of them are more likely than their opposite, and the other half are less likely. If the payoff is large enough, that difference will be sufficient to cause trouble.

Replies from: mwengler, private_messaging
comment by mwengler · 2013-04-22T16:52:46.341Z · LW(p) · GW(p)

Technically true, but half of them are more likely than their opposite, and the other half are less likely.

This would matter if you KNEW which half was which. Which you generally don't.

Replies from: DanielLC
comment by DanielLC · 2013-04-22T21:48:20.384Z · LW(p) · GW(p)

Probability is in the mind. You always know which is more likely. It's the one you think is more likely.

People are sort of built to set probabilities to Schelling points which would make it difficult, but you'd still have some intuition or something pointing a little in one direction.

Replies from: mwengler
comment by mwengler · 2013-04-23T03:44:20.721Z · LW(p) · GW(p)

Probability is in the mind. You always know which is more likely. It's the one you think is more likely.

I would very much enjoy playing poker with you for money.

Replies from: DanielLC
comment by DanielLC · 2013-04-23T04:57:55.708Z · LW(p) · GW(p)

If you're going to be betting based on what you think is less likely, then I would like to play with you too.

Replies from: mwengler
comment by mwengler · 2013-04-23T15:34:19.923Z · LW(p) · GW(p)

If you're going to be betting based on what you think is less likely, then I would like to play with you too.

OK, telegraphic writing and reading is failing me. I'm looking for the meaning behind this and I get stuck on the idea that P_MoreLikely = 1 - P_LessLikely at least if there are only two choices, so I can't figure out the important difference between betting on what is more likely and betting on what is less likely.

The point of my post is that probability is hardly just what you think it is. And that there are plenty of ways that people actually think about the probability of poker hands that turn out to be quite consistent with their losing money. Far be it from me to infer publicly that that means they were "wrong" about such a subjective thing as probability. But I am happy to collect their money.

Replies from: DanielLC
comment by DanielLC · 2013-04-23T19:52:41.554Z · LW(p) · GW(p)

Probability is in the mind. If you know a coin is biased, but you don't know which way it's biased, then the first flip is fair. If you suspect that it's biased towards heads, then it's biased towards heads.

You could also think of yourself as a coin. Nobody is stupid enough to be biased towards wrong. You'd have to be smart to manage that. You might have biases in each individual decision that make you consistently wrong, but if you have a bucket of coins, and you know that they all are biased but more are biased towards landing on heads than landing on tails, then if you take a coin out of the bucket and flip it, it's biased towards heads.
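A small worked version of the bucket analogy, with a made-up mix of biases:

```python
# Hypothetically, 60% of the coins in the bucket land heads 70% of the time,
# and 40% land heads only 30% of the time. Draw one coin at random, flip it once.
p_heads_biased = 0.6
p_tails_biased = 0.4
heads_rate_if_heads_biased = 0.7
heads_rate_if_tails_biased = 0.3

p_first_flip_heads = (p_heads_biased * heads_rate_if_heads_biased
                      + p_tails_biased * heads_rate_if_tails_biased)
print(p_first_flip_heads)  # 0.54 > 0.5: to you, the drawn coin is biased towards heads
```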

If you know you're not logically omniscient, the correct action isn't to set all probabilities to 50%. It's to try and find your biases and correct for them, but use whatever you have at your disposal until then.

Replies from: mwengler
comment by mwengler · 2013-04-23T20:47:26.741Z · LW(p) · GW(p)

I read the probability post you referenced. The question is WHAT is in your mind. If one person has a whole hell of a lot more correctly determined Bayesian conclusions about poker hands than another, and the two of them play poker, they will both bet based on what is in their heads. The one with the better-refined knowledge about poker hands will take money, on average, from the one with the worse knowledge. If the game is fixed that might change things, but if the game is fixed and neither of them has prior knowledge of this, it is still more likely the knowledgeable player will figure out how the game is fixed, and how to exploit that, than the less knowledgeable player.

So if we disagree about the probability of something, do you just agree that for you the probability is p and for me it is p'? I don't. The frequentist interpretation of probability doesn't exist because people are idiots; rather, it exists because for a very broad range of things it provides an excellent map of the world. If I think I am going to be just as good at poker because my opponent and I both have heads and probability is just in our heads, and my opponent simply knows more about the odds of poker, I will lose. We both just had probabilities in our heads, though. And if my opponent had known LESS about poker, it would have appeared that mine were at least as good as his. But someone who thinks probabilities are whatever he thinks they are is precisely the kind of person you want to bet against. Not being a frequentist does not excuse you from the very real distributions of outcomes the world will give you in dealing out cards from a shuffled deck.

If you know a coin is biased, but you don't know which way it's biased, then the first flip is fair.

By that you mean you would not expect to do better betting on heads vs tails. OK.

If you suspect that it's biased towards heads, then it's biased towards heads.

No, your suspicions cannot bend reality. If it comes up heads first, then you would think it more probable that it is biased towards heads than that it is biased towards tails. You can't even assign a numerical probability other than >50% to it coming up heads a 2nd time without knowing more about how it might be biased. Is it biased in a way which gives it runs (more likely to hit heads a 2nd time after hitting it the first)? Is it biased in a way that gives it at most a 5% deviation from fair? Even having access to a very long sequence of results from the biased coin doesn't let you easily determine what the bias is. What if it is biased in a way so that every 67th flip is heads? How long before you notice that?

Yes detecting bias is important, but so is figuring the odds when games are fair, when things are as they seem to be. There is a tremendous amount of money to be made and lost playing fair games.

comment by private_messaging · 2013-04-22T05:41:30.499Z · LW(p) · GW(p)

In the case of a literal mugging by someone who forgot their gun and decided to talk about the Matrix instead, the larger half is that if you pay, you have less money for a potential mugger who, when asked for a proof, said "ok" and made a display appear in front of you, showing something impressive.

I'm thinking there are three entirely different issues here:

1: Priors may be too high for some reason (e.g. 2^-(theory length) priors do not yield a converging sum). It looks like invalid actions resulting from legitimate probability assignment and legitimate expected utility calculation, but it really isn't: the sum does not converge, and its apparent sign depends on the order of summation. It's just a case of bad math being bad. (A toy illustration of this non-convergence appears at the end of this comment.)

2: Low probability that comes from a huge number of alternative scenarios, or from the shakiness of the argument, also relates to an inability to evaluate other actions sufficiently: an invalid expected utility estimate due to a partial sum. The total is unreasonably biased by the choice of the terms which are summed (in the expected utility calculation).

3: Ignoring the general tendency of actions conditional on evidence to have higher utility than actions not conditional on evidence (as is the case for that literal mugger example). Not considering alternatives conditional on evidence (e.g. "decide to pay only to a mugger with a proof" is a valid action). The utility of assets (money you have) is easy to under-evaluate because it requires modelling of future situations and your responses.

edit: also, it's obviously wrong to just ignore low-probability scenarios that carry some cost. When observing proper lab safety precautions, or looking both ways when crossing the street, you're attending to exactly such scenarios. Likewise for not playing other variations of Russian roulette. The issue tends to arise when scenarios are purely speculative, which makes me think that it's speculations that are to blame: you assign too high a probability to a speculation, and then, when estimating the utility sum, you neglect to apply the appropriate adjustment (if the scenario was chosen at random, regression to the mean) for the incompleteness of your sum.
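A toy illustration of the non-convergence in point 1, with entirely made-up numbers: give hypothesis n a prior of 2^-n and a promised payoff of 4^n (a stand-in for 3^^^3-style utilities that grow faster than the prior shrinks). The partial sums of prior times payoff then grow without bound, so the expected utility is undefined, whereas capping the utility (a bounded utility function) makes the same sum converge.

```python
# Hypothetical: prior 2^-n for hypothesis n, payoff 4^n (unbounded)
# versus the same payoff capped at 1000 (bounded). Compare partial sums.
def partial_sums(payoff, n_terms=30):
    total = 0.0
    sums = []
    for n in range(1, n_terms + 1):
        total += (2.0 ** -n) * payoff(n)
        sums.append(total)
    return sums

unbounded = partial_sums(lambda n: 4.0 ** n)              # keeps growing: no defined expectation
bounded = partial_sums(lambda n: min(4.0 ** n, 1000.0))   # settles down: expectation exists

print(unbounded[-1], bounded[-1])  # ~2.1e9 and still climbing vs ~92.5 and converged
```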

Replies from: ArisKatsaris, DanielLC
comment by ArisKatsaris · 2013-04-22T10:42:30.838Z · LW(p) · GW(p)

In the case of a literal mugging by someone who forgot their gun and decided to talk about the Matrix instead, the larger half is that if you pay, you have less money for a potential mugger who, when asked for a proof, said "ok" and made a display appear in front of you, showing something impressive.

I remember this being one of the solutions people came up with in some of the very early discussions about Pascal's mugging, but it is generally considered highly unsatisfactory. Refraining from an action that would be seen as positive-expected-value by itself, because one is worried that "some Matrix Lord may appear with evidence in the future requiring my resources", only worsens the problem, transforming it into the muggerless and worse variety of Pascal's mugging -- which would prevent you from ever using any resources for any reason, even ones considered prudent.

E.g. "Should I install a fire-alarm with 100 dollars for the purposes of early warning in cases of a fire?" "No, I will have then less resources in case a Matrix-Lord comes with evidence and requires them of me." A mind that utilized such a logic would no longer even need a mugger in the first place to fall into insanity...

Besides, even if we specified "require evidence before allocating resources", what is the limiting factor for what sort of evidence is to be considered good enough?

Replies from: private_messaging, Richard_Kennaway
comment by private_messaging · 2013-04-22T14:34:17.892Z · LW(p) · GW(p)

You might die before you meet some Matrix Lord, you know. Fire-alarm-wise, you're in the clear. And if you have #1, it's not a Pascal's mugging situation, it's a "your utility function does not work at all" situation: you need to either use a bounded utility or use the speed prior (which makes the priors for such scenarios smaller).

edit: and even if your priors are correct, you're still facing the problem that your sums are not complete.

comment by Richard_Kennaway · 2013-04-22T11:25:59.260Z · LW(p) · GW(p)

E.g. "Should I install a fire-alarm with 100 dollars for the purposes of early warning in cases of a fire?" "No, I will have then less resources in case a Matrix-Lord comes with evidence and requires them of me." A mind that utilized such a logic would no longer even need a mugger in the first place to fall into insanity...

I am reminded of the Island Where Dreams Come True in The Voyage of the Dawn Treader, which is exactly what its name says. Not daydreams or longings, but all of your worst nightmares. Having once imagined a thing calls it into existence there.

The muggerless mugging follows from giving a hypothesis some credence just because you imagined it, otherwise called the Solomonoff prior. I recall Eliezer writing here some years ago that he did not have a solution. I don't know if he has since found one.

comment by DanielLC · 2013-04-22T21:53:46.005Z · LW(p) · GW(p)

Priors may be too high for some reason (e.g. 2^-(theory length) priors do not yield a converging sum).

I've mentioned elsewhere that this is generally what causes it. The problem is, is that really a good enough reason to use different priors? Consider the similar situation where someone rejects the 2^-(theory length) priors on the basis that it would say God doesn't exist, and they don't want to deal with that.

It's just a case of bad math being bad.

Are you saying you can get around it just by using better math, instead of messing with priors?

Replies from: private_messaging
comment by private_messaging · 2013-04-23T05:24:55.346Z · LW(p) · GW(p)

The problem is, is that really a good enough reason to use different priors?

The sum not converging is reason enough; it's not that there's a potential "Pascal's mugging" problem, it's that the utility is undefined entirely.

Consider the similar situation where someone rejects the 2^-(theory length) priors on the basis that it would say God doesn't exist, and they don't want to deal with that.

That prior doesn't say God doesn't exist; some very incompetent people who explain said prior say that it does, but the fact is that we do not know and will never know. At most, Gods are not much longer to encode than universes where intelligent life evolves anyway (hence the Gods in the form of superintelligences, owners of our simulation, and so on).

Are you saying you can get around it just by using better math, instead of messing with priors?

What do you mean? The "bad math" is this idea that utility is even well defined given a dubious prior where it is not well defined. It's not like humans use theory-length prior, anyway.

What you can do is use the "speed prior" or a variation thereof. It discounts for the size of the universe (-ish), making the sum converge.
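A rough sketch of the kind of size/runtime discount being described (a simplified illustration, not the exact definition of the speed prior; all numbers are invented): hypotheses that postulate astronomically large universes, and hence astronomically large payoffs, get an extra penalty on top of the 2^-length weight, which is what lets the expected-utility sum converge.

```python
# Illustrative weights only: length-based prior 2^-length, and a speed-prior-style
# variant that additionally divides by the size/runtime the hypothesis requires.
hypotheses = [
    # (description, length_in_bits, size_or_runtime) -- all values made up
    ("small ordinary world", 100, 1e6),
    ("huge world promising 3^^^3-scale payoffs", 120, 1e50),
]

for name, length, size in hypotheses:
    length_weight = 2.0 ** -length
    speed_style_weight = length_weight / size   # extra discount for size/runtime
    print(name, length_weight, speed_style_weight)

# Length-only: the huge-payoff hypothesis is only ~2^-20 times less likely.
# With the size/runtime discount it loses another factor of ~1e44, which is
# what tames huge-payoff terms in the expected-utility sum.
```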

Note that it still leaves any practical agent with a potential problem, in that arguments by potentially hostile parties may bias its approximations of the utility, by providing speculations which involve utilities that are large but not physically impossible under known laws of physics; these are highly speculative, and thus the approximate utility calculations do not adjust both sides of the utility comparisons equally.

Replies from: DanielLC
comment by DanielLC · 2013-04-23T06:53:52.928Z · LW(p) · GW(p)

The sum not converging is reason enough; it's not that there's a potential "Pascal's mugging" problem, it's that the utility is undefined entirely.

For any prior with infinitely many possibilities, you can come up with some non-converging utility function. Does that mean we can change how likely things are by changing what we want?

The other strategy is to change your utility function, but that doesn't seem right either. Should I care less about 3^^^3 people just because it's a situation that might actually come up?

Replies from: private_messaging
comment by private_messaging · 2013-04-23T07:05:03.631Z · LW(p) · GW(p)

For any prior with infinitely many possibilities, you can come up with some non-converging utility function. Does that mean we can change how likely things are by changing what we want?

A prior is not how likely things are. It's just a way to slice a total probability of 1 among the competing hypotheses. If you allocate slices by length, you get the length-based prior; if you allocate slices by runtime and length, you get the speed prior.

Ideally you'd want to quantify all symmetries in the evidence and somehow utilize those, so that you immediately get a prior of 1/6 for a side of a symmetric die when you can't make predictions. But the theory-length prior doesn't do that either.

The other strategy is to change your utility function, but that doesn't seem right either. Should I care less about 3^^^3 people just because it's a situation that might actually come up?

It seems to me that such a situation really should get unlikely faster than 2^-length gets small.

Replies from: DanielLC
comment by DanielLC · 2013-04-23T19:38:05.019Z · LW(p) · GW(p)

A prior is not how likely things are. It's just a way to slice a total probability of 1 among the competing hypotheses.

And I could allocate it so that there is almost certainly a god, or even so there is certainly a god. That wouldn't be a good idea though, would it?

It seems to me that such a situation really should get unlikely faster than 2^-length gets small.

What would you suggest to someone who had a different utility function, where you run into this problem when using the speed prior?

Also, the speed prior looks bad. It predicts the universe should be small and short-lived. This is not what we have observed.

Do you think there is a universe outside of our past light cone? It would increase the program length to limit it to that, but not nearly as much as it would decrease the run time.

Replies from: private_messaging
comment by private_messaging · 2013-04-23T20:55:31.875Z · LW(p) · GW(p)

And I could allocate it so that there is almost certainly a god, or even so there is certainly a god. That wouldn't be a good idea though, would it?

There isn't a single "Solomonoff induction"; the choice of the machine is arbitrary, and for some machines the simplest way to encode our universe is through some form of god (the creator/owner of a simulation, if you wish). In any case, the prior for a universe with a god is not that much smaller than the prior for a universe without one, because you can obtain a sentient being simply by picking data out of any universe where such a being evolves. Note that these models with some god work just fine, and no, even though I am an atheist, I don't see what the big deal is.

Also, the speed prior looks bad. It predicts the universe should be small and short-lived. This is not what we have observed.

The second source of problems is the attribution of reality to the internals of the prediction method. I'm not sure it is valid for either prior. Laws of the universe are most concisely expressed as properties which hold everywhere rather than as calculation rules of some kind; the rules are derived as alternate structures that share the same properties.