What do you mean by Pascal's mugging?
post by XiXiDu · 2014-11-20T16:38:46.970Z
Some people[1] are now using the term Pascal's mugging as a label for any scenario with a large associated payoff and a small or unstable probability estimate, a combination that can trigger the absurdity heuristic.
Consider the scenarios listed below: (a) Do these scenarios have something in common? (b) Are any of these scenarios cases of Pascal's mugging?
(1) Fundamental physical operations -- atomic movements, electron orbits, photon collisions, etc. -- could collectively deserve significant moral weight. The total number of atoms or particles is huge: even assigning a tiny fraction of human moral consideration to them or a tiny probability of them mattering morally will create a large expected moral value. [Source]
(2) Cooling something to a temperature close to absolute zero might be an existential risk. Given our ignorance we cannot rationally give zero probability to this possibility, and probably not even give it less than 1% (since that is about the natural lowest error rate of humans on anything). Anybody saying it is less likely than one in a million is likely very overconfident. [Source]
(3) GMOs might introduce “systemic risk” to the environment. The chance of ecocide, or the destruction of the environment and potentially humans, increases incrementally with each additional transgenic trait introduced into the environment. The downside risks are so hard to predict -- and so potentially bad -- that it is better to be safe than sorry. The benefits, no matter how great, do not merit even a tiny chance of an irreversible, catastrophic outcome. [Source]
(4) Each time you say abracadabra, 3^^^^3 simulations of humanity experience a positive singularity.
If you read up on any of the first three scenarios, by clicking on the provided links, you will notice that there are a bunch of arguments in support of these conjectures. And yet I feel that all three have something important in common with scenario four, which I would call a clear case of Pascal's mugging.
I offer three possibilities for what these and similar scenarios have in common:
- Probability estimates of the scenario are highly unstable and highly divergent between informed people who spent a similar amount of resources researching it.
- The scenario demands that skeptics either falsify it or accept its decision-relevant consequences. The scenario is, however, either unfalsifiable by definition, too vague, or almost impossibly difficult to falsify.
- There is no or very little direct empirical evidence in support of the scenario.[2]
In any case, I admit that it is possible that I just wanted to bring the first three scenarios to your attention. I stumbled upon each very recently and found them to be highly..."amusing".
[1] I am also guilty of doing this. But what exactly is wrong with using the term in that way? What's the highest probability for which the term is still applicable? Can you offer a better term?
[2] One would have to define what exactly counts as "direct empirical evidence". But I think that it is pretty intuitive that there exists a meaningful difference between the risk of an asteroid that has been spotted with telescopes and a risk that is solely supported by a priori arguments.
24 comments
comment by jimrandomh · 2014-11-20T17:44:56.624Z
None of the above are Pascal's Mugging, as stated. While some people have taken to using Pascal's Mugging as a generic term for anything that's very-low-probability and very-high-impact, I think that's missing the point of the original thought experiment. Pascal's Mugging is a scenario in which the size of the impact and the smallness of the probability are entangled together. It shows that, if your utility function and epistemology are broken in a particular technical way, then no degree of improbability, no matter how astronomical, will suffice to let you ignore something.
comment by Luke_A_Somers · 2014-11-20T20:06:42.303Z
Given our ignorance we cannot rationally give zero probability to this possibility, and probably not even give it less than 1% (since that is about the natural lowest error rate of humans on anything)
I am pretty sick of 1% being given as the natural lowest error rate of humans on anything. It's not.
In this particular case, we've made balls of stuff much colder than this, though smaller. So not only does this killer effect have to exist, but it also needs to be size-dependent like fission.
If you give me 100 theories as far-fetched as this, I'd be more confident that all of them are false, than that any are true.
↑ comment by Strilanc · 2014-11-21T17:51:28.385Z
I am pretty sick of 1% being given as the natural lowest error rate of humans on anything. It's not.
Hmm. Our error rate moment to moment may be that high, but it's low enough that we can do error correction and do better over time or as a group. Not sure why I didn't realize that until now.
(If the error rate were too high, error correction would be so error-prone that it would just introduce more error. Something analogous happens in quantum error correction codes.)
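A minimal sketch of the classical analogue of that parenthetical (my own illustration, not from the comment; it assumes independent errors and a simple 3-copy majority vote):

```python
# Classical repetition-code analogue of the "threshold" idea:
# take 3 independent copies with per-copy error rate p and use a
# majority vote. The corrected error rate is 3p^2(1-p) + p^3,
# which beats p only when p < 1/2; above that threshold,
# "correction" introduces more error than it removes.

def majority_vote_error(p):
    """Probability that at least 2 of 3 independent copies are wrong."""
    return 3 * p**2 * (1 - p) + p**3

for p in (0.01, 0.1, 0.4, 0.6):
    print(f"p = {p:.2f} -> corrected error rate = {majority_vote_error(p):.4f}")
```

(The actual quantum threshold is far lower than 1/2; the sketch only shows the qualitative point that correction helps if and only if the base error rate is low enough.)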
comment by iarwain1 · 2014-11-21T00:41:26.585Z
I keep seeing people responding to Pascal's Wager / Mugging by saying we just shouldn't pay attention to very low probabilities. (IIRC Eliezer said something similar as well). But intuitively I don't think this is true.
Imagine that a stranger comes up to you and offers you a cup of water and tells you, "Please drink this to humor me. It has a chemical I invented which seems to kill a tiny fraction of people in fascinatingly gruesome ways. But don't worry, I've already tested 10,000 people and only two of them have been affected." I have a hard time imagining that anybody would drink it.
The way I usually look at Pascal's Wager / Mugging is that this is just one of those paradoxes we seem to get into when we take decision theory and probability theory to extremes. There may or may not be a good decision-theoretic account of precisely what the answer to the paradox is, but in the meantime we need to go with our intuition that it's just wrong, even if we don't know exactly why.
So ultimately I think we just need to go with our intuitions on this. Is 1:10,000 too small a probability? What does your heart tell you?
Side note: I noticed that Nick Bostrom seems to invoke these sort of arguments several times in his book. I think he mentioned it regarding how much we should worry about x-risks especially given our astronomically large potential cosmic endowment. I also think he alluded to it when he mentions that a superintelligence would first take steps to prevent all sorts of catastrophic risks before proceeding with some new technologies.
↑ comment by Manfred · 2014-11-21T02:37:29.844Z
Well, there's small and then there's small. Sadly, human brains are bad at handling this sort of distinction. One in 10,000 is only 13 or 14 coin flips in a row (note that 'number of coin flips' is a log scale, which is useful to help humans conceptualize big or small numbers). 14 coin flips is more like 1 coin flip than it's like a thousand coin flips.
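A quick check of that arithmetic (a minimal sketch; the extra probabilities are included just for comparison):

```python
import math

# Express a probability as "number of fair coin flips in a row",
# i.e. log base 2 of the odds against it.
for p in (1 / 10_000, 2 / 10_000, 1 / 1_000_000):
    print(f"p = {p:g} -> about {math.log2(1 / p):.1f} coin flips in a row")
```

This confirms the figure in the comment: 1 in 10,000 is about 13.3 consecutive heads.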
↑ comment by Unknowns · 2014-11-21T01:19:08.522Z
...there may or or may not be a good response to those arguments against my religion or philosophy or whatever, but in the meantime I need to go with my intuition that the arguments are just wrong, even if I don't know exactly why. ...
↑ comment by iarwain1 · 2014-11-21T14:33:05.522Z
I would say (mostly based on intuition, again) that we should assign a probability to our intuitions being correct vs. complicated, subtle arguments. We should take into account that our intuitions are often correct (depending on the issue in question) even if we can't always explain why. We should also take into account that on similar types of issues with complicated subtle arguments we might not be smart enough to determine how to resolve all the arguments (again, depending on the issue in question).
In some cases there will be so much evidence that our intuitions should bow, and in other cases there will be only weak arguments vs. powerful intuitions, in which case (assuming it's a case where our intuitions have some weight) we'd probably say the intuitions should win.
What about a grey case where we don't know which one wins? In epistemic questions (which map better fits the territory), we can just leave it as unresolved, assigning probabilities close to 50% for both sides. In some decision questions, however, we must pick one side or the other, and in these cases my impression (I don't know too much decision theory) is that we essentially pick one side at random.
In the case of a pet religion / philosophy, we know that our intuitions are subject to a number of powerful biases which we need to take into account, and in many cases the arguments against our religion / philosophy might be very powerful. In the case of Pascal's Mugging decision problems, however, to the best of my knowledge there are only weak biases involved and very iffy arguments, so I'd think we should go with our intuitions.
One more point: Most of the arguments involving extreme cases of decision theory that I've seen (including Pascal's Mugging) start from the assumption that "this is obviously wrong, but why?". So we're anyway appealing to our intuition that it's wrong. Then we go on to say that "maybe it's wrong because of X", and we then extrapolate X back up the probability scale to less extreme cases and say that "well, if X is correct then we should counter-intuitively not worry about this case either". In which case we end up using an extrapolation of a tentative response to one intuition in order to argue against another intuition. I (intuitively?) think there's something wrong with that.
↑ comment by Unknowns · 2014-11-22T19:44:30.433Z
I agree with basically all of this.
It irritates me when people talk as though Pascal's Mugging or Wager is an obvious fallacy, but then give responses which are fallacious themselves, like saying that the probability of the opposite is equal (it is not), or that there are alternative scenarios which are just as likely, and then giving much less probable scenarios (e.g. that there is a god that rewards people for being atheist), or saying that when you are dealing with infinities it does not matter which one is more probable (it does). You are quite correct that people are just going with their intuition and trying to justify that, and it would be much more honest if people admitted that.
It seems to me that the likely true answer is that there is a limit to how much a human being can care about something, so when you assign an incredibly high utility, this does not correspond to anything you care about it in reality, and so you don't follow decision theory when this comes up. For example, suppose that you knew with 100% certainty that XiXiDu's last scenario was true: whenever you say "abracadabra", it causes a nearly immeasurable amount of utility in the simulations, but of course this never touches your experience. Would you do nothing for the rest of your life except say "abracadabra"? Of course not, no more than you are going to donate all of your money to save children in Africa. This simply does not correctly represent what you care about. This is also the answer to Eliezer's Lifespan Dilemma. Eliezer does NOT care about an infinite lifespan as much as he says he does, or he would indeed take the deal. Likewise, if he really cared infinitely about eternal life, he would become a Christian (or a member of some other religion promising eternal life) immediately, no matter how low the probability of success. But neither he nor any human being cares infinitely about anything.
↑ comment by hairyfigment · 2014-11-25T06:59:11.596Z
I agree that people frequently give fallacious responses, and that the opposite is not equal in probability (it may be much higher). I disagree with roughly everything else in the parent. In particular, "god" is not a natural category. By this I mean that if we assume a way to get eternal life exists, the (conditional) probability of any religion that makes this promise still seems vanishingly small - much smaller than the chance of us screwing up the hypothetical opportunity by deliberately adopting an irrational belief.
This does not necessarily mean that we can introduce infinite utility and it will add up to normality. But e.g. we observe the real Eliezer taking unusual actions which could put him in a good position to live forever if the possibility exists.
↑ comment by Shmi (shminux) · 2014-11-21T04:07:45.986Z
I've already tested 10,000 people and only two of them have been affected." I have a hard time imagining that anybody would drink it.
I queried my brain and it said that it cares not just about small but definite probabilities (even though 1:5000 is already uncomfortably high), but also about the certainty of estimating them. For example "none of 10,000 people have been affected, but I expect eventually some would" feels much more comfortable than 2 in 10,000.
So, my feeling is that very low probability, together with high uncertainty of the number given, probably due to poor calibration on such events, is what makes me ignore apparent Pascal's Muggers without thinking too hard.
Not sure what makes you decide whether a low probability offer is a lottery or a mugging attempt.
↑ comment by V_V · 2014-11-22T02:16:19.385Z
I don't think that Pascal's mugging is just a scenario where the expected payoff contains a small probability multiplied by a utility large in absolute value.
I think it is better to use the term "Pascal's mugging", or perhaps "Pascal scam" to describe a scenario where, in addition to the probabilities and the utilities being extreme, there is also lots of "Knightian" (= difficult to formalize) uncertainty about them.
↑ comment by Jiro · 2014-11-21T22:25:40.775Z
I would consider Pascal's Mugging to be a situation with a low probability, a high impact, and where the low probability consists of several components, one of which is uncertainty about what the others are.
Telling me that some substance kills 2 of 10000 people is not Pascal's Mugging. Telling me that some substance kills everyone who takes it, but I know that there's a 99.998% chance you're a liar, is closer.
Edit: And I would consider all four scenarios Pascal's Mugging.
comment by ArisKatsaris · 2014-11-21T12:34:34.627Z
In my current (fuzzy) thinking, it can be considered Pascal's Mugging if the probability is several orders of magnitude too low for you to even attempt to calculate it, and your 'mugger' therefore needs to depend on the payoff being large enough to outweigh in your mind the not-actually-calculated tininess of the probability.
Your scenarios 1, 2 and 4 fall into that category as far as I'm concerned. Your scenario 3 does not -- the payoff being limited to merely the global ecosystem means that I can try to evaluate the probability I assign to the systemic risk of GMOs, and evaluate the benefit vs the risk in a normal fashion. Other technologies that may increase existential risks because of weaponisation or accident (nuclear fission, AI, nanotechnology, advanced bioengineering) can be treated similarly to GMOs, with normal benefit-vs-risk calculations; they're not Pascal's muggings.
comment by lmm · 2014-11-20T20:26:04.941Z
I use the term for any case where people try to multiply a high utility by a low probability and get a substantial number. I think this is an error in general, and it's useful to have a term for it, rather than reserving the term for a specific technical scenario which is no longer especially interesting.
↑ comment by Unknowns · 2014-11-21T01:45:08.868Z
Whether or not you get a substantial number when you multiply a high utility by a low probability is a question of mathematics with a definite answer, depending on the particular values in question. Sometimes you do get such a number, so it is not an error in general.
↑ comment by ChristianKl · 2014-11-21T15:03:07.661Z
I think this is an error in general, and it's useful to have a term for it
Why do you think that the term Pascal's mugging is good for that purpose? I haven't seen a case where labeling something that way advanced the discussion.
↑ comment by lmm · 2014-11-21T19:59:05.579Z
Do you feel the same way about other named fallacies, e.g. "ad hominem" or "slippery slope"?
↑ comment by ChristianKl · 2014-11-21T20:39:10.739Z
No. Those names are a lot better. "Ad hominem" is a descriptive phrase even when it happens to be in Latin. Slippery slope is even very plain English.
Both terms also have the advantage of being widely understood and not being local jargon.
comment by RowanE · 2014-11-20T21:15:54.561Z
Well, with 4, as long as you're just floating the hypothesis rather than attempting a Pascal's Mugging and claiming you have knowledge from outside the Matrix that this is true, we have no evidence at all indicating it's more likely than the equal-and-opposite hypothesis that it causes 3^^^^3 negative singularities. This probably also applies to 1, but not to 2 and 3: if those have extreme outcomes, they seem more likely to go one way than the other. Even considering that they might somehow produce FAI, it's more likely that they'd produce uFAI.
↑ comment by solipsist · 2014-11-21T00:16:19.922Z
Unless you have a good reason to believe the opposite hypotheses balance each other out to log₁₀(3^^^^3) decimal places, I don't think that line of argument buys you much.
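A worked version of that point (my own sketch, not part of the comment): write p+ and p- for the probabilities of the positive and the equal-and-opposite hypothesis, and U = 3^^^^3 for the size of the payoff in either direction. Then

\[
\mathbb{E}[\text{utility}] \;=\; p_{+}\,U \;-\; p_{-}\,U \;=\; (p_{+} - p_{-})\,U ,
\qquad U = 3\uparrow\uparrow\uparrow\uparrow 3 .
\]

Unless the difference p+ - p- is smaller than roughly 1/U, the expected utility is still dominated by U; that is the "cancellation to log10(3^^^^3) decimal places" requirement in the comment.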
↑ comment by RowanE · 2014-11-21T08:29:17.568Z
I don't think I have to believe that, what's wrong with just being agnostic as to which hypothesis outweighs the other?
↑ comment by solipsist · 2014-11-21T16:16:08.720Z
The information value of which outcome outweighs the other is HUGE. More expected lives hinge on a 0.01 shift in the balance of probabilities than would live if we merely colonize the visible universe with humans the size of quarks.