Pascal's Mugging, Finite or Unbounded Resources?
post by Irgy · 2015-10-15T04:01:54.393Z
This article addresses Pascal's Mugging; for background on the scenario, see here.
I am going to attack the problem by breaking it up into two separate cases, one in which the mugger claims to be from a meta-world with large but finite resources, and one in which the mugger claims to be from a meta-world with unbounded resources. I will demonstrate that in both cases the mugging fails, for different reasons, and argue that much of the appeal of the mugging comes from the conflation of these two cases.
Large but finite resources
In this case, the mugger claims to be from a world with a bounded amount of resources, but still enough to torture n people (e.g. n = 3^^^^3). I will argue that the prior for such a world should be of the order of 1/n or lower, and in particular not 1/complexity(n). With a prior of 1/n or less, the mugging fails, because no matter how large a number the mugger claims, the likelihood of their claim being true decreases proportionally. Thus there need be no value for which the claim is more worrisome than implausible.
We're faced with uncertainty because the world the mugger claims to be from is outside our universe. We have no information on which to base our estimate of its size, other than that it is substantially bigger than our universe (at least in the case of a matrix-like simulating world this is necessary). However, that ignorance is also our strength, because there is a standard ignorance prior for a scale parameter, and that prior is 1/n.
The first reason not to use a complexity prior is that there is simply no reason to use one. Why would a world with a particular finite amount of resources be more likely to be of an easily describable, low-complexity size? If you were to guess the size of our universe, certainly you might round the number to the nearest power of 10, but not because you think a round number is more likely to be correct. A world of difficult-to-describe size is just as likely to exist as a world with a similar but easily describable size.
A critical point here is that the complexity of the world itself of size n is proportional to n, not complexity(n). In order for a computer program to model the behaviour of a world of size n, it does not suffice to just generate the number n itself. It needs to model the behaviour of every single one of those n elements that make up the world. Such a program would need memory of size n just to keep track of one time step. To say that such a world should be given a prior of 1/complexity(n) is to conflate complexity(n) with complexity(world(n)). If AIXI were to consider such a world, it would need to treat that world as having a complexity of n. Otherwise it would be like AIXI measuring the complexity of the size of the computer program that could generate its inputs, rather than measuring the complexity of the program itself.
You may have noticed that the 1/n prior is itself unnormalisable, due to its infinite integral (at both zero and towards infinity in this case). Ignorance priors all have this property of being "improper" priors, which cannot be normalised. They work because once you add a single piece of evidence, the resulting distribution can be normalised. Which raises the question: What is that additional evidence in this case?
Well, in the particular case of a matrix-like simulating world, there's one other piece of knowledge we have: that it's large enough to simulate our universe. Aside from setting a lower bound (which helps with the infinite integral near zero but not out to infinity), you might then ask, given a world of a particular size, what are the chances that it would simulate a universe of specifically the size of ours. The number of alternatively sized universes which they could simulate is proportional to n for sufficiently large n, thus the chance of ours being the size it is becomes 1/n. Combined with the ignorance prior you reach 1/n^2, and now you can actually integrate and normalise.
Thus I would conclude that overall the plausibility of the large but finite world of size n which the mugger claims to be from is proportional to 1/n^2, making the desire to pay lower, not higher, as 'n' grows. Note that either of the two arguments here is sufficient for the mugging to fail.
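To spell out the arithmetic, a rough sketch of the expected-value calculation under this prior:

```latex
% Expected harm from a threat of size n under the prior above (up to a constant):
E[\text{harm}(n)] \propto n \cdot \frac{1}{n^2} = \frac{1}{n} \longrightarrow 0 \text{ as } n \to \infty
```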
An aside on sufficient evidence
One final aside on this case; in Pascal's Muggle: Infinitesimal Priors and Strong Evidence, Eliezer ridicules the idea of assigning priors this low to an event based on the idea that it would imply that compelling evidence to the contrary would be unable to convince you otherwise. However, this is a flat-out misapplication of probability theory.
p(unlikely scenario | extreme evidence) = p(unlikely scenario) * p(extreme evidence | unlikely scenario) / p(extreme evidence)
In order for p(unlikely scenario | extreme evidence) ~= 1 in the face of the prior p(unlikely scenario) ~= 1/3^^^^3, all that's required is that p(extreme evidence | unlikely scenario) ~= 1 and p(extreme evidence) ~= 1/3^^^^3. That is to say, the evidence is essentially guaranteed by the scenario, and the unconditional likelihood of seeing such evidence is itself low. Forget "no amount of evidence"; just one such piece of evidence would be sufficient. All that's required is that the evidence itself is unlikely. And evidence which can only be generated by an unlikely scenario will of course be itself unlikely. As a simple example, imagine I found a method of picking a random integer between 0 and 3^^^^3 (just assume for the sake of argument that such a thing were possible). I would correctly assign a probability of 1/3^^^^3 to seeing the number '7'. But if I performed the method and saw the output 7, I wouldn't "fail to consider this sufficient evidence to convince me" that the result was 7. The arguments relating to the bandwidth of our sensory system fail to account for (inefficient) encodings of that information, which may have some configurations with arbitrarily low likelihood.
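To make the arithmetic concrete, here's a toy version of that Bayes update in Python, with a much smaller N standing in for 3^^^^3 and an assumed (tiny) probability of seeing a '7' for some other reason; the structure of the calculation is the same at any scale.

```python
from fractions import Fraction

N = 10**12                          # stand-in for 3^^^^3
prior = Fraction(1, N)              # p(the draw really was 7)
likelihood = Fraction(999, 1000)    # p(I observe a '7' | the draw was 7)
p_other = Fraction(1, 10**15)       # assumed p(I observe a '7' for some other reason)

p_evidence = prior * likelihood + (1 - prior) * p_other
posterior = prior * likelihood / p_evidence

print(float(posterior))             # ~0.999: one observation overwhelms the 1/N prior
```

As the next paragraph notes, the calculation only comes out this way if the "some other reason" term really is smaller than 1/N; if dreaming or delusion is more probable than that, those hypotheses dominate instead.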
Of course in practice, in these unlikely situations, competing theories that start with "I'm dreaming" or "I'm delusional" may dominate. All scenarios markedly less likely than those have the burden of disproving those possibilities first. But this is not an impossible burden, and is in any case exactly as it should be.
Unbounded resources
I'm going to use access to a machine with unlimited computing resources as my working example here, but I hope that the points translate well enough to other settings. I'm also going to briefly make a distinction between "infinite" and "unbounded": There are infinitely many of something if the cardinality of the set of such things in existence is infinite. There are unboundedly many of something if, for any number 'n', it would be possible to generate 'n' of those things. Unbounded is a weaker requirement, but is sufficient for this discussion. I make this distinction mostly just to explain why I'm using the term at all (since you might otherwise expect "infinite").
In contrast to the finite resources scenario, in the infinite or unbounded resources scenario I think it's quite correct to say that the difficulty of generating a program that would torture n people is in proportion to the complexity, rather than the scale, of 'n'. Given unlimited resources, the only barrier is writing the program itself, which is barely more work than the definition of complexity itself requires.
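To illustrate just how cheap the number itself is, here's a rough Python sketch of Knuth's up-arrow notation; a definition this short already pins down 3^^^^3, even though nothing could ever finish evaluating it.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a (n arrows) b. The whole definition of numbers
    like 3^^^^3 fits in a few lines, which is why complexity(n) can be
    tiny even when n itself is astronomically large."""
    if n == 1:
        return a ** b
    result = 1
    for _ in range(b):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# up_arrow(3, 4, 3) would be 3^^^^3: trivially short to write down,
# impossible to actually evaluate.
```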
However, in this scenario, there's no need for a mugger at all! We've mugged ourselves already with our own moral mathematics. The 3^^^^3 people the mugger wishes to torture are utterly insignificant in the face of the 3^^^^^3 people who we could simulate in paradise, if we outsmart or overpower the mugger and take control of those resources ourselves. Does it sound unlikely we'd be able to overcome them? Of course, but how unlikely? It certainly doesn't scale with the value, so as with the original dilemma just pick a bigger number if you need to (which you don't, I can assure you, it's big enough).
And yet, even that is insufficiently ambitious. I would posit that with unbounded resources available, any course of action we could describe is dominated by considerations of an only slightly more complicated but substantially more important alternative. We're frozen into inaction by the utter futility of anything we're even capable of thinking of. And we don't even need a mugger to trigger this catastrophe. So long as we assign a non-zero probability to such a mugging occurring in the future, we should be worrying about it right now.
The point is that in this situation, just paying the mugger and carrying on cannot be the best course of action, because it's not the right choice if they're lying, and if they're not then it's dominated by other much larger considerations. Thus the mugging still fails, not necessarily because of the implausibility of their threat but because of the utter irrelevance of it in the face of unboundedly more important other considerations.
Bounding the unbounded
Although this is tangential to my main point, I will consider how the concept of unbounded resources could be handled. Even though I've demonstrated that the mugging fails, the larger issue of considering the possibility of unbounded resources still seems a little unresolved. Here are a few options, each of which I take seriously but none of which I'm completely convinced of yet. In some cases I also talk about how this resolution impacts the mugging. I'll add that they are not at all mutually exclusive; they could all be valid.
* Ignore the possibility, at least until we actually have to deal with it, which will most likely be never and in any case gives us time to work out the maths in the meantime. A practical if thoroughly unsatisfying solution. A sub-case of this would be to plan to completely reinvent or even abandon quantitative morality in the face of the collapse of quantitative limits. What we replace it with is hard to say without better understanding the nature of the unlimited resources available.
* Ignore the possibility by symmetry. We know nothing about worlds with unbounded resources, so any action we take is just as likely to hurt as to help our chances of utilising them for unbounded good. The question then is whether a mugger as described would be sufficient to break that symmetry. Personally I don't think so, in the same way that I don't think the religions on earth break the symmetry of what a god might want were one to exist. I see no reason to privilege their hypotheses over the negation of them. Similarly, the threats of a mugger who is clearly psychopathic and in any case has absolutely no need of my money may not break the symmetry on what I might expect to happen if I pay or don't. Essentially, I'm saying I don't trust the mugger any more than I distrust them. Still, even if you accept this claim, it feels a little like dodging the question. It shouldn't be that hard to reformulate the scenario in a way that's sufficient to break the symmetry.
* Assign probability zero to infinite (and unbounded?) hypotheticals. Note that mathematically, something can be "possible" and still have probability 0. One example is the chance of a Real number chosen uniformly at random from (0, 1) being rational (see the short note after this list). This would be the natural extension of the 1/n prior for resources of scale n. While mathematically plausible and philosophically satisfying, I'm willing to be, but not yet quite, convinced this is correct. The trouble I have is that infinite things seem in some ways far less complex than large finite things. Generating an infinite loop is one of the easiest things to program a computer to do. In saying so though, am I making the same mistake I describe above, in conflating complexity(X) with complexity(size(X))? AIXI may consider an unbounded space of programs and unbounded computing resources, but it certainly does not integrate over any programs of themselves infinite length (and indeed would get nowhere if it even tried). Do unbounded resources correspond to a program of infinite length or just a finite program running on unbounded hardware? I'm not yet sure either way.
* Fail to lose sleep over it regardless. Personally I act to optimise my own utility. That utility does honestly consider the utility of others, but it is nonetheless my own. It is also bounded within a time-frame because there's only so happy or sad I can be, and also bounded over time by geometric discounting. Being just my own utility it's not subject to being multiplied by an arbitrary number of people (and no I don't care if they're copies of me either). In being bounded, the harsh reality is that there's only so much I can care about the scale of a tragedy before it all just becomes numbers. So call me evil if you like but either way I'm not motivated to pay, nor, more generally, motivated to worry about the possibility of unbounded resources existing. Of course this doesn't really resolve the mugging itself. You could modify the scenario to replace myself having to pay with instead a small, plausible but entirely moral threat (e.g. "I'll punch that guy in the face"). I would then be motivated to make the correct moral decision regardless of bounds on my utility (though I suppose my motivation to be correct is itself bounded). It makes me wonder actually, nobody wants to pay themselves, but how many people actually would pay in this alternative case of an entirely moral trade off?
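The short note promised in the third option above, on how something can be possible yet have probability zero:

```latex
% X drawn uniformly from (0,1); the rationals are countable, so their Lebesgue measure is 0:
P(X \in \mathbb{Q}) = \lambda\big(\mathbb{Q} \cap (0,1)\big) = 0
```

Yet every individual rational in (0, 1) is a perfectly possible value of X, which is exactly the sense of "possible but probability 0" intended above.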
Conclusion
Each case offers the mugger one of the two things they need. In the finite resources case, the decision to make is real, and not dominated by unavoidable larger considerations. The scenario itself is reasonable and entirely finite.
In the infinite resources case, the plausibility of the mugger's threat is only as low as 1/complexity(n) and thus they are able to create a threat which scales faster than its implausibility.
By not making it entirely clear which of these cases is considered, the original presentation of Pascal's Mugging served to generate a scenario which appeared to have the merits of both cases and the weaknesses of neither. However, by separating these two cases it becomes clear that the mugging fails in both, either because of the implausibility of finite but large resources, or the overwhelming, moral-system destroying power of unbounded resources. Although the unbounded resources problem is still unresolved (to my satisfaction at least), any resolution of it would be very likely to also resolve this case of the mugging (or if not then at least change our thinking about it substantially). Thus, in no case is it correct to pay, at least without the mugger providing unimaginably stronger evidence than is presented.
The collapse of our moral systems in the face of unlimited resources may have been the key point Eliezer was making with Pascal's Mugging, and I certainly haven't contradicted that here. But I have, I hope, made it clear that unbounded resources, not just large numbers, are required to do this, and that the hypothetical muggers are the least of our problems in these scenarios.
Comments
comment by AlexMennen · 2015-10-15T08:19:07.453Z
I will argue that the prior for such a world should be of the order of 1/n or lower
This class of argument has been made before. The standard counterargument is that whatever argument you have for this conclusion, you cannot be 100% certain of its correctness. You should assign some nonzero probability to the hypothesis that the probability does not decrease fast enough for the correct expected utilities to be bounded. Then, taking this uncertainty into account, your expected utilities are unbounded.
The arguments relating to the bandwidth of our sensory system fail to account for (inefficient) encodings of that information which may have some configurations with arbitrarily low likelihood.
There is a positive lower bound to the probability of observing any given data (given a bound on the description length of the data), because you might just be getting random input. Given any observation that could be the result of some 1/3^^^^3 event, it could also just randomly pop into your brain for no reason with probability far greater than that. If you see a mechanism that outputs a random integer from 1 to 3^^^^3, and see that its output was 7, you should be almost 100% confident that there was an error in your senses or your memory or the reasoning that convinced you that the mechanism works as described, etc. (where "etc." means "anything other than that you observed the output of a mechanism that generates a random integer from 1 to 3^^^^3, and it was 7").
The point is that in this situation, just paying the mugger and carrying on cannot be the best course of action, because it's not the right choice if they're lying, and if they're not then it's dominated by other much larger considerations. Thus the mugging still fails, not necessarily because of the implausibility of their threat but because of the utter irrelevance of it in the face of unboundedly more important other considerations.
This totally fails to resolve the paradox. The conclusion that you should drop everything else and go all in on pursuing arbitrarily small probabilities of even more vast outcomes is, if anything, even more counter-intuitive than the conclusion that you should give the mugger $5.
Of course this doesn't really resolve the mugging itself. You could modify the scenario to replace myself having to pay with instead a small, plausible but entirely moral threat (e.g. "I'll punch that guy in the face"). I would then be motivated to make the correct moral decision regardless of bounds on my utility (though I suppose my motivation to be correct is itself bounded).
There is no reason that the "moral component" of your utility function must be linear. In fact, the boundedness of your utility function is the correct solution to Pascal's mugging.
↑ comment by Irgy · 2015-10-15T09:24:43.212Z
This class of argument has been made before. The standard counterargument is that whatever argument you have for this conclusion, you cannot be 100% certain of its correctness. You should assign some nonzero probability to the hypothesis that the probability does not decrease fast enough for the correct expected utilities to be bounded. Then, taking this uncertainty into account, your expected utilities are unbounded.
Standard counterargument it may be but it seems pretty rubbish to me. It seems to have the form "You can't be sure you're right about X and the consequences of being wrong can be arbitrarily bad therefore do Y". This seems like a classic case of a fully general counterargument.
If I assign a non-zero probability to being wrong in my assessment of the likelihood of any possible scenario then I'm utterly unable to normalise my distribution. Thus I see this approach as an utter failure, as far as attempts to account for logical uncertainty go.
Accounting for logical uncertainty is an interesting and to my mind unsolved problem, if we ever do solve it I'll be interested to see how it impacts this scenario.
There is a positive lower bound to the probability of observing any given data...
This is exactly what I was addressing with the discussion of the dreaming/crazy theories, random sensory input is just another variant of that. And as I said there I don't see this as a problem.
The conclusion that you should drop everything else and go all in on pursuing arbitrarily small probabilities of even more vast outcomes is, if anything, even more counter-intuitive than the conclusion that you should give the mugger $5.
Certainly, and I don't honestly reach that conclusion myself. The point I make is that this collapse happens as soon as you as much as consider the possibility of unbounded resources, the mugging is an unnecessary complication. That it might still help highlight this situation is the point I'm directly addressing in the final paragraph.
There is no reason that the "moral component" of your utility function must be linear. In fact, the boundedness of your utility function is the correct solution to Pascal's mugging.
I can see compelling arguments for bounded personal utility, but I can't see compelling argument that moral catastrophes are bounded. So, as much as it would solve the mugging (and particularly an entirely morality-based version of it), I'm not convinced that it does so correctly.
↑ comment by [deleted] · 2015-10-15T15:45:23.679Z
The problem with Pascal's mugging is that it IS a fully general counterargument under classical decision theory. That's why it's a paradox right now. But saying "There's a problem with this paradox - therefore, I'll just ignore the problem" is not a solution.
↑ comment by Irgy · 2015-10-15T22:39:40.626Z
I'm not trying to ignore the problem I'm trying to progress it. If for example I reduce the mugging to just a non-special example of another problem, then I've reduced the number of different problems that need solving by one. Surely that's useful?
↑ comment by Lumifer · 2015-10-15T15:53:52.679Z
it IS a fully general counterargument under classical decision theory
Naive utilitarianism is NOT a "classical decision theory", at least for humans.
↑ comment by [deleted] · 2015-10-15T16:10:17.877Z
I'm not sure why you're trying to attack the language I'm using here. Steelman my argument (remove "classical" if you'd like) and respond to that.
↑ comment by Lumifer · 2015-10-15T16:17:16.388Z
Sure. Pascal's Mugging is not a "fully general counterargument" to anything sensible. It is one of multiple problems which come up when you are trying to shut up and multiply on the basis of a too-simple model which doesn't work in reality outside of toy examples.
Saying "there are multiple problems with (probability x utility) calculations" DOES imply that discarding this approach might be helpful.
↑ comment by AlexMennen · 2015-10-15T19:33:11.686Z
Multiplying probability with utility is central to classical decision theory, and Pascal's mugging is not a problem for it. Pascal's mugging only becomes a problem when you make certain strong assumptions about the shape of the utility function.
↑ comment by [deleted] · 2015-10-15T17:53:03.978Z
"there are multiple problems with (probability x utility) calculations" DOES imply that discarding this approach might be helpful.
Agreed with the caveat - if and only if the alternative approach can mimic most of the benefits that this approach brings.
I'm not aware of any other decision theories that really can come close to rigorously defining decision making, so until those are developed, it makes sense to try and create patches to what we already have.
↑ comment by Lumifer · 2015-10-15T18:51:23.176Z
that really can come close to rigorously defining
You're optimizing for the wrong thing. "Matching reality" is a much more useful criterion than "rigorous".
You can get very rigorous about spherical cows in vacuum.
↑ comment by [deleted] · 2015-10-15T19:11:43.977Z
You're optimizing for the wrong thing. "Matching reality" is a much more useful criterion than "rigorous".
It's easy to match reality when you're non-rigorous. You just describe how you make decisions in plain language, and you have a decision making criterion.
But, when your decisions become very complicated (what startup should I start and why), it turns out that vague explanation isn't much help. This is when you need rigor.
↑ comment by Lumifer · 2015-10-15T19:40:59.348Z
It's easy to match reality when you're non-rigorous
Not if you want to make forecasts (= good decisions for the future).
But, when your decisions become very complicated (what startup should I start and why), it turns out that vague explanation isn't much help. This is when you need rigor.
That's when you need to avoid simplistic models which will lead you astray. Your criterion is still the best forecast. Given the high level of uncertainty and noise I am not at all convinced that the more rigor you can bring, the better.
↑ comment by AlexMennen · 2015-10-15T16:52:47.487Z
Standard counterargument it may be but it seems pretty rubbish to me. It seems to have the form "You can't be sure you're right about X and the consequences of being wrong can be arbitrarily bad therefore do Y". This seems like a classic case of a fully general counterargument.
No, it isn't. It can't be used against agents with bounded utility functions. Agents with utility functions that are unbounded but defined in terms of what the true probability distribution should be, so that they can be proved to converge, are also immune.
If I assign a non-zero probability to being wrong in my assessment of the likelihood of any possible scenario then I'm utterly unable to normalise my distribution. Thus I see this approach as an utter failure, as far as attempts to account for logical uncertainty go.
That is correct. People are in general unable to specify probability distributions over an infinite number of outcomes with asymptotics that correctly reflect their actual state of uncertainty.
This is exactly what I was addressing with the discussion of the dreaming/crazy theories, random sensory input is just another variant of that. And as I said there I don't see this as a problem.
You said:
All scenarios markedly less likely than those have the burden of disproving those possibilities first. But this is not an impossible burden
I was trying to point out that it actually is an impossible burden. The universe does not contain enough room for the amount of information it would take to raise hypotheses with a prior of 1/3^^^^3 to a posterior above that for "I'm seeing random noise".
I can see compelling arguments for bounded personal utility, but I can't see compelling argument that moral catastrophes are bounded. So, as much as it would solve the mugging (and particularly an entirely morality-based version of it), I'm not convinced that it does so correctly.
It's not like there is some True Ethical Utility Function, and your utility function is some combination of that with your personal preferences. The moral component of your utility function just reflects the manner in which you care about other people, and there is no reason it should be linear. Bounded utility functions are the correct solution to Pascal's mugging because using them is the way we came to the conclusion that paying the mugger is the wrong move in the first place. You did not conclude that you shouldn't pay the mugger because of your confidence that the probability that he tells the truth is less than 1/n (n being the scale of the consequences described by the mugger); human brains cannot process probabilities that small when making intuitive judgments, and if you came to doubt your reasoning for the probability decreasing faster than 1/n, I bet you would not say "Oh, in that case I guess I'd just pay the mugger". Instead, your actual reason for confidence that you shouldn't pay the mugger is probably just that you don't think you should let tiny probabilities throw you around, but this is the reasoning of an agent with bounded utility.
↑ comment by Irgy · 2015-10-15T23:30:06.366Z
No, it isn't. It can't be used against agents with bounded utility functions.
Ok fully general counterargument is probably an exaggeration but it does have some similar undesirable properties:
Your argument does not actually address the argument it's countering in any way. If 1/n is the correct prior to assign to this scenario surely that's something we want to know? Surely I'm adding value by showing this?
If your argument is accepted then it makes too broad a class of statements into muggings. In fact I can't see why "arglebargle 3^^^^3 banana" isn't a mugging according to your argument. If I've reduced the risk of factual uncertainty in the original problem to a logical uncertainty, this is progress. If I've shown the mugging is equivalent to gibberish, this is progress.
I do think it raises a relevant point though that until we've resolved how to handle logical uncertainty we can't say we've fully resolved this scenario. But reducing the scenario from a risk of factual uncertainty to the more general problem of logical uncertainty is still worthwhile.
I was trying to point out that it actually is an impossible burden.
Only if you go about it the wrong way. The "my sensory system is receiving random noise" theory does not generally compel us to act in any particular way, so the balance can still be influenced by small probabilities. Maybe you'd be "going along with it" rather than believing it but the result is the same. Don't get me wrong, I think there are modifications to behaviour which should be made in response to dreaming/crazy/random theories, but this is essentially the same unsolved problem of acting with logical uncertainty as discussed above.
In any case all I was trying to do with that section was undermine the ridicule given to assigning suitably low probabilities to things. The presence of alternative theories may affect how we act, and the dominance of them over superexponentially low probabilities may smother the relevance of choices that depend on them, but none of this makes assigning those values incorrect. And I support this by demonstrating that at least in the absence of alternative catch-all theories, by assigning those probabilities you are not making it impossible to believe these things, despite the bandwidth of your sensory system. Which is far from a proof that they're correct in itself, but it does undermine the point being made in the Pascal's Muggle article.
It's not like there is some True Ethical Utility Function, and your utility function is some combination of that with your personal preferences.
Well we have a different take on meta-ethics is all. Personally I think Coherent Extrapolated Volition applied to morality leads to a unique limit, which, while in all likelihood is not just unfindable but also unverifiable, still exists as the "One True Ethical Function" in a philosophical sense.
I agree that the amount to which a person cares about others is and should be bounded. But I separate the scale of a moral tragedy itself from the extent to which I or anyone else is physically capable of caring about it. I think nonlinearly mapping theoretically unbounded moral measurements into my own bounded utility is more correct than making the moral measurements nonlinearly to begin with.
Consider for example the following scenario: 3^^^^3 people are being tortured. You could save 1000 of them by pressing a button, but you'd have to get off a very comfy couch to do it.
With bounded moral values the difference between 3^^^^3 people and 3^^^^3-1000 is necessarily insignificant. But with my approach, I can take the difference between the two values in an unbounded, linear moral space, then map the difference into my utility to make the decision. I don't believe this can be done without having a linear space to work in at some point.
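For concreteness, here's what that insignificance looks like with one illustrative bounded utility function (the exponential form is just a stand-in for the sketch):

```latex
% One illustrative bounded moral utility, with cap U and saturation scale s:
u(x) = U\left(1 - e^{-x/s}\right)
% Difference made by saving 1000 of the N = 3^^^^3 people:
u(N) - u(N - 1000) \approx \frac{1000\,U}{s}\, e^{-N/s} \approx 0 \quad \text{for any remotely sane } s
```

The linear measure, by contrast, keeps the difference at exactly 1000 people, which is what the decision to get off the couch is supposed to respond to.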
Which is the core of my problem with your preferred resolution using bounded moral utility. I agree that bounding moral utility would resolve the paradox but I still don't think you've made a case that it's correct.
↑ comment by AlexMennen · 2015-10-16T07:13:01.552Z
If 1/n is the correct prior to assign to this scenario surely that's something we want to know? Surely I'm adding value by showing this?
Not much, since the correct solution doesn't rely on this anyway. Besides, your attempt to show this was incorrect; perhaps I should have addressed that in my original comment, but I'll do that now:
A critical point here is that the complexity of the world itself of size n is proportional to n, not complexity(n). In order for a computer program to model the behaviour of a world of size n, it does not suffice to just generate the number n itself. It needs to model the behaviour of every single one of those n elements that make up the world. Such a program would need memory of size n just to keep track of one time step.
This argument relies on a misunderstanding of what Kolmogorov complexity is. The complexity of an object is the length of the source code of the shortest program that generates it, not the amount of memory used by that program. There are short programs that use extremely large amounts of memory; hence there are extremely large objects with low complexity.
If your argument is accepted then it makes too broad a class of statements into muggings. In fact I can't see why "arglebargle 3^^^^3 banana" isn't a mugging according to your argument.
Huh?
The "my sensory system is receiving random noise" theory does not generally compel us to act in any particular way, so the balance can still be influenced by small probabilities.
You've changed the claim you're defending without acknowledging that you've done so; earlier you were saying that you actually can receive evidence that could convince you of hypotheses with probability 1/3^^^^3, not just that you could receive evidence that would make you act as if you were convinced. Your new argument is still wrong though. It is plausibly true that the specific hypothesis that all of your observations have been random noise does not offer any action guidance at all. But it is not true for the hypothesis that your observations temporarily lapsed into random noise when you saw what looked like convincing evidence for a 1/3^^^^3-probability event. In this hypothesis, you still have a significant amount of information that you can use to compare possible actions, and the probability of that hypothesis is still a hell of a lot more than 1/3^^^^3. Hypotheses with a prior of 1/3^^^^3 will never dominate your decision-making procedure.
In any case all I was trying to do with that section was undermine the ridicule given to assigning suitably low probabilities to things.
Your undermining attempt failed because the ridicule is warranted.
I agree that the amount to which a person cares about others is and should be bounded.
I'm confused because it sounds like you're conceding here that bounded utility is correct, but elsewhere you say otherwise.
I think nonlinearly mapping theoretically unbounded moral measurements into my own bounded utility is more correct than making the moral measurements nonlinearly to begin with.
Fine so far, but...
With bounded moral values the difference between 3^^^^3 people and 3^^^^3-1000 is necessarily insignificant. But with my approach, I can take the difference between the two values in an unbounded, linear moral space, then map the difference into my utility to make the decision. I don't believe this can be done without having a linear space to work in at some point.
No, that does not give you a well-defined utility function. You can see this if you try to use it to compare three or more different outcomes.
↑ comment by Irgy · 2015-10-16T10:53:43.638Z
This argument relies on a misunderstanding of what Kolmogorov complexity is. The complexity of an object is the length of the source code of the shortest program that generates it, not the amount of memory used by that program.
I know that.
The point about memory is the memory required to store the program data, not the memory required to run the program. The program data is part of the program, thus part of the complexity. A mistake I maybe made though was to talk about the current state rather than the initial conditions, since the initial conditions are what give the complexity of the program (though initialising with the current state is possible and equivalent). In my defence though talking about the memory was only meant to be illustrative.
To elaborate, you could simulate the laws of physics with a relatively small program, but you could not simulate the universe itself without a program as complex as the universe. You might think of it as a small "laws of physics" simulator and a large input file, but the complexity measure must include this input file. If it did not, the program would not be deterministically linked to its output.
Huh?
Ok let me spell it out.
you cannot be 100% certain of its correctness. You should assign some nonzero probability to the hypothesis that the probability does not decrease fast enough for the correct expected utilities to be bounded. Then, taking this uncertainty into account, your expected utilities are unbounded.
A person comes up to you and says "arglebargle 3^^^^3 banana". This appears to you to be gibberish. However, you cannot be 100% certain of the correctness of this assessment. It could be that they're trying to perform Pascal's Mugging, but your ears are blocked and you didn't hear them right. You should assign some nonzero probability to this hypothesis, and that value will be greater than 1/3^^^^3. Thus the risk is sufficient that you should pay them $5 just in case.
This is what I mean by calling your argument too general. Now obviously neither you nor I would consider "arglebargle 3^^^^3 banana" a mugging, but I do not see a meaningful difference between your counterargument and the argument I present above.
You've changed the claim you're defending
No, I'm just making a multi-faceted point. Maybe if I break it up it would help:
- Given two alternatives A and B, where A is initially considered 3^^^^3 times less likely than B, it is possible, even with limited sensory bandwidth, to receive evidence to convince you that A is more likely than B. This is a point which I believe was not considered correctly in Pascal's Muggle.
- Separate to this are other catch-all theories (C, D, etc.) which are impossible to disprove and potentially much more likely than 3^^^^3.
- However, catch-all theories may not influence a decision relating to A and B because they do not generally reward the corresponding actions differently.
- When they do influence a decision, it is most likely for the better. The best way to handle this is in my opinion still unsolved.
Pascal's Muggle was a scenario where the protagonist belligerently stuck to B because they felt it was impossible to generate sufficient evidence to support A. There's a lot of other discussion in that post, and it's quite sensible on the whole, but this is the aspect I was focusing on.
I'm confused because it sounds like you're conceding here that bounded utility is correct, but elsewhere you say otherwise.
I'm saying bounded utility is correct for an individual or agent. But I'm also saying bounds are not justified for the aggregation of the desires of an unbounded number of agents. These statements are not inconsistent.
No, that does not give you a well-defined utility function. You can see this if you try to use it to compare three or more different outcomes.
Well ok you're right here, the case of three or more outcomes did make me rethink how I consider this problem.
It actually highlights that the impact of morals on my personal utility is indirect. I couldn't express my utility as some kind of weighted sum of personal and (nonlinearly mapped) moral outcomes, since if I did I'd have the same problem getting off the couch as I argued you would. I think in this case it's the sense of having saved the 1000 people that I would value, which only exists by comparison with the known alternative. Adding more options to the picture would definitely complicate the problem and unless I found a shortcut I might honestly be stuck evaluating the whole combinatorial explosion of pairs of options.
But, exploring my own utility aside, the value of treating morality linearly is still there. If I bounded the morals themselves I would never act because I would honestly think there was as good as no difference between the outcomes at all, even when compared directly. Whereas by treating morals linearly I can at least pin that sense of satisfaction in having saved them on a real quantitative difference.
↑ comment by AlexMennen · 2015-10-17T00:16:34.360Z
The program data is part of the program, thus part of the complexity.
If by program data you mean the input to the program, then that is correct, but there are large objects computed by short programs with short input or even no input, so your overall argument is still incorrect.
... "arglebargle 3^^^^3 banana" ...
Ok yes, unbounded utility functions have nonconvergence all over the place unless they are carefully constructed not to. This does not require anyone spouting some piece of gibberish at you, or even anything out of the ordinary happening.
Given two alternatives A and B, where A is initially considered 3^^^^3 times less likely than B, it is possible, even with limited sensory bandwidth, to receive evidence to convince you that A is more likely than B. This is a point which I believe was not considered correctly in Pascal's Muggle.
I already explained why this is incorrect, and you responded by defending your separate point about action guidance while appearing to believe that you had made a rebuttal.
However, catch-all theories may not influence a decision relating to A and B because they do not generally reward the corresponding actions differently. When they do influence a decision, it is most likely for the better. The best way to handle this is in my opinion still unsolved.
As I said, there will be hypotheses with priors much higher than 1/3^^^^3 that can explain whatever observations you see and do reward your possible actions differently, and then the hypotheses with probability less than 1/3^^^^3 will not contribute anything non-negligible to your expected utility calculations.
I'm saying bounded utility is correct for an individual or agent. But I'm also saying bounds are not justified for the aggregation of the desires of an unbounded number of agents. These statements are not inconsistent.
If you're saying that the extent to which an individual cares about the desires of an unbounded number of agents is unbounded, then you are contradicting yourself. If you aren't saying that, then I don't see why you wouldn't accept boundedness of your utility function as a solution to Pascal's mugging.
↑ comment by Irgy · 2015-10-17T22:48:53.155Z
there are large objects computed by short programs with short input or even no input, so your overall argument is still incorrect.
I have to say, this caused me a fair bit of thought.
Firstly, I just want to confirm that you agree a universe as we know it has complexity of the order of its size. I agree that an equivalently "large" universe with low complexity could be imagined, but its laws would have to be quite different to ours. Such a universe, while large, would be locked in symmetry to preserve its low complexity.
Just an aside on randomness: you might consider a relatively small program generating even this universe, by simply simulating the laws of physics, which include a lot of random events, quite possibly including even the Big Bang itself. However I would argue that the definition of complexity does not allow for random calculations. To make such calculations, a pseudo-random input is required, the length of which is added to the complexity. AIXI would certainly not be able to function otherwise.
The mugger requires more than just a sufficiently large universe. They require a universe which can simulate 3^^^^3 people. A low complexity universe might be able to be large by some measures, but because it is locked in a low complexity symmetry, it cannot be used to simulate 3^^^^3 unique people. For example, the memory required (remember I mean the memory within the mugger's universe itself, not the memory used by the hypothetical program used to evaluate that universe's complexity) would need to be of the order of 3^^^^3. While the universe may have 3^^^^3 particles, if those particles are locked in a low-complexity symmetry then they cannot possibly hold 3^^^^3 bits of data.
In short, a machine of complexity of 3^^^^3 is fundamentally required to simulate 3^^^^3 different people. My error was to argue about the complexity of the mugger's universe, when what matters is the complexity of the mugger's computing resources.
I already explained why this is incorrect, and you responded by defending your separate point about action guidance while appearing to believe that you had made a rebuttal.
No, all of your arguments relate to random sensory inputs, which are alternative theories 'C' not the 'A' or 'B' that I referred to. To formalise:
I claim there exist theories A and B, along with evidence E, such that: p(B) > 3^^^^3 * p(A); p(A|E) > p(B|E); and complexity(E) << 3^^^^3 (or, more to the point, E is within our sensory bandwidth).
You have only demonstrated that there exists theory C (random input) such that C != B for any B satisfying the above, which I also tentatively agree with.
So the reason I switch to a separate point is because I don't consider my original statement disproven, but I accept that theories like C may limit the relevance of it. Thus I argue about the relevance of it, with this business about whether it affects your action or not. To be clear, I do agree (and I have said this) that C-like theories can influence action (as you argue). I am trying to argue though that in many cases they do not. It's hard to resolve since we don't actually have a specific case we're considering here, this whole issue is off on a tangent from the mugging itself.
I admit that the text of mine you quoted implies I meant it for any two theories A and B, which would be wrong. What I really meant was that there exist such (pairs of) theories. The cases where it can be true need to be very limited anyway because most theories do not admit evidence E as described, since it requires this extremely inefficiently encoded input.
If you're saying that the extent to which an individual cares about the desires of an unbounded number of agents is unbounded, then you are contradicting yourself. If you aren't saying that, then I don't see why you wouldn't accept boundedness of your utility function as a solution to Pascal's mugging.
I'm not saying the first thing. I do accept bounded utility as a solution to the mugging for me (or any other agent) as an individual, as I said in the original post. If I was mugged I would not pay for this reason.
However, I am motivated (by a bounded amount) to make moral decisions correctly, especially when they don't otherwise impact me directly. Thus if you modify the mugging to be an entirely moral question (i.e. someone else is paying), I am motivated to answer it correctly. To answer it correctly, I need to consider moral calculations, which I still believe to be unbounded. So for me there is still a problem to be solved here.
↑ comment by AlexMennen · 2015-10-18T07:29:19.672Z
Firstly, I just want to confirm that you agree a universe as we know it has complexity of the order of its size. I agree that an equivalently "large" universe with low complexity could be imagined, but its laws would have to be quite different to ours. Such a universe, while large, would be locked in symmetry to preserve its low complexity.
No. Low complexity is not the same thing as symmetry. For example, you can write a short program to compute the first 3^^^^3 digits of pi. But it is widely believed that the first 3^^^^3 digits of pi have almost no symmetry.
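For concreteness, a complete program that prints any requested number of digits of pi via Machin's formula fits comfortably in a dozen lines (a rough sketch, but a runnable one), so "the first k digits of pi" is a low-complexity object no matter how large k is, patternless though those digits look:

```python
def arctan_recip(x, scale):
    """arctan(1/x) * scale, computed with integer arithmetic (Taylor series)."""
    power = scale // x
    total = power
    k = 0
    while power > 0:
        k += 1
        power //= x * x
        term = power // (2 * k + 1)
        total += -term if k % 2 else term
    return total

def pi_digits(digits):
    """floor(pi * 10**digits): a 3 followed by the first `digits` decimals of pi.
    Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    scale = 10 ** (digits + 10)  # 10 guard digits
    pi_scaled = 16 * arctan_recip(5, scale) - 4 * arctan_recip(239, scale)
    return pi_scaled // 10 ** 10

print(pi_digits(50))  # prints 3 followed by the first 50 decimal digits of pi
```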
I would argue that the definition of complexity does not allow for random calculations. To make such calculations, a pseudo random input is required, the length of which is added to the complexity.
Mostly correct. However, given a low-complexity program that uses a large random input, you can make a low-complexity program that simulates it by iterating through all possible inputs, and running the program on all of them. It is only when you try to run it on one particular high-complexity input without also running it on the others that it requires high complexity. Thus the lack of ability for a low-complexity program to use randomness does not prevent it from producing objects in its output that look like they were generated using randomness.
No, all of your arguments relate to random sensory inputs, which are alternative theories 'C' not the 'A' or 'B' that I referred to. To formalise: I claim there exist theories A and B, along with evidence E, such that: p(B) > 3^^^^3 * p(A); p(A|E) > p(B|E); and complexity(E) << 3^^^^3 (or, more to the point, E is within our sensory bandwidth).
Oh, I see. This claim is correct. However, it does not seem that important to me, since p(A|E) will still be negligible.
To be clear, I do agree (and I have said this) that C-like theories can influence action (as you argue). I am trying to argue though that in many cases they do not.
It would be quite surprising if none of the "C-like" theories could influence action, given that there are so many of them (the only requirement to be "C-like" is that it is impossible in practice to convince you that C is less likely than A, which is not a strong condition, since the prior for A is < 1/3^^^^3).
However, I am motivated (by a bounded amount) to make moral decisions correctly, especially when they don't otherwise impact me directly. Thus if you modify the mugging to be an entirely moral question (i.e. someone else is paying), I am motivated to answer it correctly. To answer it correctly, I need to consider moral calculations, which I still believe to be unbounded. So for me there is still a problem to be solved here.
Ah, I think you're actually right that utility function boundedness is not a solution here (I actually still think that the utility function should be bounded, but that this is not relevant under certain conditions that you may be pointing at). Here's my attempt at an analysis:
Assume for simplicity that there exist 3^^^^3 people (this seems okay because the ability of the mugger to affect them is much more implausible than their existence). The probability that there exists any agent which can affect on the order of 3^^^^3 people, and uses this ability to do bizarre Pascal's mugging-like threats, is small (let's say 10^-20). The probability that a random person pretends to be Pascal's mugger is also small, but not as small (let's say 10^-6). Thus if people pay Pascal's mugger each time, this results in an expected 3^^^^3/(10^6) people losing $5 each, and if people do not pay, the expected number of people affected is 3^^^^3/(10^20), and there probably isn't anything you can do to someone that matters 10^14 times as much as the marginal value of $5 (10^14 is actually pretty big). Thus it is a better policy to not pay. This did not directly use any negligible probabilities (nothing like 1/3^^^^3, I mean). However, this is arguably suspect because it implicitly assigns a probability of less than 1/3^^^^3 to the hypothesis that there exists a unique Pascal's mugger who actually does have the powers he claims and that I am the person he approaches with the dilemma. I'll have to think about this more.
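Spelling out that comparison with the same illustrative figures:

```latex
% Expected number of people harmed under each policy (illustrative figures from above):
\text{always pay:} \quad 3\uparrow\uparrow\uparrow\uparrow 3 \times 10^{-6} \ \text{people each lose \$5}
\qquad
\text{never pay:} \quad 3\uparrow\uparrow\uparrow\uparrow 3 \times 10^{-20} \ \text{people face a genuine mugger}
```

The second group is smaller by a factor of 10^14, so refusing is the better policy unless what a genuine mugger would do to each victim is worth more than 10^14 times the $5.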
↑ comment by Irgy · 2015-10-18T23:58:51.224Z
(meta) Well, I'm quite relieved because I think we're actually converging rather than diverging finally.
No. Low complexity is not the same thing as symmetry.
Yes sorry symmetry was just how I pictured it in my head, but it's not the right word. My point was that the particles aren't acting independently, they're constrained.
Mostly correct. However, given a low-complexity program that uses a large random input, you can make a low-complexity program that simulates it by iterating through all possible inputs, and running the program on all of them.
By the same token you can write a low complexity program to iteratively generate every number. That doesn't mean all numbers have low complexity. It needs to be the unique output of the program. If you tried to generate every combination then pick one out as the unique output, the picking-one-out step would require high complexity.
I think as a result of this whole discussion I can simplify my entire "finite resources" section to this one statement, which I might even edit in to the original post (though at this stage I don't think many more people are ever likely to read it):
"It is not possible to simulate n humans without resources of complexity at least n."
Everything else can be seen as simply serving to illustrate the difference between a complexity of n, and a complexity of complexity(n).
It would be quite surprising if none of the "C-like" theories could influence action, given that there are so many of them
It's easy to give a theory a posterior probability of less than 1/3^^^^3, by giving it zero. Any theory that's actually inconsistent with the evidence is simply disproven. What's left are theories which either accept the observed event, i.e. those which have priors < 1/3^^^^3 (e.g. that the number chosen was 7 in my example), or theories which somehow reject either the observation itself or the logic tying the whole thing together.
It's my view that theories which reject either observation or logic don't motivate action because they give you nothing to go on. There are many of them, but that's part of the problem since they include "the world is like X and you've failed to observe it correctly" for every X, making it difficult to break the symmetry.
I'm not completely convinced there can't be alternative theories which don't fall into the two categories above (either disproven or unhelpful), but they're specific to the examples so it's hard to argue about them in general terms. In some ways it doesn't matter if you're right, even if there was always compelling arguments not to act on a belief which had a prior of less than 1/3^^^^3, Pascal's Muggle could give those arguments and not look foolish by refusing to shift his beliefs in the face of strong evidence. All I was originally trying to say was that it isn't wrong to assign priors that low to something in the first place. Unless you disagree with that then we're ultimately arguing over nothing here.
Here's my attempt at an analysis
This solution seems to work as stated, but I think the dilemma itself can dodge this solution by constructing itself in a way that forces the population of people-to-be-tortured to be separate from the population of people-to-be-mugged. In that case there aren't on the order of 3^^^^3 people paying the $5.
(meta again) I have to admit it's ironic that this whole original post stemmed from an argument with someone else (in a post about a median utility based decision theory), which was triggered by me claiming Pascal's Mugging wasn't a problem that needed solving (at least certainly not by said median utility based decision theory). By the end of that I became convinced that the problem wasn't considered solved and my ideas on it would be considered valuable. I've then spent most of my time here arguing with someone who doesn't consider it unsolved! Maybe I could have saved myself a lot of karma by just introducing the two of you instead.
↑ comment by AlexMennen · 2015-10-20T21:38:59.265Z
"It is not possible to simulate n humans without resources of complexity at least n."
Still disagree. As I pointed out, it is possible to for a short program to generate outputs with a very large number of complex components.
It's my view that theories which reject either observation or logic don't motivate action because they give you nothing to go on. There are many of them, but that's part of the problem since they include "the world is like X and you've failed to observe it correctly" for every X, making it difficult to break the symmetry.
Given only partial failure of observation or logic (where most of your observations and deductions are still correct), you still have something to go on, so you shouldn't have symmetry there. For everything to cancel so that your 1/3^^^^3-probability hypothesis dominates your decision-making, it would require a remarkably precise symmetry in everything else.
Maybe I could have saved myself a lot of karma by just introducing the two of you instead.
I have also argued against the median utility maximization proposal already, actually.
↑ comment by hairyfigment · 2015-10-16T16:46:15.157Z
Either I've misunderstood the OP completely, or the prior is based on an explicit assumption of finite resources - an assumption which would ordinarily have a probability far less than 1 - (1/3^^^^3), though in everyday circumstances we can pretty much call it 'certainty'. So no, the counterargument is absolutely valid.
Also, as you should know if you read the Muggle post, Eliezer most certainly did mean Pascal's Mugging to draw attention to the failure of expected utility to converge. So you should be clearer at the start about what you think your argument does. What you have now almost seems like a quick disclaimer added when you realized the OP had failed.
(Edited to fix typo.)
↑ comment by Irgy · 2015-10-16T21:50:54.821Z
Sorry, but I don't know which section of my reply this is addressing and I can't make complete sense of it.
an explicit assumption of finite resources - an assumption which would ordinarily have a probability far less than 1 - (1/3^^^^3)
The OP is broken into two main sections, one assuming finite resources and one assuming infinite.
Our universe has finite resources, why would an assumption of finite resources in an alternative universe be vanishingly unlikely? Personally I would expect finite resources with probability ~=1. I'm not including time as a "resource" here by the way, because infinite future time can be dealt with by geometric discounting and so isn't interesting.
What you have now almost seems like a quick disclaimer added when you realized the OP had failed.
It would especially help to know which quote you are referring to here.
Overall I endeavoured to show that the mugging fails in the finite case, and is nothing particularly special in the infinite case. The mugging as I see it is intended as a demonstration that large, low complexity numbers are a problem. I argue that infinite resources are a problem, but large, low complexity numbers on their own are not.
I still don't consider my arguments to have failed (though it's becoming clear that at least my presentation of them has since no-one seems to have appreciated it), I do disclaim that the mugging still raises the question of infinite resources, but reducing it to just that issue is not a failure.
I also remain firmly convinced that expected utilities (both personal and moral) can and should converge, it's just that the correct means of dealing with infinity needs to be applied, and I leave a few options open in that regard.
comment by Slider · 2015-10-17T17:12:34.538Z
The case for the infinite rests on the cost being only finite while the resources allegedly used to produce it are infinite. What if the claim is straight up infinite?
↑ comment by Irgy · 2015-10-17T22:05:36.116Z
Well, you'd need a method of handling infinite values in your calculations. Some methods exist, such as taking limits of finite cases (though much care needs to be taken), using a number system like the Hyperreals or the Surreals if appropriate, or comparing infinite cardinals; it would depend a little on the details of how such an infinite threat was made plausible. I think in most cases my argument about the threat being dominated by other factors would not hold in this case.
While my point about specific other actions dominating may not hold in this case, I think the overall point that infinite resources cause problems far more fundamental than the mugging is if anything strengthened by your example. As is the general point that large numbers on their own are not the problem.