[Paper] On the 'Simulation Argument' and Selective Scepticism
post by Pablo (Pablo_Stafforini) · 2013-05-18T18:31:10.901Z · LW · GW · Legacy · 57 comments
Jonathan Birch recently published an interesting critique of Bostrom's simulation argument. Here's the abstract:
Nick Bostrom’s ‘Simulation Argument’ purports to show that, unless we are confident that advanced ‘posthuman’ civilizations are either extremely rare or extremely rarely interested in running simulations of their own ancestors, we should assign significant credence to the hypothesis that we are simulated. I argue that Bostrom does not succeed in grounding this constraint on credence. I first show that the Simulation Argument requires a curious form of selective scepticism, for it presupposes that we possess good evidence for claims about the physical limits of computation and yet lack good evidence for claims about our own physical constitution. I then show that two ways of modifying the argument so as to remove the need for this presupposition fail to preserve the original conclusion. Finally, I argue that, while there are unusual circumstances in which Bostrom’s selective scepticism might be reasonable, we do not currently find ourselves in such circumstances. There is no good reason to uphold the selective scepticism the Simulation Argument presupposes. There is thus no good reason to believe its conclusion.
The paper is behind a paywall, but I have uploaded it to my shared Dropbox folder, here.
EDIT: I emailed the author and am glad to see that he's decided to participate in the discussion below.
57 comments
comment by JonathanBirch · 2013-05-19T20:51:57.102Z · LW(p) · GW(p)
Thanks, everyone, for your comments on my paper. It’s great to see that it is generating discussion. I think I ought to take this opportunity to give a brief explanation of the argument I make in the paper, for the benefit of those who haven’t read it.
The basic argument goes like this. In the first section, I point out that the ‘Simulation Argument’ invokes (at different stages) two assumptions that I call Good Evidence (GE) and Impoverished Evidence (IE). GE is the assumption that I possess good evidence regarding the true physical limits of computation. IE is the assumption that my current evidence does not support any empirical claims non-neutral with respect to the hypothesis (SIM) that I am simulated—for example, the empirical claim that I possess two physically real human hands.
Although GE and IE may look in tension with one another, they are not necessarily incompatible. We can generate a genuine incompatibility, however, by introducing a third claim, Parity of Evidence (PE), stating that my epistemic access to the facts about my own physical constitution is at least as good as my epistemic access to the facts about the true physical limits of computation. Since GE, IE and PE are jointly incompatible, at least one of them must be false.
My own view (and a common view, I imagine) is that IE is false, while GE and PE are true. But rejecting IE would fatally compromise the Simulation Argument. So I spend most of the paper considering the two alternatives open to Bostrom: i.e., rejecting GE or rejecting PE. I argue that, if Bostrom rejects GE, the Simulation Argument still fails. I then argue that, if he rejects PE, the Simulation Argument succeeds, but it’s pretty hard to see how PE could be false. So neither of these alternatives is particularly promising.
One common response I’ve encountered focusses on GE, and asks: why does Bostrom actually need GE? Surely all he really needs is the conditional assumption that, if my evidence is veridical, then GE is true. This conditional assumption allows him to say that, if my evidence is veridical, then the Simulation Argument goes through in its original form; whereas if my evidence is not veridical because I’m simulated, then I’m simulated—so we just end up at the same conclusion by a different route.
This is roughly the line of response pressed here by Benja and Eliezer Yudkowsky. It’s a very reasonable response to my argument, but I don’t think it works. The quick explanation is that it’s just not true that, conditional on my evidence being veridical, the Simulation Argument goes through in its original form. This is essentially because conditionalizing on my evidence being veridical makes SIM a lot less likely than it otherwise would be, and this vitiates the indifference-based reasoning on which the Simulation Argument is based. But Benja is right to press me on the formal details here, so I’ll reply to his objection in a separate comment.
↑ comment by kybernetikos · 2014-06-19T00:02:28.652Z · LW(p) · GW(p)
It seems as if your argument rests on the assertion that my access to facts about my own physical constitution is at least as good as my access to facts about the limits of computation/simulation. You say the 'physical limits', but I'm not sure why 'physical' in my universe is particularly relevant: what we care about is whether it's reasonable for there to be many simulations of someone like me over time or not.
I don't think this assertion is correct. I can make a statement about the limits of computation/simulation, namely that there is at least enough simulation power in the universe to simulate me and everything I am aware of, and that statement is true whether I am in a simulation or in a top-level universe, or even whether I believe in matter and physics at all.
I believe that I have better evidence for this assertion, that the top-level universe contains at least enough simulation power to simulate someone like myself and everything of which they are aware, than for the assertion that I have physical hands.
Have I misunderstood the argument, or do you disagree that I have better evidence for a minimum bound on simulation power than for any specific physical attribute?
comment by nigerweiss · 2013-05-18T18:57:49.778Z · LW(p) · GW(p)
I think a slightly sturdier argument is that we live in an unbelievably computationally expensive universe, and we really don't need to. We could easily be supplied with a far, far grainier simulation and never know the difference. If you're interested in humans, you'd certainly take running many orders of magnitude more simulations, over running a single, imperceptibly more accurate simulation, far slower.
There are two obvious answers to this criticism: the first is to raise the possibility that the top level universe has so much computing power that they simply don't care. However, if we're imagining a top level universe so vastly different from our own, the anthropic argument behind the Bostrom hypothesis sort of falls apart. We need to start looking at confidence distributions over simulating universes, and I don't know of a good way to do that.
The other answer is that we are living in a much grainier simulation, and either there are super-intelligent demons flitting around between ticks of the world clock, falsifying the results of physics experiments and making smoke detectors work, or that there is a global conspiracy of some kind, orchestrated by the simulators, to which most of science is party, to convince the bulk of the population that we are living in a more computationally expensive universe. From that perspective, the Simulation Argument starts to look more like some hybrid of solipsism and a conspiracy theory, and seems substantially less convincing.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-19T08:06:46.358Z · LW(p) · GW(p)
It would be trivial for an SI to run a grainy simulation that was only computed out in greater detail when high-level variables of interest depended on it. Most sophisticated human simulations already try to work like this; e.g., particle filters in robotics, or Metropolis light transport in ray tracing. No superintelligence would even be required, but in this case it is quite probable on priors as well, and if you were inside a superintelligent version you would never, ever notice the difference.
It's clear that we're not living in a set of physical laws designed for cheapest computation of intelligent beings, i.e., we are inside an apparent physics (real or simulated) that was chosen on other grounds than making intelligent beings cheap to simulate (if physics is real, then this follows immediately). But we could still, quite easily, be cheap simulations within a fixed choice of physics. E.g., the simulators grew up in a quantum relativistic universe, and now they're much more cheaply simulating other beings within an apparently quantum relativistic universe, using sophisticated approximations that change the level of detail when high-level variables depend on it (so you see the right results in particle accelerators) and use cached statistical outcomes for proteins folding instead of recomputing the underlying quantum potential energy surface every time, or even for whole cells when the cells are mostly behaving as a statistical aggregate, etc. This isn't a conspiracy theory, it's a mildly-more-sophisticated version of what sophisticated simulation algorithms try to do right now - expend computational power where it's most informative.
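To make the level-of-detail idea concrete, here is a minimal toy sketch (not from the original comment; the Region class and the cost numbers are invented for illustration) of a simulator that runs cheap cached statistics by default and switches to expensive fine-grained dynamics only where detail is detectable:

```python
# Toy sketch of lazy level-of-detail simulation: refinement is triggered
# only where high-level variables of interest depend on fine detail.

class Region:
    def __init__(self, name):
        self.name = name
        self.observed_closely = False  # e.g. a particle accelerator runs here

    def step(self):
        if self.observed_closely:
            # expensive detailed physics, computed only where it matters
            return {"state": "fine-grained", "cost": 10**6}
        # cheap cached statistics, e.g. stored protein-folding outcomes
        return {"state": "statistical aggregate", "cost": 1}

regions = [Region(f"patch-{i}") for i in range(1000)]
regions[42].observed_closely = True  # refine only the accelerator's patch

total_cost = sum(region.step()["cost"] for region in regions)
print(total_cost)  # about 10**6 + 999, versus 10**9 if everything were detailed
```

The design point is the one the comment makes: expend computational power where it is most informative, and fall back on aggregates everywhere else.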
↑ comment by nigerweiss · 2013-05-19T19:17:53.098Z · LW(p) · GW(p)
Unless P=NP, I don't think it's obvious that such a simulation could be built to be perfectly (to the limits of human science) indistinguishable from the original system being simulated. There are a lot of results which are easy to verify but arbitrarily hard to compute, and we encounter plenty of them in nature and physics. I suppose the simulators could be futzing with our brains to make us think we were verifying incorrect results, but now we're alarmingly close to solipsism again.
I guess one way to test this hypothesis would be to try to construct a system with easy-to-verify but arbitrarily-hard-to-compute behavior ("Project: Piss Off God"), and then scrupulously observe its behavior. Then we could keep making it more expensive until we got to a system that really shouldn't be practically computable in our universe. If nothing interesting happens, then we have evidence that either we aren't in a simulation, or P=NP.
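A toy version of that test, assuming (as is standard but unproven) that factoring a semiprime is hard while verifying a claimed factorization is easy; the helper names here are invented for illustration:

```python
import sympy

def make_challenge(bits=24):
    # Generating a semiprime is cheap; factoring it is believed to get
    # arbitrarily hard as `bits` grows (24 bits keeps this demo fast).
    p = sympy.randprime(2**(bits - 1), 2**bits)
    q = p
    while q == p:
        q = sympy.randprime(2**(bits - 1), 2**bits)
    return p * q  # publish n; discard p and q

def verify(n, p, q):
    # Checking a claimed factorization is polynomial-time, however large n is.
    return 1 < p < n and 1 < q < n and p * q == n

n = make_challenge()
p, q = list(sympy.factorint(n))  # stand-in for "observe the physical system"
assert verify(n, p, q)
```

The physical experiment would replace `sympy.factorint` with a system whose observed behaviour encodes the hard-to-compute answer, then scale the instance size past any plausible simulator budget and watch for anomalies.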
↑ comment by Luke_A_Somers · 2013-05-20T00:23:12.514Z · LW(p) · GW(p)
...or the simulating entity has mindbogglingly large amounts of computational power. But yes, it would rule out broad classes of simulating agents.
↑ comment by Benya (Benja) · 2013-05-19T08:45:22.357Z · LW(p) · GW(p)
I think it's correct that this makes the simulation argument go through, but I don't believe the "trivial". As far as I can see, you need the simulation code to literally keep track of "will humans notice this?" -- my intuition is that this would require AGI-grade code (without that, I expect you would either have noticeable failures, or you would have something so conservative about its decisions of what not to simulate that it ends up simulating the entire atmosphere on a quantum level, because when and where hurricanes occur influences the variables it's interested in). I suppose you could call this a squabble over terminology, but AGI-grade code is above my threshold for "trivial".
[ETA: Sorry, you did say "for a superintelligence" -- I guess I need to reverse my squabble over words.]
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-19T10:55:11.122Z · LW(p) · GW(p)
As far as I can see, you need the simulation code to literally keep track of "will humans notice this?"
Not necessarily - when you build a particle accelerator you're setting up lots of matter to depend on the exact details of small amounts of matter, which might be detectable on a much more automatic level. But in any case, most plausible simulators have AGI-grade code anyway.
↑ comment by Benya (Benja) · 2013-05-19T11:17:52.184Z · LW(p) · GW(p)
Not necessarily - when you build a particle accelerator you're setting up lots of matter to depend on the exact details of small amounts of matter, which might be detectable on a much more automatic level.
Ok; my point was that, due to butterfly effects, it seems likely that this is also true for the weather or some other natural process, but if there is a relatively simple way to calculate a well-calibrated probability distribution for whether any particular subatomic interaction will influence large amounts of matter, that should probably do the trick. (This works whether or not this distribution can actually detect the particular interactions that will influence the weather, as long as it can reliably detect the particle accelerator ones.)
But in any case, most plausible simulators have AGI-grade code anyway.
Fair enough, I think. Also I just noticed that you actually said "trivial for a SI", which negates my terminological squabble -- argh, sorry. ... OK, comment retracted.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-19T12:21:26.322Z · LW(p) · GW(p)
my point was that, due to butterfly effects, it seems likely that this is also true for the weather or some other natural process
Hm. True. I still feel like there ought to be some simple sense in which butterfly effects don't render a well-calibrated statistical distribution for the weather poorly calibrated, or something along those lines - maybe, butterfly effects don't correlate with utility in weather, or some other sense of low information value - but that does amp up the intelligence level required.
I later said "No SI required" so your retraction may be premature. :)
↑ comment by NancyLebovitz · 2013-05-18T19:49:22.207Z · LW(p) · GW(p)
Another possibility is that whoever is running the simulation is both computationally very rich and not especially interested in humans, they're interested in the sub-atomic flux or something. We're just a side-effect.
↑ comment by nigerweiss · 2013-05-18T19:59:31.502Z · LW(p) · GW(p)
In that case, you've lost the anthropic argument entirely, and whether or not we're in a simulation depends on your probability distributions over possible simulating agents, which is... weird.
↑ comment by NancyLebovitz · 2013-05-18T20:27:27.351Z · LW(p) · GW(p)
How did I lose the anthropic argument? We're still only going to know about the sort of universe we're living in.
↑ comment by nigerweiss · 2013-05-18T21:05:18.388Z · LW(p) · GW(p)
The original form of the Bostrom thesis is that, because we know that our descendants will probably be interested in running ancestor simulations, we can predict that, eventually, a very large number of these simulations will exist. Thus, we are more likely to be living in an ancestor simulation than in the actual, authentic history that they're based on.
If we take our simulators to be incomprehensible, computationally-rich aliens, then that argument is gone completely. We have no reason to believe they'd run many simulations that look like our universe, nor do we have a reason to believe that they exist at all. In short, the crux of the Bostrom argument is gone.
↑ comment by NancyLebovitz · 2013-05-18T21:59:24.538Z · LW(p) · GW(p)
Thanks for the reminder.
I can see a case that we're more likely to be living in an ancestor simulation (probably not very accurate) than to be actual ancestors, but I believe strongly that the vast majority of simulations will not be ancestor simulations, and therefore we are most likely to be in a simulation that doesn't have a close resemblance to anyone's past.
↑ comment by nigerweiss · 2013-05-18T22:07:58.133Z · LW(p) · GW(p)
I can see a case that we're more likely to be living in an ancestor simulation (probably not very accurate) than to be actual ancestors, but I believe strongly that the vast majority of simulations will not be ancestor simulations, and therefore we are most likely to be in a simulation that doesn't have a close resemblance to anyone's past.
That seems... problematic. If your argument depends on the future of people like us being likely to generate lots of simulations, and of us looking nothing like the past of the people doing the simulating, that's contradictory. If you simply think that every possible agency in the top level of reality is likely to run enough simulations that people like us emerge accidentally, that seems like a difficult thesis to defend.
↑ comment by Desrtopa · 2013-05-19T14:03:16.030Z · LW(p) · GW(p)
I don't see anything contradictory about it. There's no reason why a simulation that is not of the simulators' own past should contain people only incidentally. We can be a simulation without being a simulation created by our descendants.
Personally, if I had the capacity to simulate universes, simulating my ancestors would probably be somewhere down around the twentieth spot on my priorities list, but most of the things I'd be interested in simulating would contain people.
I don't think I would regard simulating the universe as we observe it as ethically acceptable though, and if I were in a position to do so, I would at the very least lodge a protest against anyone who tried.
↑ comment by nigerweiss · 2013-05-19T19:06:18.000Z · LW(p) · GW(p)
We can be a simulation without being a simulation created by our descendants.
We can, but there's no reason to think that we are. The simulation argument isn't just 'whoa, we could be living in a simulation' - it's 'here's a compelling anthropic argument that we're living in a simulation'. If we disregard the idea that we're being simulated by close analogues of our own descendants, we lose any reason to think that we're in a simulation, because we can no longer speculate on the motives of our simulators.
↑ comment by elharo · 2013-05-20T10:57:29.008Z · LW(p) · GW(p)
I think the likelihood of our descendants simulating us is negligible. While it is remotely conceivable that some super-simulators, who are astronomically larger than us and not necessarily subject to the same physical laws, could pull off such a simulation, I think there is no chance that our descendants, limited by the energy output of a star, the number of atoms in a few planets, and the speed of light barrier, could plausibly simulate us at the level of detail we experience.
This is the classic fractal problem. As the map becomes more and more accurate, it becomes larger and larger until it is the same size as the territory. The only simulation our descendants could possibly achieve, assuming they don't have better things to do with their time, would be much less detailed than reality.
↑ comment by Desrtopa · 2013-05-19T22:30:46.212Z · LW(p) · GW(p)
I don't think that the likelihood of our descendants simulating us at all is particularly high; my predicted number of ancestor simulations should such a thing turn out to be possible is zero, which is one reason I've never found it a particularly compelling anthropic argument in the first place.
But, if people living in universes capable of running simulations do tend to run simulations, then it's probable that most people will be living in simulations, regardless of whether anyone ever chooses to run an ancestor simulation.
↑ comment by nigerweiss · 2013-05-20T00:12:37.567Z · LW(p) · GW(p)
Zero? Why?
At the fundamental limits of computation, such a simulation (with sufficient graininess) could be undertaken with on the order of hundreds of kilograms of matter and a sufficient supply of energy. If the future isn't ruled by a power singleton that forbids dicking with people without their consent (i.e. if Hanson is more right than Yudkowsky), then somebody (many people) with access to that much wealth will exist, and some of them will run such a simulation, just for shits and giggles. Given no power singleton, I'd be very surprised if nobody decided to play god like that. People go to Renaissance fairs, for goodness' sake. Do you think that nobody would take the opportunity to bring back whole lost eras of humanity in bottle-worlds?
As for the other point, if we decide that our simulators don't resemble us, then calling them 'people' is spurious. We know nothing about them. We have no reason to believe that they'd tend to produce simulations containing observers like us (the vast majority of computable functions won't). Any speculation, if you take that approach, that we might be living in a simulation is entirely baseless and unfounded. There is no reason to privilege that cosmological hypothesis over simpler ones.
↑ comment by Desrtopa · 2013-05-20T00:49:55.383Z · LW(p) · GW(p)
I think it's more likely than not that simulating a world like our own would be regarded as ethically impermissible. Creating a simulated universe which contains things like, for example, the Killing Fields of Cambodia, seems like the sort of thing that would be likely to be forbidden by general consensus if we still had any sort of self-governance at the point where it became a possibility.
Plus, while I've encountered plenty of people who suggest that somebody would want to create such a simulation, I haven't yet known anyone to assert that they would want to make such a simulation.
I don't understand why you're leaping from "simulators are not our descendants" to "simulators do not resemble us closely enough to meaningfully call them 'people.'" If I were in the position to create universe simulations, rather than simulating my ancestors, I would be much more interested in simulating people in what, from our perspective, is a wholly invented world (although, as I said before, I would not regard creating a world with as much suffering as we observe as ethically permissible.) I would assign a far higher probability to simulators simulating a world with beings which are relatable to them than a world with beings unrelatable to them, provided they simulate a world with beings in it at all, but their own ancestors are only a tiny fraction of relatable being space.
↑ comment by NancyLebovitz · 2013-05-19T14:24:54.044Z · LW(p) · GW(p)
Also, simulating one's ancestors would be something that you'd only need to do once, or (more likely) enough times to accommodate different theories. Simulating one's ancestors in what-if scenarios would probably be more common, unless the simulators just don't care about that sort of fun.
↑ comment by elharo · 2013-05-20T10:50:50.082Z · LW(p) · GW(p)
I don't think it's that hard to defend. That people like us emerge accidentally is the default assumption of most working scientists today. Personally I find that a lot more likely than that we are living in a simulation.
And even if you think that it is more likely that we are living in a simulation (I don't, by the way) there's still the question of how the simulators arose. I'd prefer not to make it an infinite regress. Such an approach veers dangerously close to unfalsifiable theology. (Who created/simulated God? Meta-God. Well then, who created/simulated Meta-God? Meta-Meta-God. And who created/simulated Meta-Meta-God?...)
Sometime, somewhere, there's a start. Occam's Razor suggests that the start is our universe, with the Big Bang, and that we are not living in a simulation. But even if we are living in a simulation, someone is not living in a simulation.
I also think there are stronger, physical arguments for assuming we're not in a digital simulation. That is, I think the universe routinely does things we could not expect any digital computer to do. But that is a subject for another post.
↑ comment by AlanCrowe · 2013-05-19T20:22:10.027Z · LW(p) · GW(p)
The human brain is subject to glitches, such as petit mal, transient ischaemic attack, or misfiling a memory of a dream as a memory of something that really happened.
There is a lot of scope for a cheap simulation to produce glitches in the matrix without those glitches spoiling the results of the simulation. The inside people notice something off and just shrug. "I must have dreamt it" "I had a petit mal." "That wasn't the simulators taking me off line to edit a glitch out of my memory, that was just a TIA. I should get my blood pressure checked."
And the problem of "brain farts" gives the simulators a very cheap way of protecting the validity of the simulation's results against people noticing glitches and derailing the simulation by going on a glitch hunt, motivated by the theory that they might be living in a simulation. Simply hide the simulation hypothesis by editing it out of Nick Bostrom's mind under the guise of a TIA. In the simulation, Nick wakes up with his coffee spilled and his head on the desk. Thinking up the simulation hypothesis "never happened". In all the myriad simulations, the simulation hypothesis is never discussed.
I'm not sure that entirely resolves the matter. How can the simulators be sure that editing out the simulation hypothesis works as smoothly as they expect? Perhaps they run a few simulations with it left in. If it triggers an in-simulation glitch hunt that compromises the validity of the simulation, they have their answer and can turn off the simulation.
↑ comment by NancyLebovitz · 2013-05-19T20:54:09.483Z · LW(p) · GW(p)
I've wondered about that sort of thing-- if you look for something and find it somewhere that you'd have sworn you'd checked three times, you'll assume it's a problem with your memory or a sort of ill-defined perversity of things, not a Simulation glitch.
↑ comment by JoshuaZ · 2013-05-19T15:02:11.389Z · LW(p) · GW(p)
The problem is more serious than that, in that not only is our universe computationally expensive, it is set up in a way such that it would (apparently) have a lot of trouble doing universe simulations. You cannot simulate n+1 arbitrary bits with just n qubits. This means that a simulation computer needs to be at least as effectively large as what it is simulating. You can assume that some aspects are more coarse-grained (so you don't do a perfect simulation of most of Earth, just, say, the few kilometers near the surface where humans and other life are likely to be), but this is still a lot of stuff.
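One way to spell out the counting claim behind "you cannot simulate n+1 arbitrary bits with just n qubits" (a gloss not in the original comment):

```latex
% n qubits span a Hilbert space of dimension 2^n, so they contain at most
% 2^n pairwise orthogonal (perfectly distinguishable) states, while
% faithfully representing n+1 arbitrary classical bits requires 2^{n+1}:
\dim \mathcal{H}_n = 2^n \;<\; 2^{n+1} = \bigl|\{0,1\}^{n+1}\bigr|
% The Holevo bound makes the same point information-theoretically: the
% classical information retrievable from n qubits is at most n bits.
```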
↑ comment by Randaly · 2013-05-19T09:52:33.468Z · LW(p) · GW(p)
A fourth answer is that the entire world/universe isn't being simulated; only a small subset of it is. I believe that most arguments about simulations assume that most simulators wouldn't simulate the entire current population.
↑ comment by nigerweiss · 2013-05-19T19:03:34.985Z · LW(p) · GW(p)
That doesn't actually solve the problem: if you're simulating fewer people, that weakens the anthropic argument proportionately. You've still only got so much processor time to go around.
↑ comment by NancyLebovitz · 2013-05-19T14:27:10.533Z · LW(p) · GW(p)
Might my lack of desire to travel mean that I'm more likely to be a PC?
↑ comment by JGWeissman · 2013-05-18T21:11:28.236Z · LW(p) · GW(p)
The other answer is that we are living in a much grainier simulation, and either there are super-intelligent demons flitting around between ticks of the world clock, falsifying the results of physics experiments and making smoke detectors work, or that there is a global conspiracy of some kind, orchestrated by the simulators, to which most of science is party, to convince the bulk of the population that we are living in a more computationally expensive universe.
To the extent that super-intelligent demons / global conspiracies are both required for a grainier simulation to work and unreasonable to include in a simulation hypothesis, this undermines your claim that "We could easily be supplied with a far, far grainier simulation and never know the difference. If you're interested in humans, you'd certainly take running many orders of magnitude more simulations, over running a single, imperceptibly more accurate simulation, far slower."
↑ comment by nigerweiss · 2013-05-18T21:56:34.849Z · LW(p) · GW(p)
Not for the simulations to work - only for the simulations to look exactly like the universe we now find ourselves in. 95% of human history could have played out, unchanged, in a universe without relativistic effects or quantum weirdness, far more inexpensively. We simply wouldn't have had the tools to measure the difference.
Even after the advent of things like particle accelerators, we could still be living in a very similar but less expensive universe, and things would be mostly unchanged. Our experiments would tell us that Newtonian mechanics is perfectly correct to as many decimal places as we can measure, and that atoms are distinct, discrete point objects with a well-defined mass, position, and velocity, and that would be fine. That'd just be the way things are. Very few non-physicist people would be strongly impacted by the change.
In other words, if they're interested in simulating humans, there are very simple approximations that would save an enormous quantity of computing power per second. The fact that we don't see those approximations in place (and, in fact, are living in such a computationally lavish universe) is evidence that we are not living in a simulation.
↑ comment by JGWeissman · 2013-05-18T23:48:04.423Z · LW(p) · GW(p)
Ok, before you were talking about "grainier" simulations, I thought you meant computational shortcuts. But now you are talking about taking out laws of physics which you think are unimportant. Which is clever, but it is not so obvious that it would work.
It is not so easy to remove "quantum weirdness", because quantum is normal and lots of things depend on it, like atoms not losing their energy to electromagnetic radiation. You want to patch that by making atoms indivisible and forgetting about the subatomic particles? Well, there goes chemistry, and electricity. Maybe you patch those also, but then we end up with a grab bag of brute facts about physics, unlike the world we experience, where if you know a bit about quantum mechanics, the periodic table of the elements actually makes sense. Transistors also depend on quantum mechanics, and the engineering of transistors depends on people understanding quantum mechanics; so if you patch that, you need to patch things on the level of making sure inventors invent the same level of technology, and we are back to simulator-backed conspiracies.
↑ comment by Luke_A_Somers · 2013-05-20T00:20:52.374Z · LW(p) · GW(p)
If it's an ancestor simulation for the purposes of being an ancestor simulation, then it could well evaluate everything on a lazy basis, with the starting points being mental states.
It would go as far as it needed in resolving the world to determine what the next mental state ought to be. A chair can just be 'chair' with a link to its history so it doesn't generate inconsistencies.
You have a deep hierarchy of abstractions, and only go as deep as needed.
↑ comment by JGWeissman · 2013-05-20T01:25:44.867Z · LW(p) · GW(p)
I agree, and I thought at first that was the sort of thing nigerweiss was referring to with "grainier" simulations, until they started talking about a "universe without relativistic effects or quantum weirdness".
↑ comment by nigerweiss · 2013-05-19T02:09:29.106Z · LW(p) · GW(p)
There's a sliding scale of trade-offs you can make between efficiency and Kolmogorov complexity of the underlying world structure. The higher the level your model is, the more special cases you have to implement to make it work approximately like the system you're trying to model. Suffice to say that it'll always be cheaper to have a mind patch the simpler model than to just go ahead and run the original simulation - at least, in the domain that we're talking about.
And, you're right - we rely on Solomonoff priors to come to conclusions in science, and a universe of that type would be harder to do science in, and history would play out differently. However, I don't think there's a good way to get around that (that doesn't rely on simulator-backed conspiracies). There are never going to be very many fully detailed ancestor simulations in our future - not when you'd have to be throwing the computational mass equivalents of multiple stars at each simulation, to run them at a small fraction of real time. Reality is hugely expensive. The system of equations describing, to the best of our knowledge, a single hydrogen atom in a vacuum is essentially computationally intractable.
To sum up:
If our descendants are willing to run fully detailed simulations, they won't be able to run very many for economic reasons - possibly none at all, depending on how many optimizations to the world equations wind up being possible.
If our descendants are unwilling to run fully detailed simulations, then we would either be in the past, or there would be a worldwide simulator-backed conspiracy, or we'd notice the discrepancy, none of which seem true or satisfying.
Either way, I don't see a strong argument that we're living in a simulation.
↑ comment by elharo · 2013-05-20T10:39:43.201Z · LW(p) · GW(p)
This argument is anthropomorphizing. It assumes that the purpose of the purported simulation is to model humanity. Suppose it isn't? Suppose the purpose of the simulation is to model a universe with certain physical laws, and one of the unexpected outcomes is that intelligent technological life happens to evolve on a small rocky planet around one star out in the spiral arm of one galaxy. That could be a completely unexpected outcome, maybe even an unnoticed outcome, of a simulation with a very different purpose.
↑ comment by roystgnr · 2013-05-19T01:36:21.140Z · LW(p) · GW(p)
We live in something that is experimentally indistinguishable from an unbelievably computationally expensive universe... but there are whole disciplines of mathematics dedicated to discovering computationally easy ways to calculate results which are indistinguishable from unbelievably computationally expensive underlying mathematical models. If we can already do that, how much easier might it be for The Simulators?
↑ comment by roystgnr · 2013-05-20T14:05:15.306Z · LW(p) · GW(p)
Could anyone explain why this deserved multiple downvotes? Would a couple examples have helped? There's now a heavily upvoted comment from several hours later making the same point I was, so presumably I'm not just being hit by disagreement confused with disapproval.
↑ comment by elharo · 2013-05-20T10:34:45.092Z · LW(p) · GW(p)
Something doesn't click here. You claim "that we live in an unbelievably computationally expensive universe, and we really don't need to. We could easily be supplied with a far, far grainier simulation and never know the difference"; but how do we know that we do live in a computationally expensive universe if we can't recognize the difference between this and a less computationally expensive universe? Almost by definition anything we can measure (or perhaps more accurately have measured) is a necessary component of the simulation.
comment by Benya (Benja) · 2013-05-19T08:26:34.243Z · LW(p) · GW(p)
It's interesting, but it's also, as far as I can tell, wrong.
Birch is willing to concede that if I know that almost all humans live in a simulation, and I know nothing else that would help me distinguish myself from an average human, then I should be almost certain that I'm living in a simulation; i.e., P(I live in a simulation | almost everybody lives in a simulation) ~ 1. More generally, he's willing to accept that P(I live in a simulation | a fraction x of all humans live in a simulation) = x; similar to how, if I know that 60% of all humans have a gene that has no observable effects, and I don't know anything about whether I specifically have that gene, I should assign 60% probability to the proposition that I have that gene.
However, Bostrom's argument rests on the idea that our physics experiments show that there is a lot of computational power in the universe that can in principle be used for simulations. Birch points out that if we live in a simulation, then our physics experiments don't necessarily give good information about the true computational power in the universe. My first intuition would be that the argument still goes through if we don't live in a simulation, so perhaps we can derive an almost-contradiction from that? [ETA: Hm, that wasn't a very good explanation; Eliezer's comment does better.] Birch considers such a variation and concludes that we would need a principle that P(I live in a simulation | if I don't live in a simulation, then a fraction x of all humans lives in a simulation) >= x, and he doesn't see a compelling reason to believe that. (The if-then is a logical implication.)
But this follows from the principle he's willing to accept. "If I don't live in a simulation, then a fraction x of all humans lives in a simulation" is logically equivalent to (A or B), where A = "A fraction x of all humans lives in a simulation" and B = "the fraction of all humans that live in a simulation is != x, but I, in particular, live in a simulation"; note that A and B are mutually exclusive. Birch is willing to accept that P(I live in a simulation | A) = x, and it's certainly true that P(I live in a simulation | B) = 1. Writing p := P(A | A or B), we get
P(SIM | A or B) = P(SIM | A)·p + P(SIM | B)·(1-p) = x·p + (1-p) >= x.
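A quick numerical sanity check of that last step (not part of the original comment): x·p + (1-p) - x = (1-p)(1-x) >= 0 for all p, x in [0, 1], so the bound holds, with equality only at p = 1 or x = 1.

```python
import numpy as np

# Check P(SIM | A or B) = x*p + (1 - p) >= x over a grid of values of
# x and p = P(A | A or B), both ranging over [0, 1].
x, p = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
posterior = x * p + (1 - p)
assert np.all(posterior >= x - 1e-12)
```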
↑ comment by JonathanBirch · 2013-05-20T07:02:15.593Z · LW(p) · GW(p)
Thanks Benja. This is a good objection to the argument I make in the 'Rejecting Good Evidence' section of the paper, but I think I can avoid it by formulating BIP* more carefully.
Suppose I’m in a situation in which it currently appears to me as though f-sim = x. In effect, your suggestion is that, in this situation, my evidence can be characterized by the disjunction (A ∨ B). You then reason as follows:
(1) Conditional on A, my credence in SIM should be >= x.
(2) Conditional on B, my credence in SIM should be 1.
(3) So overall, given that A and B are mutually exclusive, my credence in SIM should be >= x.
I accept that this is a valid argument. The problem with it, in my view, is that (A ∨ B) is not a complete description of what my evidence says.
Let V represent the proposition that my evidence regarding f-sim is veridical (i.e., the true value of f-sim is indeed what it appears to be). If A is true, then V is also true. So a more complete description of what my evidence says is (A ∧ V) ∨ (B ∧ ~V).
Now we need to ask: is it true that, conditional on (A ∧ V), my credence in SIM should be >= x?
BIP doesn't entail that it should be, since BIP takes no account of the relevance of V. And V is surely relevant, since (on the face of it, at least) V is far more likely to be true if I am not simulated (i.e., Cr(V | ~SIM) >> Cr(V | SIM)).
Indeed, if one were to learn that (A ∧ V) is true, one might well rationally assign credence <= x to SIM. However, it's not important to my argument that one's credence should be <= x: all that matters is that there is no compelling reason to think that it should be >= x.
In short, then, your argument shows that, conditional on a certain description of what my evidence indicates, my credence in SIM should be >= x. But that description is not the most complete description available, and we must always use the most complete description available, because we often find in epistemology that incomplete descriptions of the evidence lead to incorrect inferences.
Nevertheless, I think your response does expose an error in the paper. I should have formulated BIP* like this, explicitly introducing V:
BIP*: Cr [SIM | ((f-sim = x) ∧ V) ∨ ((f-sim ≠ x) ∧ ~V)] >= x
When BIP* is formulated like this, it is not entailed by BIP. Yet this is the modified principle Bostrom actually needs, if he wants to recover his original conclusion while rejecting Good Evidence. So I think the overall argument still stands, once the error you point out is corrected.
↑ comment by Benya (Benja) · 2013-05-20T19:20:07.337Z · LW(p) · GW(p)
Well -- look, you can't possibly fix your argument by reformulating BIP*, because your paper gives a correct mathematical proof that its version of BIP*, plus the "quadripartite disjunction", are enough to imply the simulation argument! :-)
(For people who haven't read the paper: the quadripartite disjunction is H1 = almost no human civilizations survive to posthuman OR H2 = almost no posthuman civilizations run ancestor simulations OR H3 = almost all humans live in a simulation OR SIM = I live in a simulation. To get the right intuition, note that this is logically equivalent to "If I live in the real world, i.e. if not SIM, then H1 or H2 or H3".)
More formally, the argument in your paper shows that BIP* plus Cr(quadripartite disjunction) ~ 1 implies Cr(SIM) >~ 1 - Cr(H1 or H2).
I think that (a) there's a confusion about what the symbol Cr(.) means, and (b) what you're really trying to do is to deny Bostrom's original BIP.
Your credence symbol must be meant to already condition on all your information; recall that the question your paper is examining is whether we should accept that Cr(SIM) ~ 1, which is only an interesting question if this is supposed to take into account our current information. A conditional credence, like Cr(SIM | A), must thus mean: If I knew all I know now, plus the one additional fact that A is true, what would my credence be that I live in a simulation?
[ETA: I.e., the stuff we're conditioning on is not supposed to represent our state of knowledge, it is hypothetical propositions we're taking into account in addition to our knowledge! The reason I'm interested in the BIP* from the paper is not that I consider (A or B) a good representation of our state of knowledge (in which case Cr(SIM | A or B) would simply be equal to Cr(SIM)); rather, the reason is that the argument in your paper shows that together with the quadripartite disjunction it is sufficient to give the simulation argument.]
So Bostrom's BIP, which reads Cr(SIM | f_sim = x) = x, means that given all your current information, if you knew the one additional fact that the fraction of humans that live in a simulation is x, then your credence in yourself living in a simulation would be x. If you want to argue that the simulation argument fails even if our current evidence supports the quadripartite disjunction, because the fact that we observe what we observe gives us additional information that we need to take into account, then you need to argue that BIP is false. I can see ways in which you could try to do this: For example, you could question whether the particle accelerators of simulated humans would reliably work in accordance with quantum mechanics, and if one doesn't believe this, then we have additional information suggesting we're in the outside world. More generally, you'd have to identify something that we observe that none (or very very close to none) of the simulated humans would. A very obvious variant of the DNA analogy illustrates this: If the gene gives you black hair, and you have red hair, then being told that 60% of all humans have the gene shouldn't make you assign a 60% probability to having the gene.
The obvious way to take this into account in the formal argument would be to redefine f_sim to refer to, instead of all humans, only to those humans that live in simulations in which physics etc. looks pretty much like in the outside world; i.e., f_sim says how many of those humans actually live in a simulation. Then, the version of BIP referring to this new f_sim should be uncontroversial, and the above counterarguments would become an attack on the quadripartite disjunction (which is sensible, because they're arguments about the world, and the quadripartite disjunction is where all the empirically motivated input to the argument is supposed to go).
↑ comment by Paul Crowley (ciphergoth) · 2013-05-26T12:54:26.484Z · LW(p) · GW(p)
If your case is that BIP is insufficient to establish the conclusions Bostrom wants to establish, I'm pretty sure it does in fact suffice. If you accept both of these:
- BIP: Cr[SIM|f-sim ≥ x] ≥ x (where f-sim is over all observers in our evidential situation)
- Cr[f-sim ≥ x | V] · Cr[V | ¬SIM] ≥ y_x
then we derive Cr[SIM] ≥ 1 - (1-x)/y_x. x is some estimate of what f-sim might be in our world if we are not in a simulation and our current evidence is veridical, and y_x is our estimate of how likely a large f-sim is given the same assumptions; it's likely to be around f_I · f_p.
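Spelled out (a reconstruction of the derivation, not in the original comment; writing A for the event f-sim ≥ x, and treating Cr[A | V] as interchangeable with Cr[A | V ∧ ¬SIM]):

```latex
% Bayes: Cr[\neg SIM] = Cr[\neg SIM \mid A]\,Cr[A]/Cr[A \mid \neg SIM]
%                     \le Cr[\neg SIM \mid A]/Cr[A \mid \neg SIM].
% BIP gives Cr[\neg SIM \mid A] \le 1 - x, and the second premise gives
% Cr[A \mid \neg SIM] \ge Cr[A \mid V]\,Cr[V \mid \neg SIM] \ge y_x, so
\mathrm{Cr}[\neg\mathrm{SIM}] \le \frac{1-x}{y_x}
\quad\Longrightarrow\quad
\mathrm{Cr}[\mathrm{SIM}] \ge 1 - \frac{1-x}{y_x}
```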
↑ comment by Paul Crowley (ciphergoth) · 2013-05-19T11:28:51.119Z · LW(p) · GW(p)
(Updated)
I think your interpretation of "if I don't live in a simulation, then a fraction x of all humans lives in a simulation" as P(SIM or A) is wrong; it makes more sense to interpret it as P(A|¬SIM). This actually makes the proof simpler: for any A, B, we have that P(A) ≤ P(A|B)/P(B|A) by Bayes theorem, so if we accept that P(¬SIM|A) = (1-x), then we have P(¬SIM) ≤ (1-x)/P(A|¬SIM).
↑ comment by Benya (Benja) · 2013-05-20T17:07:42.147Z · LW(p) · GW(p)
I think your interpretation of "if I don't live in a simulation, then a fraction x of all humans lives in a simulation" as P(SIM or A) is wrong
Huh?
The paper talks about P(SIM | ¬SIM → A), which is equal to P(SIM | SIM ∨ A) because ¬SIM → A is logically equivalent to SIM ∨ A. I wrote the P(SIM | ¬SIM → A) from the paper in words as P(I live in a simulation | if I don't live in a simulation, then a fraction x of all humans lives in a simulation) and stated explicitly that the if-then was a logical implication. I didn't talk about P(SIM or A) anywhere.
↑ comment by Paul Crowley (ciphergoth) · 2013-05-19T21:50:16.057Z · LW(p) · GW(p)
Mark Eichenlaub and Patrick LaVictoire point out on Facebook that if we let p=P(A|B) and q=P(B|A), there's a bound P(A) <= p/(p+q-pq), which is smaller than p/q.
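A random spot-check of both bounds (not from the original comment): with p = P(A|B) and q = P(B|A), the claim is P(A) <= p/(p + q - pq) <= p/q.

```python
import random

# Sample random joint distributions over (A, B) and check both bounds.
for _ in range(10_000):
    w = [random.random() + 1e-9 for _ in range(4)]
    total = sum(w)
    pAB, pAb, paB, _ = (x / total for x in w)  # P(A,B), P(A,~B), P(~A,B), P(~A,~B)
    pA, pB = pAB + pAb, pAB + paB
    p, q = pAB / pB, pAB / pA                  # P(A|B), P(B|A)
    tight = p / (p + q - p * q)
    assert pA <= tight + 1e-9                  # Eichenlaub/LaVictoire bound
    assert tight <= p / q + 1e-9               # ...which improves on p/q
```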
↑ comment by Paul Crowley (ciphergoth) · 2013-05-19T09:25:58.400Z · LW(p) · GW(p)
I've only skimmed the paper but AFAICT this wipes out its central claim. Good work.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-19T08:20:11.544Z · LW(p) · GW(p)
Glancing briefly at this paper, the core idea seems to be that if we are living in a simulation, we don't have good evidence about the power of computation outside the simulation, which then might not be enough to run lots of simulations. Doesn't this argument trivially fail as a matter of logic, because assuming ~SH, we do in fact have good evidence about the expected future power of computation, so that if you accept the first two claims the third claim ~SH still becomes inconsistent as the original SA holds, hence SA still goes through? Or did I miss something on account of skimming through a paper which seems rather long for its core argument?
↑ comment by Benya (Benja) · 2013-05-19T08:28:26.622Z · LW(p) · GW(p)
Doesn't this argument trivially fail as a matter of logic, because assuming ~SH, we do in fact have good evidence about the expected future power of computation, so that if you accept the first two claims the third claim ~SH still becomes inconsistent as the original SA holds, hence SA still goes through?
↑ comment by Mestroyer · 2013-05-19T09:55:48.635Z · LW(p) · GW(p)
He's arguing against the version of SA where, instead of SH being one of the options, "most people with experiences like ours are in simulations" is one of them. The section about rejecting "Good Evidence" deals with the version of SA where SH is an option. He rejects it because the analog of the intuition pump used to justify the kind of anthropic reasoning used in the first version of SA isn't intuitive to him, but I think it's right.
comment by Amanojack · 2013-05-19T13:13:40.109Z · LW(p) · GW(p)
The Simulation Argument is incoherent in the first place, and no complicated refutation is required to illustrate this. It is simply nonsensical to speak of entities in "another" universe simulating "our" universe, as the word universe already means "everything that exists." (Note that more liberal definitions, like "universe = everything we can even conceive of existing," only serve to show the incoherence more directly: the speaker talks of everything she can conceive of existing "plus more" that she is also conceiving as existing - immediately contradictory.)
By the way, this is the same reason an AI in a box cannot ever know it's in a box. No matter how intelligent it may be, it remains an incoherent notion for an AI in a box to conceive of something "outside the box." Not even a superintelligence gets a free pass on self-contradiction.
↑ comment by ArisKatsaris · 2013-05-19T13:38:33.375Z · LW(p) · GW(p)
It is simply nonsensical to speak of entities in "another" universe simulating "our" universe, as the word universe already means "everything that exists."
This seems a silly linguistic nitpick - e.g. perhaps other people use "universe" to mean our particular set of three dimensions of space and one dimension of time, or perhaps other people use "universe" to mean everything which is causally connected forwards and backwards to our own existence, etc.
If the Simulation Argument used the phrase "local set of galaxies" instead of "universe", would you still call it incoherent? If changing a single word is enough to change an argument from coherent to incoherent, then frankly you didn't find a fundamental flaw, you found a linguistic nitpick.