Naturally solved problems that are easy to verify but that would be hard to compute
post by Douglas_Reay · 2017-03-29T13:01:54.336Z · 16 comments
Nick Bostrom's Simulation Argument hit the news recently when a physicist published a blog post about it:
No, we probably don’t live in a computer simulation
Some of the ensuing responses discussed the fidelity with which such a simulation would need to be run in order to keep the population living within it guessing as to whether they were in a digital simulation, a topic that has been discussed before on LessWrong:
If a simulation can be not just run, but also loaded from previous saved states and then edited, it should be possible for the simulation's Architect to start it running at low granularity and wait for some inhabitant to notice an anomaly. The Architect can then rewind a little, use a more accurate but computationally intensive algorithm in the relevant parts of the inhabitant's time cone, and edit the saved state to include that additional detail, before setting the simulation running again and waiting for the next anomaly.
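A minimal sketch of that control loop, where every callable (step, detect_anomaly, refine) is a hypothetical placeholder, and refine is assumed to actually resolve the anomaly it is handed:

```python
import copy

def run_with_rewind(initial_state, step, detect_anomaly, refine, max_ticks):
    """Checkpoint-and-rewind loop: simulate cheaply, and whenever an
    inhabitant could notice an artefact, restore the last checkpoint,
    add detail to the affected region, and replay from there."""
    checkpoints = [(0, copy.deepcopy(initial_state))]
    t, state = 0, initial_state
    while t < max_ticks:
        state = step(state)                  # cheap, low-granularity physics
        t += 1
        anomaly = detect_anomaly(state)
        if anomaly is not None:
            t, state = checkpoints[-1]       # rewind a little
            state = refine(copy.deepcopy(state), anomaly)  # edit saved state
        elif t % 100 == 0:
            checkpoints.append((t, copy.deepcopy(state)))  # periodic save
    return state
```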
nigerweiss suggested:
construct a system with easy-to-verify but arbitrarily-hard-to-compute behavior ("Project: Piss Off God"), and then scrupulously observe its behavior. Then we could keep making it more expensive until we got to a system that really shouldn't be practically computable in our universe.
but I'm wondering how easy that would be.
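To illustrate the asymmetry nigerweiss is relying on, here is a minimal sketch using subset sum as a stand-in NP-complete problem (the specific weights are arbitrary): finding a solution takes time exponential in the number of weights, while checking a claimed solution takes one cheap pass.

```python
import itertools

def solve_subset_sum(weights, target):
    """Brute force: time grows as 2^n -- infeasible for large n."""
    for r in range(len(weights) + 1):
        for combo in itertools.combinations(weights, r):
            if sum(combo) == target:
                return combo
    return None

def verify_subset_sum(candidate, weights, target):
    """Checking a claimed solution is cheap: one pass over the candidate."""
    remaining = list(weights)
    for w in candidate:
        if w not in remaining:
            return False
        remaining.remove(w)
    return sum(candidate) == target

weights = [31, 62, 14, 27, 8, 55, 93, 70]
solution = solve_subset_sum(weights, 100)                   # exponential search
print(solution, verify_subset_sum(solution, weights, 100))  # linear check
```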
The problem would need to be physical (for example, make a net with labelled strands of differing lengths joining the nodes, then hang it from one corner), otherwise humanity would have to do as much work as the simulation does.
The solution should be discrete (for example, which labelled strands make up the limiting path that prevents the lowest point from hanging further down; see the sketch after this list).
The solution should be hard to obtain not just analytically, but also via numerical analysis.
The problem should be scalable to very large sizes (so the net problem wouldn't work: with large nets, making the strands sufficiently different in length that you could tell two close solutions apart would be the limiting factor).
And, ideally, the problem would be one that occurs (and is solved) naturally, so that humanity could simply record data in multiple locations over a period of years, then later decide which examples of the problem to verify. (See this paper by Scott Aaronson: "NP-complete Problems and Physical Reality")
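One further caveat with the net example: modelled as a weighted graph, the taut limiting path from the suspension corner to the lowest node is just a single-source shortest path, which is cheap to compute classically. A minimal sketch (the four-node net and its strand lengths are made up):

```python
import heapq

def dijkstra(graph, source):
    """Shortest strand-length distance from the suspension corner to each
    node; the taut limiting path is exactly a shortest path in this graph."""
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, length in graph[u]:
            nd = d + length
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

# Hypothetical four-node net; edge weights are the labelled strand lengths.
net = {
    "corner": [("a", 1.0), ("b", 2.5)],
    "a": [("corner", 1.0), ("b", 1.2), ("low", 3.0)],
    "b": [("corner", 2.5), ("a", 1.2), ("low", 1.1)],
    "low": [("a", 3.0), ("b", 1.1)],
}
dist, prev = dijkstra(net, "corner")
print(dist["low"], prev)  # 3.3, hanging via corner -> a -> b -> low
```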
Any thoughts?
16 comments
comment by gwern · 2017-03-29T14:50:40.983Z · LW(p) · GW(p)
(See this paper by Scott Aaronson: "NP-complete Problems and Physical Reality")
To expand on this: in that paper, Scott Aaronson points out that it's quite hard to find a problem which looks like it is computing anything interestingly hard, quantum effects aside; for example, he tried using soap bubbles to compute minimum Steiner trees, which people thought were naturally solving an NP-complete problem, and discovered that no, actually they just converge to easy local minima and aren't doing anything special. So if you think you've discovered some super-hard-to-compute behavior, it may simply be finding local minima, or interfering with itself, or doing something which winds up being 'easy' for a universe-simulating computer.
Replies from: bogus
↑ comment by bogus · 2017-03-29T17:54:15.081Z · LW(p) · GW(p)
To expand on this: in that paper, Scott Aaronson points out that it's quite hard to find a problem which looks like it is computing anything interestingly hard, quantum effects aside
Yes. OTOH, quantum effects don't seem to provide a general solution for 'easily-checkable' problems (beyond the modest improvement that Grover's algorithm brings to black-box search). However, closed timelike curves would provide easy solutions for any problem in PSPACE, which does include NP.
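To put a rough number on "modest": for unstructured search over N = 2^n candidates, brute force needs on the order of N evaluations, while Grover's algorithm needs about (π/4)·√N oracle calls, a quadratic rather than exponential speedup. A back-of-envelope sketch:

```python
import math

def queries(n_bits):
    """Compare black-box search costs over N = 2**n_bits candidates."""
    N = 2 ** n_bits
    classical = N / 2                      # expected evaluations, brute force
    grover = (math.pi / 4) * math.sqrt(N)  # oracle calls, Grover's algorithm
    return classical, grover

for n in (20, 40, 60):
    c, g = queries(n)
    print(f"n={n}: classical ~{c:.2e}, Grover ~{g:.2e}")
```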
comment by entirelyuseless · 2017-03-29T14:02:24.364Z · LW(p) · GW(p)
"Then we could keep making it more expensive until we got to a system that really shouldn't be practically computable in our universe."
This is testing the hypothesis that we live in a simulation running inside a universe whose laws are identical to the laws of our simulated universe.
But of course, if we are in a simulation, the laws of the "real" universe could be entirely different. There might even be a halting oracle in that universe.
comment by Douglas_Reay · 2017-03-29T13:03:06.467Z · LW(p) · GW(p)
Are programmers more likely to pay attention to detail in the middle of a functioning simulation run (rather than waiting until the end before looking at the results), or to the causes of unexpected stuttering and resource usage? Could a pattern of enforced 'rewind events' be used to communicate?
comment by der (DustinWehr) · 2017-04-04T16:37:45.073Z · LW(p) · GW(p)
For Bostrom's simulation argument to conclude the disjunction of the two interesting propositions (our doom, or we're sims), you need to assume there are simulation runners who are motivated to do very large numbers of ancestor simulations. The simulation runners would be ultrapowerful, probably rich, amoral history/anthropology nerds, because all the other ultrapowerful amoral beings have more interesting things to occupy themselves with. If it's a set-it-and-forget-it simulation, that might be plausible. If the simulation requires monitoring and manual intervention, I think it's very implausible.
Replies from: g_pepper
↑ comment by g_pepper · 2017-04-04T17:11:48.852Z · LW(p) · GW(p)
For Bostrom's simulation argument to conclude the disjunction of the two interesting propositions (our doom, or we're sims), you need to assume there are simulation runners who are motivated to do very large numbers of ancestor simulations. The simulation runners would be ultrapowerful, probably rich, amoral history/anthropology nerds, because all the other ultrapowerful amoral beings have more interesting things to occupy themselves with.
While Bostrom's argument as originally stated does specifically reference ancestor simulations, here Bostrom says:
More generally, simulators might create many simulated people who are very different from their own ancestors, or who live in worlds that are very different from the one that the simulators live in. It is possible that we are living in such a simulation. I don't know of any way of estimating the probability that our hypothetical simulators (or their ancestors) are similar to us, or that their world is similar to the world we experience. (The original paper focuses on ancestor-simulations because the methodology is more solid for that case. It is less clear whether some kind of principle of indifference could also be applied to a reference class of "observer-moments" that are very different from one another...).
So, I think that the simulation argument could be generalized to refer to "civilization simulations" in lieu of "ancestor simulations". If so, there is no reason to assume that the simulation runners would necessarily be history/anthropology nerds. In fact, there could be any number of reasons why running a civilization simulation could be useful or interesting.
comment by denimalpaca · 2017-03-31T14:51:29.364Z · LW(p) · GW(p)
An idea I keep coming back to that would imply we should reject the idea of being in a simulation is the fact that the laws of physics remain the same regardless of your reference frame or place in the universe.
You give the example of a conscious observer recognizing an anomaly, and the simulation runner rewinding time to fix the problem. If only the region within that observer's time cone is re-run, the simulation may exhibit strange new behavior at the edge of that time cone, propagating an error. I don't think the error can be recovered so much as moved when dealing with lower-resolution simulations.
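As a toy illustration of this worry (a sketch under simplified assumptions, using a 1D heat equation as a stand-in for the simulation's physics): perturb only the boundary cells, as a coarse outer simulation might, and the discrepancy leaks steadily into the "perfectly" re-simulated interior.

```python
import numpy as np

def diffuse(u, steps, alpha=0.2):
    """Explicit finite-difference heat equation; the endpoints act as fixed
    boundary values supplied by the (coarse) simulation outside the region."""
    u = u.copy()
    for _ in range(steps):
        u[1:-1] += alpha * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

rng = np.random.default_rng(0)
u0 = rng.random(200)

reference = diffuse(u0, steps=500)   # "true" high-resolution history

perturbed = u0.copy()
perturbed[0] += 0.01                 # slightly wrong boundary values,
perturbed[-1] += 0.01                # standing in for a coarse edge
rerun = diffuse(perturbed, steps=500)

err = np.abs(reference - rerun)
for i in (0, 5, 20, 50):
    print(i, err[i])  # error is largest at the edge and leaks inward
```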
It makes the most sense to me that, if we are in a simulation, it is a "perfect" simulation in which the most foundational forces and quantum effects are simulated all the time, because they are all, in a way, interacting with each other all the time.
Replies from: Douglas_Reay, dogiv
↑ comment by Douglas_Reay · 2018-03-08T16:28:26.139Z · LW(p) · GW(p)
Companies writing programs to model and display large 3D environments in real time face a similar problem, in that they only have limited resources. One workaround they commonly use is the "impostor": a flat, pre-rendered stand-in that is substituted for a distant object until the viewer gets close enough to tell the difference.
A solar-system-sized simulation of a civilisation that has not made observable changes to anything outside our own solar system could take a lot of short cuts when generating the photons that arrive from outside. In particular, until a telescope or camera of a particular resolution has been invented, would the simulators need to bother generating thousands of years' worth of such photons in more detail than any device yet present could capture?
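A minimal sketch of that resolution test (the feature size, distance, and instrument resolution are made-up placeholders): the incoming-photon generator only needs more detail than an impostor once some observer's instrument can actually resolve the source.

```python
import math

def angular_size(diameter_m, distance_m):
    return 2 * math.atan(diameter_m / (2 * distance_m))

def impostor_suffices(diameter_m, distance_m, instrument_resolution_rad):
    """True if the feature subtends less than the best available instrument
    can resolve, so a cached low-detail 'impostor' is indistinguishable."""
    return angular_size(diameter_m, distance_m) < instrument_resolution_rad

LY = 9.46e15  # metres per light year
# Hypothetical: a 10,000 km surface feature, 10 light years away, observed
# with an instrument that resolves 1e-7 radians (placeholder figures).
print(impostor_suffices(1e7, 10 * LY, 1e-7))  # True: the impostor suffices
```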
↑ comment by dogiv · 2017-03-31T17:08:31.669Z · LW(p) · GW(p)
If it is the case that we are in a "perfect" simulation, I would consider that no different than being in a non-simulation. The concept of being "in a simulation" is useful only insofar as it predicts some future observation. Given the various multiverses that are likely to exist, any perfect simulation an agent might run is probably just duplicating a naturally-occurring mathematical object which, depending on your definitions, already "exists" in baseline reality.
The key question, then, is not whether some simulation of us exists (nearly guaranteed) but how likely we are to encounter an imperfection or interference that would differentiate the simulation from the stand-alone "perfect" universe. Once that happens, we are tied in to the world one level up and should be able to interact with it.
There's not much evidence about the likelihood of a simulation being imperfect. Maybe imperfect simulations are more common than perfect ones because they're more computationally tractable, but that's not a lot to go on.
Replies from: denimalpaca, Lumifer
↑ comment by denimalpaca · 2017-04-01T17:03:53.783Z · LW(p) · GW(p)
I 100% agree that a "perfect simulation" and a non-simulation are essentially the same, noting Lumifer's comment that our programmer(s) are gods by another name in the case of simulation.
My comment is really about your second paragraph: how likely are we to see an imperfection? My reasoning about error propagation in an imperfect simulation would imply a fairly high probability of us seeing an error eventually. This assumes that we are a near-perfect simulation of the universe "above" ours, with "perfect" simulation being done at small scales around conscious observers.
So I'm not really sure if you just didn't understand what I'm getting at, because we seem to agree, and you just explained back to me what I was saying.
Replies from: dogiv
↑ comment by dogiv · 2017-04-03T14:06:24.953Z · LW(p) · GW(p)
I guess where we disagree is in our view of how a simulation would be imperfect. You're envisioning something much closer to a perfect simulation, where slightly incorrect boundary conditions would cause errors to propagate into the region that is perfectly simulated. I consider it more likely that if a simulation has any interference at all (such as rewinding to fix noticeable problems) it will be filled with approximations everywhere. In that case the boundary condition errors aren't so relevant. Whether we see an error would depend mainly on whether there are any (which, like I said, is equivalent to asking whether we are "in" a simulation) and whether we have any mechanism by which to detect them.
Replies from: denimalpaca
↑ comment by denimalpaca · 2017-04-03T16:23:07.388Z · LW(p) · GW(p)
Everyone has different ideas of what a "perfectly" or "near perfectly" simulated universe would look like; I was trying to go off of Douglas's idea of it, where I think the boundary errors would have an effect.
I still don't see how rewinding would be interference. I imagine interference would be some part of the "above ours" universe getting inside this one, say a particle with quantum entanglement spanning the two universes (although it would really just be in the "above ours" universe, since that universe has to be a superset of ours; it's just a particle that we can also observe).
↑ comment by Lumifer · 2017-03-31T17:12:57.328Z · LW(p) · GW(p)
If it is the case that we are in a "perfect" simulation, I would consider that no different than being in a non-simulation.
One difference is the OFF switch. Another difference is the (presumed) existence of a debugger.
Recall that the traditional name for the simulation hypothesis is "creationism".
comment by Douglas_Reay · 2017-03-29T13:02:51.490Z · LW(p) · GW(p)
Should such an experiment be carried out, or is persuading an Architect to terminate the simulation you are in, by frustrating her aim of keeping you guessing, not a good idea?