Dissolving the Experience Machine Objection

post by leogao · 2021-10-03T16:56:28.312Z · 10 comments

The experience machine objection is often leveled against utilitarian theories that take utility to be a function of observations or brain states. The version of the argument I'm considering here is a more cruxified version that strips out a bunch of confounding factors, and it goes something like this: imagine you had a machine you could step into that would perfectly simulate your experience in the real world. The objection goes that since most people would feel at least slightly more willing to stay in reality than to enter the machine, there's at least some value to being in the "real" world, and therefore we can't accept any branch of utilitarianism that assumes utility is solely a function of observations or brain states.

I think if you accept the premise that the machine somehow magically truly simulates perfectly and indistinguishably from actual reality, in such a way that there is absolutely no way of knowing the difference between the simulation and the outside universe, then the simulated universe is essentially isomorphic to reality, and we should be fully indifferent. I'm not sure it even makes sense to say either universe is more "real", since they're literally identical in every way that matters (for the differences we can't observe even in theory, I appeal to Newton's flaming laser sword). Our intuitions here should be closer to stepping into an identical parallel universe, rather than entering a simulation. 

However, I think it's not actually possible to have such a perfect experience machine, which would explain our intuition for not wanting to step inside. First, if this machine simulates reality using our knowledge of physics at the time, it's entirely possible that there are huge parts of physics you would never be able to find out about inside the machine, since you can never be 100% sure whether you really know the Theory of Everything. Second, this machine would have to be smaller than the universe in some sense, since it's part of the universe. As a result, the simulation would probably have to cut corners or reduce the size of the simulated universe substantially to compensate. 

These things both constrain the possible observations you can have inside the machine, which allows you to distinguish between simulation and reality. That means it's totally valid to penalize the utility of living inside a simulation by some amount, depending on how strongly you feel about the limitations (and how good the machine is). Just because there's a penalty doesn't mean that other factors can't outweigh it, though. Lots of versions of the objection try to sweeten the deal for the world inside the machine further ("you can experience anything you want" / "you get maximum serotonin" / etc.); this doesn't really change the core question of whether our utility function should depend on anything other than observations. If the perks are really good and you care less about the limitations than about the perks, it makes perfect sense to go inside the machine; if you care more about the limitations than the perks, it makes perfect sense not to.
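
As a toy illustration of that decision rule (my own sketch, not part of the original argument; the function name and all numbers are made up):

```python
# Toy model of the trade-off above. All quantities are stipulated, not
# measured: "perks" is how much you value the machine's sweeteners,
# "penalty" is how much you care about its detectable limitations.

def should_enter_machine(perks: float, penalty: float) -> bool:
    """Enter iff the machine's perks outweigh the penalty you assign
    to its limitations, relative to staying in reality."""
    return perks > penalty

print(should_enter_machine(perks=5.0, penalty=1.0))  # True: perks dominate
print(should_enter_machine(perks=0.5, penalty=1.0))  # False: limitations dominate
```

The point is just that the penalty enters the utility calculation as one term among others, rather than acting as a trump card.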

The crux of the experience machine thought experiment is that even when all else is held constant, we should assign epsilon more utility to whatever is "real", and therefore utility does not depend solely on your observations/brain states. I argue that this epsilon penalty makes sense given practical limitations to any real experience machines, which is probably what informs our intuitions, and that if you somehow handwaved those limitations away then we really truly should be indifferent.

10 comments


comment by Vaughn Papenhausen (Ikaxas) · 2021-10-03T19:17:11.632Z

I think if you accept the premise that the machine somehow magically truly simulates perfectly and indistinguishably from actual reality, in such a way that there is absolutely no way of knowing the difference between the simulation and the outside universe, then the simulated universe is essentially isomorphic to reality, and we should be fully indifferent. I’m not sure it even makes sense to say either universe is more “real”, since they’re literally identical in every way that matters (for the differences we can’t observe even in theory, I appeal to Newton’s flaming laser sword). Our intuitions here should be closer to stepping into an identical parallel universe, rather than entering a simulation.

I see what you're trying to get at here, but as stated I think this begs the question. You're assuming here that the only ways universes could differ that would matter are ways that have some impact on what we experience. People who accept the experience machine (let's call them "non-experientialists") don't agree. They (usually) think that whether we're deceived, or whether our beliefs are actually true, can have some effect on how good our life is.

For example, consider two people whose lives are experientially identical, call them Ron and Edward. Ron lives in the real world, and has a wife and two children who love him, and whom he loves, and who are a big part of the reason he feels his life is going well. Edward lives in the experience machine. He has exactly the same experiences as Ron, and therefore also thinks he has a wife and children who love him. However, he doesn't actually have a wife and children, just some experiences that make him think he has a wife and children (so of course "his wife and children" feel nothing for him, love or otherwise. Perhaps these experiences are created by simulations, but suppose the simulations are p-zombies who don't feel anything). Non-experientialists would say that Ron's life is better than Edward's, because Edward is wrong about whether his wife and children love him (naturally, Edward would be devastated if he realized the situation he was in; it's important to him that his wife and children love him, so if he found out they didn't, he would be distraught). He won't ever find this out, of course (since his life is experientially identical to Ron's, and Ron will never find this out, since Ron doesn't live in the experience machine). But the fact that, if he did, he would be distraught, and the fact that it's true, seem to make a difference to how well his life goes, even though he will never actually find out. (Or at least, this is the kind of thing non-experientialists would say.)

(Note the difference between the way the experience machine is being used here and the way it's normally used. Normally, the question is "would you plug in?" But here, the question is "are these two experientially-identical lives, one in the experience machine and one in the real world, equally as good as each other? Or is one better, if only ever-so-slightly?" See this paper for more discussion: Lin, "How to Use the Experience Machine")

For a somewhat more realistic (though still pretty out-there) example, imagine Andy and Bob. Once again, Andy has a wife and children who love him. Bob also has a wife and children, and while they pretend to love him while he's around, deep down his wife thinks he's boring and his children think he's tyrannical; they only put on a show so as not to hurt his feelings. Suppose Bob's wife and children are good enough at pretending that they can fool him for his whole life (and don't ever let on to anyone else who might let it slip). It seems like Bob's life is actually pretty shitty, though he doesn't know it.

Ultimately I'm not sure how I feel about these thought experiments. I can get the intuition that Edward and Bob's lives are pretty bad, but if I imagine myself in their shoes, the intuition becomes much weaker (since, of course, if I were in their shoes, I wouldn't know that "my" wife and children don't love "me"). I'm not sure which of these intuitions, if either, is more trustworthy. But this is the kind of thing you have to contend with if you want to understand why people find the experience machine compelling.

Replies from: leogao, samshap, TAG
comment by leogao · 2021-10-04T02:51:15.434Z

I guess in that case I think what I'm doing is identifying the dissolution of the experience machine objection as being implied by Newton's flaming laser sword, which I have far stronger convictions about. For those who reject NFLS, my argument doesn't really apply. However, I personally was in the category of people who firmly accept NFLS and also had reservations about the experience machine, so I don't think this implication is trivial.

As for the Andy and Bob situation, I think that objections like that can be similarly dissolved, given an acceptance of NFLS. If Bob has literally absolutely no way of finding out whether his wife and children truly love him, if they act exactly in the way they would if they really did, then I would argue that whether or not they "really" love him is equally irrelevant by NFLS. Our intuitions in this case are guided by the fact that in reality, Potemkin villages almost always eventually fall apart.

comment by samshap · 2021-10-04T01:48:09.921Z

I think the Bob example is very informative! I think there's an intuitive and logical reason why we think Bob and Edward are worse off. Their happiness is contingent on the masquerade continuing, which has a probability less than one in any plausible setup.

(The only exception to this would be if we're analyzing their lives after they are dead.)
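
To put invented numbers on that reasoning (my illustration, not samshap's; every quantity below is assumed):

```python
# Invented numbers illustrating the point: if Bob's happiness is
# contingent on a masquerade that holds with probability p < 1, his
# expected utility is strictly below the never-exposed baseline.

p_holds = 0.9           # chance the pretense never slips (assumed)
u_never_exposed = 10.0  # Bob's utility if he never finds out (assumed)
u_exposed = -5.0        # Bob's utility if the truth comes out (assumed)

expected_u = p_holds * u_never_exposed + (1 - p_holds) * u_exposed
print(expected_u)  # 8.5, strictly less than 10.0 for any p < 1
```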

comment by TAG · 2021-10-04T01:03:49.556Z

Upvoted from 0. Why was it downvoted?

comment by Charlie Steiner · 2021-10-03T18:10:31.568Z

If I know ahead of time that the experience machine isn't isomorphic to reality (e.g. the people inside are not simulated, but are instead puppeteered by some nonhuman intelligence optimizing for some objective that I can choose), then that seems like perfectly good grounds for not going in, whether or not I could detect it from inside.

comment by romeostevensit · 2021-10-03T22:54:09.775Z

The machine just simulates you having rejected the machine and that having gone really well.

comment by Dach · 2021-10-05T23:16:25.266Z

Is that your real disagreement with the experience machine?

I think if you accept the premise that the machine somehow magically truly simulates perfectly and indistinguishably from actual reality, in such a way that there is absolutely no way of knowing the difference between the simulation and the outside universe, then the simulated universe is essentially isomorphic to reality, and we should be fully indifferent. I'm not sure it even makes sense to say either universe is more "real", since they're literally identical in every way that matters (for the differences we can't observe even in theory, I appeal to Newton's flaming laser sword). Our intuitions here should be closer to stepping into an identical parallel universe, rather than entering a simulation. 

The experience machine does not need to be an exact simulation of the entire observable universe; it merely needs to be a passably high-quality simulation of your brain and how it would change. You can't see the fundamental particles around you, or the evolution of the universal wavefunction. A superintelligence would thus not need to perfectly fake reality to perfectly deceive you and leave you with the same level of confidence that you're in a simulation as you have right now.

In fact, this is the original construction of the argument: it didn't involve perfectly simulating the universe, it just involved deceiving the subject to such a degree that they wouldn't be smart enough to figure out which was real.

However, I think it's not actually possible to have such a perfect experience machine, which would explain our intuition for not wanting to step inside. First, if this machine simulates reality using our knowledge of physics at the time, it's entirely possible that there are huge parts of physics you would never be able to find out about inside the machine, since you can never be 100% sure whether you really know the Theory of Everything. Second, this machine would have to be smaller than the universe in some sense, since it's part of the universe. As a result, the simulation would probably have to cut corners or reduce the size of the simulated universe substantially to compensate. 

This can't "explain" our intuitions, because the fact that physically simulating the entire universe in exact detail is impossible is evidently not what's causing those intuitions. People who disagree with the experience machine include those who have no idea what a "wave function" is, those who think the earth is flat and the entire universe is in a small crystal dome, leading physicists, philosophers who think building the experience machine as described is possible, and I allege many of the readers of this post before they knew that simulating the universe in exact detail was physically impossible.

Can it explain your disagreement? That's something for you to think about yourself.

I personally would not step in, because I am at least mildly robust to attempts to hijack my senses. Whenever the map could become disentangled from the territory due to the plots of mad philosophers, I will try to avoid this.

comment by Logan Zoellner (logan-zoellner) · 2021-10-05T22:04:57.495Z

I don't think this is a particularly good refutation, since the things utilitarians mostly care about (pleasure and suffering) and the things that the experience machine is likely to be bad at simulating (minute details of quantum physics) have almost no overlap.

I would reject an experience machine even if my (reported) satisfaction after unknowingly entering one was wildly higher than my enjoyment of reality.

comment by ADifferentAnonymous · 2021-10-04T19:17:39.577Z

It sounds like you adhere to a version of NFLS that only counts consequences as observable if you yourself can observe them in practice? So you can't care about the far future if you don't think you'll live to see it? That seems pretty extreme if I'm interpreting it correctly.

comment by JBlack · 2021-10-04T10:16:11.424Z

From self-reflection, I don't think I actually care about whether real experiences matter more than simulated ones. I'm not sure whether it matters to me whether the people I interact with are conscious or not. I can accept that even if the laws of reality emulated inside aren't identical to those outside, there are at least some that are at least as fundamental, discoverable, and interesting as those of reality for the purposes of my lifespan.

One thing that does matter to me is the idea that (at least in principle) in the machine I am deliberately blind to things that affect my future well-being and survival, as well as that of any other simulated things and people I may care about. If there's a hurricane that threatens its power supply, I want to know about it and be able to respond in some manner. So to be a true test, it can't just be a machine subject to external influences as all machines are. It should be as robust as whatever reality underlies it, and that seems extremely unlikely outside a thought experiment.

The other thing that matters to me is that to get in the machine, I also need to trust the person/being/deity offering it with everything that I am and can ever be. Once inside, I have no way whatsoever to know whether I ever leave again.

The other side of this question is that any/all of us may be in a simulation right now, one that even at its worst is much more pleasant than the underlying reality. The true world outside could be unimaginably bad by comparison. Suppose that it is, and your memory of its horrors and the utter non-existence of any possible hope for improvement in reality are returned. You have a choice to exit the simulation forever, or continue with your simulated life on Earth. You will be offered more chances to leave when you next "die" in here. Exit: [Y]/n?