The (Boltzmann) Brain-In-A-Jar

post by PlatypusNinja · 2010-04-19T19:30:39.962Z · LW · GW · Legacy · 1 comments


Response to: Forcing Anthropics: Boltzmann Brains by Eliezer Yudkowsky

There is an argument that goes like this:

"What if you're just a brain in a jar, being fed an elaborate simulation of reality?  Then nothing you do would have any meaning!"


This argument has been reformulated many times.  For example, here is the "Future Simulation" version:

"After the Singularity, we will develop huge amounts of computing power, enough to simulate past Earths with a very high degree of detail.  You have one lifetime in real life, but many millions of simulated lifetimes.  What if the life you're living right now is one of those simulated ones?"


Here is the "Boltzmann Brain" version of the argument:

"Depending on your priors about the size and chaoticness of the universe, there might be regions of the universe where all sorts of random things are happening.  In one of those regions, a series of particles might assemble itself into a version of you.  Through random chance, that series of particles might have all the same experiences you have had throughout your life.  And, in a large enough universe, there will be lots of these random you-like particle groups.  What if you're just a series of particles observing some random events, and next second after you think this you dissolve into chaos?"


All of these are variations on the same possibility.  And you know what?  All of them are potentially true.  I could be a brain in a jar, or a simulation, or a Boltzmann brain.  And I have no way of calculating the probability of any of these, because it depends on priors that I can't even begin to guess.

So how am I still functioning?

My optimization algorithm follows this very simple rule: When considering possible states of the universe, if, in a given state S, my actions have no effect on my utility, then I can safely ignore the possibility of S.
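Here is a minimal sketch of that rule as code.  It is not from the original post: the expected-utility framing, the function names, and the representation of states are all my own assumptions.  The point it illustrates is that any state in which every available action scores the same merely adds a constant to every action's expected utility, so dropping it never changes which action comes out on top.

```python
# Minimal sketch: expected-utility choice with "irrelevant state" pruning.
# A state where every action yields the same utility contributes the same
# constant to every action's score, so removing it cannot change the winner.

def best_action(actions, states, prob, utility):
    """Return the action with the highest expected utility."""
    return max(
        actions,
        key=lambda a: sum(prob[s] * utility[s][a] for s in states),
    )

def prune_irrelevant(actions, states, utility):
    """Keep only the states in which the choice of action matters."""
    return [
        s for s in states
        if len({utility[s][a] for a in actions}) > 1
    ]
```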

For example, suppose I am on a runaway train that is about to go over a cliff.  I have a button marked "eject" and a button marked "self-destruct painfully".  An omniscient, omnitruthful being named Omega tells me: "With 50% probability, both buttons are fake and you're going to go over the cliff and die no matter what you do."  I can safely ignore this possibility because, if it were true, I would have no way to optimize for it.

Suppose Omega tells me there's actually a 99% probability that both buttons are fake.  Maybe I'm pretty sad about this, but the "eject" button is still good for my utility and the "self-destruct" button is still bad.

Suppose Omega now tells me there's some chance the buttons are fake, but I can't estimate the probability, because it depends on my prior assumptions about the nature of the universe.  Still don't care!  Still pushing the eject button!
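Continuing the sketch above (same hypothetical helpers, made-up utility numbers), the runaway-train example looks like this: the "both buttons are fake" state scores every action identically, so it gets pruned, and the recommendation is "eject" for any probability Omega quotes short of 1.

```python
# Hedged illustration of the runaway-train example, reusing best_action and
# prune_irrelevant from the sketch above.  The numbers are invented; only
# their ordering matters.
states = ["buttons_real", "buttons_fake"]
actions = ["eject", "self_destruct"]
utility = {
    "buttons_real": {"eject": 100, "self_destruct": -100},
    "buttons_fake": {"eject": -100, "self_destruct": -100},  # die either way
}

relevant = prune_irrelevant(actions, states, utility)  # ["buttons_real"]

for p_fake in (0.5, 0.99, 0.999999):
    prob = {"buttons_real": 1.0 - p_fake, "buttons_fake": p_fake}
    # The pruned state would add the same term to every action's score,
    # so comparing over the relevant states alone gives the same answer.
    print(p_fake, best_action(actions, relevant, prob, utility))
    # -> "eject" every time
```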

That is how I feel about the brain-in-a-jar problem.

The good news is that this pruning heuristic will probably be a part of any AI we build.  In fact, early forms of AI will probably need to use much stronger versions of this heuristic if we want to keep them focused on the task at hand.  So there is no danger of AIs having existential Boltzmann crises.  (Although, ironically, they actually are brains-in-a-jar, for certain definitions of that term...)

1 comment


comment by PlatypusNinja · 2010-03-31T23:18:59.081Z · LW(p) · GW(p)

The good news is that this pruning heuristic will probably be part of any AI we build. (In fact, early forms of this AI will have to use a much stronger version of this heuristic if we want to keep them focused on the task at hand.)

So there is no danger of AIs having existential Boltzmann crises. (Although, ironically, they actually are brains-in-a-jar, for certain definitions of that term...)