Updating towards the simulation hypothesis because you think about AI

post by SoerenMind · 2016-03-05T22:23:49.424Z · LW · GW · Legacy · 21 comments


(This post was written up in a rush and is very speculative, so it's not as rigorous or as full of links as a good post on this site should be, but I'd rather get the idea out there than never get around to it.)


Here’s a simple argument that could make us update towards the hypothesis that we live in a simulation. This is the basic structure:


1) P(involved in AI* | ¬sim) = very low

2) P(involved in AI | sim) = high


Ergo, assuming that we fully accept the argument and its premises (ignoring e.g. model uncertainty), we should update strongly in favour of the simulation hypothesis: the likelihood ratio P(involved in AI | sim) / P(involved in AI | ¬sim) is very large, and by Bayes' theorem it multiplies whatever prior odds we place on being in a simulation.


Premise 1


Suppose you are a soul who will randomly awaken in one of at least 100 billion beings (the number of Homo sapiens that have lived so far), probably many more. What you know about the world of these beings is that at some point there will be a chain of events leading to the creation of a superintelligent AI. This AI will then go on to colonize the whole universe, making its creation the most impactful event the world will ever see, by an extremely large margin.


Waking up, you see that you’re in the body of one of the first 1000 beings trying to affect this momentous event. Would you be surprised? Given that you were randomly assigned a body, you probably would be.
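To put a rough number on this, assuming a uniform assignment over the roughly 100 billion beings mentioned above (a back-of-the-envelope figure; counting future beings or switching to observer-moments would only make it smaller):

$$P(\text{involved in AI} \mid \neg\text{sim}) \approx \frac{1000}{100 \times 10^{9}} = 10^{-8}$$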


(To make the point even stronger and slightly more complicated: Bostrom suggests using observer-moments, e.g. an observer-second, rather than beings as the fundamental unit of anthropics. You should be even more surprised to find yourself as an observer-second thinking about or even working on AI, since most of the observer-seconds in people's lives don't do so. You reading this sentence may be such a second.)


Therefore, P(involved in AI* | ¬sim) = very low.


Premise 2

 

Given that we’re in a simulation, we’re probably in a simulation created by a powerful AI which wants to investigate something.


Why would a superintelligent AI simulate the people (and even more so, the 'moments') involved in its creation? I have an intuition that there would be many reasons to do so. If I gave it more thought I could probably name some concrete ones, but for now this part of the argument remains shaky.


Another, and probably more important, motive would be to learn about (potential) other AIs. It may be trying to find out who its enemies are, or to figure out ways of acausal trade. An AI created with the 'Hail Mary' approach would need information about other AIs especially urgently. In any case, there are many possible reasons to want to know who else there is in the universe.


Since you can't visit them, the best way to find out is by simulating how they may have come into being. And since this process is inherently uncertain, you'll want to run MANY simulations, Monte Carlo style, with slightly varying conditions. Crucially, to run these simulations efficiently, you'll run observer-moments (read: computations like those in your brain) more often the more causally important they are for the final outcome.
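As a toy illustration of that allocation idea (all names and weights below are made up for the example, not anything the argument depends on), a simulator with a fixed compute budget might spend it roughly in proportion to each observer-moment's causal importance for the final AI:

```python
import random

# Hypothetical observer-moments with made-up "causal importance" weights:
# how strongly each moment is assumed to influence the final AI's properties.
observer_moments = {
    "safety_researcher_choosing_agenda": 0.40,
    "early_funder_deciding_on_grant": 0.25,
    "capabilities_engineer_tuning_model": 0.05,
    "random_person_commuting": 0.001,
}

def allocate_runs(moments, total_runs=1_000_000):
    """Split a fixed budget of simulation runs in proportion to causal importance."""
    total_weight = sum(moments.values())
    return {name: round(total_runs * w / total_weight) for name, w in moments.items()}

def sample_moment(moments):
    """Draw one simulated observer-moment, weighted by causal importance."""
    names, weights = zip(*moments.items())
    return random.choices(names, weights=weights, k=1)[0]

print(allocate_runs(observer_moments))
print(sample_moment(observer_moments))
```

Under any weighting like this, a randomly sampled simulated moment is far more likely to belong to someone shaping the final AI than to an uninvolved observer, which is the intuition the next paragraph relies on.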


Therefore, the thoughts of people who are more causally connected to the properties of the final AI will be run many times, especially the thoughts of those who got involved first, since they may cause path changes. AI capabilities researchers would not be as interesting to simulate, because their work has less effect on the eventual properties of the AI.


If figuring out what other AIs are like is an important convergent instrumental goal for AIs, then a lot of minds created in simulations may be created for this purpose. Under SSA, the assumption that “all other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers [or observer moments] (past, present and future) in their reference class”, it would seem rather plausible that,

P(involved in AI | sim) = high


(The closer an observer-moment is in the causal chain to the final AI's properties, the more often it gets simulated; this is also why pure capabilities work matters less here.)


If you are reading this, you're probably one of those people who could have some influence over the eventual properties of a superintelligent AI, and as a result you should update towards living in a simulation that's meant to figure out how an AI gets created.
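For concreteness, here is a minimal sketch of the size of that update in odds form. The 10^-8 is the rough count from Premise 1; the prior and the value for P(involved in AI | sim) are placeholder assumptions chosen purely for illustration.

```python
def posterior_odds_sim(prior_odds, p_involved_given_sim, p_involved_given_not_sim):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    likelihood_ratio = p_involved_given_sim / p_involved_given_not_sim
    return prior_odds * likelihood_ratio

# Placeholder numbers, for illustration only.
prior_odds = 0.01 / 0.99          # start out thinking "sim" is quite unlikely
p_involved_given_sim = 0.1        # Premise 2: "high" (assumed value)
p_involved_given_not_sim = 1e-8   # Premise 1: ~1000 out of 100 billion

odds = posterior_odds_sim(prior_odds, p_involved_given_sim, p_involved_given_not_sim)
print(f"posterior odds in favour of sim: {odds:.3g}")           # ~1e+05
print(f"posterior probability of sim:    {odds/(1+odds):.5f}")  # ~0.99999
```

The exact output obviously depends entirely on the assumed inputs; the point is only that a likelihood ratio of this size swamps any reasonable prior, which is why the failure modes below matter so much.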


Why could this be wrong?


I can think of four general ways in which this argument could go wrong:


1) Our position in the history of the universe is not that unlikely.

2) We would expect to see something else if we were in one of the aforementioned simulations.

3) There are other, more likely situations we should expect to find ourselves in if we were in a simulation created by an AI.

4) My anthropics are flawed.


I'm most confused about the first one. Everyone has some things in their life that are very exceptional by pure chance. I'm sure there's some way to deal with this in statistics, but I don't know it. In the interest of my own time I'm not going to elaborate further on these failure modes and will leave that to the commenters.


Conclusion

Is this argument flawed, or has it been discussed elsewhere? If so, please point me to it. If it does make sense, what are the implications for those most intimately involved with the creation of superhuman AI?


Appendix


My friend Matiss Apinis (othercenterism) put the first premise like this:


“[…] it's impossible to grasp that in some corner of the Universe there could be this one tiny planet that just happens to spawn replicators that over billions of painful years of natural selection happen to create vast amounts of both increasingly intelligent and sentient beings, some of which happen to become just intelligent enough to soon have one shot at creating this final invention of god-like machines that could turn the whole Universe into either a likely hell or unlikely utopia. And here we are, a tiny fraction of those almost "just intelligent enough" beings, contemplating this thing that's likely to happen within our lifetimes and realizing that the chance of either scenario coming true may hinge on what we do. What are the odds?!"

21 comments


comment by turchin · 2016-03-05T23:10:54.127Z · LW(p) · GW(p)

I agree with the main premises of this text.

That is: the fact that I am in a special position should raise my estimate that I am in a simulation. And any AI would have, as an instrumental goal, the creation of millions of simulations, both to numerically solve the Fermi paradox by modeling different civilizations near the time of global risks and to model different AI goal systems near the time of AI creation.

But now I will try a different type of reasoning which may be used against such logic. Let's consider the following example: "Given that my name is Alex, what is the probability that my name is Alex?" Of course, it's 1.

Given that I am interested in AI, what is the probability that I know about the simulation argument? It's high, almost 1. And given that I know about the simulation argument, what is the probability that I think I am in a simulation? Also high. So it is not surprising that I estimate it as high, if I am already in this field.

The core of this objection is that not only are you special, but everybody is special, each within their own belief system. Like "Given that I believe in the god Zeus, it makes Zeus more likely to be real". Because we have many people, and everybody thinks that their belief system is special, there is nothing special about any belief system.

I am not sure that this line of reasoning cancels our conclusion that we may be inside a simulation.

Replies from: Gunnar_Zarncke, SoerenMind, SoerenMind
comment by Gunnar_Zarncke · 2016-03-06T09:16:27.464Z · LW(p) · GW(p)

Also the question arises of what to derive from the result 'I am likely living in a simulation', especially the 'likely' part. After all, some people making these inferences may still be wrong.

Replies from: turchin
comment by turchin · 2016-03-06T09:43:30.239Z · LW(p) · GW(p)

The only predictive consequence is that miracles are more probable in simulations, though not guaranteed. It also means that you are more likely to experience improbable events in the future.

If I am part of the AI safety crowd and live in a simulation, it is more likely that I will actually participate in programming the first AI.

There are many different types of possible simulations. I have, as usual, created a map of simulations. It is here: "Simulations Map: what is the most probable type of the simulation in which we live?" http://lesswrong.com/lw/mv0/simulations_map_what_is_the_most_probable_type_of/

comment by SoerenMind · 2016-03-22T10:12:04.303Z · LW(p) · GW(p)

I guess an answer to "Given that my name is Alex, what is the probability that my name is Alex?" could be that the hypothesis is highly selected. When you're still the soul that'll be assigned to a body, looking at the world from above, this guy named Alex won't stick out because of his name. But the people who will influence the most consequential event in the history of that world will.

Replies from: turchin
comment by turchin · 2016-03-22T22:08:06.190Z · LW(p) · GW(p)

I think that there are many domains where people think that they "will influence the most consequential event in the history of that world". So the right question is: what is the probability that a random person interested in changing the world thinks that he is able to participate in it? Almost 1.

comment by SoerenMind · 2016-03-06T17:31:02.366Z · LW(p) · GW(p)

"The core of this objection is that not only you are special, but that everybody is special"

Is your point sort of the same thing I'm saying with this? "Everyone has some things in their life that are very exceptional by pure chance. I’m sure there’s some way to deal with this in statistics but I don’t know it."

Replies from: turchin
comment by turchin · 2016-03-06T18:07:37.217Z · LW(p) · GW(p)

Yes, I meant the same. But the fact that we are in the near-AI crowd could outweigh that bias.

comment by James_Miller · 2016-03-06T00:44:39.878Z · LW(p) · GW(p)

Consider three types of universes: those where life never develops; those where life develops and there is no great filter, so paperclip maximizers quickly make it impossible for new life to develop after a short period; and those where life develops and there is a great filter that destroys civilizations before paperclip maximizers get going. Most observers like us will live in the third type of universe. And almost everyone who thinks about anthropics will live at a time close to when the great filter hits.

Replies from: jacob_cannell, turchin
comment by jacob_cannell · 2016-03-07T20:49:26.755Z · LW(p) · GW(p)

Consider three types of universes ...

You are privileging your hypothesis - there are vastly more types of universes ...

There are universes where life develops and civilizations are abundant, and all of our observations to date are compatible with the universe being filled with advanced civs (which probably become mostly invisible to us given current tech as they approach optimal physical configurations of near zero temperature and tiny size).

There are universes like the above where advanced civs spawn new universes to gain god-like 'magic' anthropic powers, effectively manipulating/rewriting the laws of physics.

Universes in these categories are both more aggressive/capable replicators - they create new universes at a higher rate, so they tend to dominate any anthropic distribution.

And finally, there are considerations where the distribution over simulation observer moments diverges significantly from original observer moments, which tends to complicate these anthropic considerations.

For example, we could live in a universe with lots of civs, but they tend to focus far more simulations on the origins of the first civ or early civs.

comment by turchin · 2016-03-06T09:49:48.046Z · LW(p) · GW(p)

While that is true (it is Katja Grace's Doomsday argument in a nutshell), it doesn't take into account the possibility of simulations. But most paperclippers will create many instrumental simulations, and in that case we are in one.

Replies from: James_Miller, Tyrin
comment by James_Miller · 2016-03-06T16:08:18.502Z · LW(p) · GW(p)

"But most paperclipers will create many instrumental simulations,"

I don't see this. They would solve science and almost certainly not make use of biological processes and so have no need to simulate us. The wisdom of nature would offer them nothing of value.

Replies from: turchin
comment by turchin · 2016-03-06T18:05:58.972Z · LW(p) · GW(p)

Each AI needs to create at least several million simulations in order to estimate the distribution of other AIs in the universe and their most probable goal systems. Probably it will model only part of the ancestor history (something like only LessWrong members).

Replies from: James_Miller
comment by James_Miller · 2016-03-06T20:12:57.599Z · LW(p) · GW(p)

Excellent point. I agree. So the more we talk about AIs the greater our mind's measure? My young son has the potential to be an excellent computer programmer. The chance that your theory is true should raise the odds that he will end up working on AI because AIs will make more simulations involving me if my son ends up working on creating AI.

Replies from: turchin
comment by turchin · 2016-03-06T20:35:31.760Z · LW(p) · GW(p)

I think that ultimate reality is more complex, and something like this holds: each mind naturally evolves into maximum measure (in its own branch of the universe). I would need to write a long and controversial post to show it, but it would combine ideas from anthropics, simulation and quantum immortality.

In short: if QI works, the most probable way for me to become immortal is to become a strong AI by self-upgrade. And the fact that I find myself near such a possibility is not a coincidence, because measure is not evenly distributed between observers; more complex and conscious observers are more likely. (It is more probable to find oneself a human than an ant.) This argument itself has two versions: linear, and (less probable) quantum. Some people at MIRI have spoken about the same ideas informally, so now I believe that I am not totally crazy )))

comment by Tyrin · 2016-03-06T10:39:05.543Z · LW(p) · GW(p)

I had exactly the same insight as James_Miller a couple of days ago. Are you sure this is Grace's Doomsday argument? Her reasoning seems to be rather along the lines that it is more likely that we'll be experiencing a late Great Filter (argued via SIA, which I'm not familiar with). The idea here is rather that for life to be likely to exist for a prolonged time there has to be a late Great Filter (like space travel being extremely difficult, or UFAI), because otherwise paperclippers would quickly conquer all of space (at least in universes like ours where all points in space can be travelled to in principle).

Replies from: turchin
comment by turchin · 2016-03-06T11:16:03.901Z · LW(p) · GW(p)

Yes, I now see the difference: "where life develops and there is a great filter that destroys civilizations before paperclip maximizers get going."

But I understand it to mean that the great filter is something that usually happens during a civilization's technological development, before it creates AI. For example, nuclear wars and bio-catastrophes are so likely that no civilization survives until the creation of strong AI.

It doesn't contradict Katja's version, which only claims that the GF is in the future. It is still in the future. https://meteuphoric.wordpress.com/2010/03/23/sia-doomsday-the-filter-is-ahead/

comment by jimrandomh · 2016-03-05T23:23:20.245Z · LW(p) · GW(p)

"We live in a simulation" and "we live in not-a-simulation" are not mutually exclusive.

Replies from: turchin
comment by turchin · 2016-03-05T23:27:46.832Z · LW(p) · GW(p)

Because of what?

Something like: we don't exist at all, because we are Boltzmann brains?

Replies from: Furcas
comment by Furcas · 2016-03-06T02:21:35.474Z · LW(p) · GW(p)

I think Jim means that if minds are patterns, there could be instances of our minds in a simulation (or more!) as well as in the base reality, so that we exist in both (until the simulation diverges from reality, if it ever does).

comment by plex (ete) · 2016-03-12T13:31:57.885Z · LW(p) · GW(p)

Fun next question: Assuming this line of reasoning holds, what does it mean for EA?

comment by MrMind · 2016-03-08T09:19:11.103Z · LW(p) · GW(p)

P(involved in AI* | ¬sim) = very low

This is the old "choose a number between 1 and a googolplex with uniform probability". Given the prior information, even though the probability of any particular number coming up is very low, it is nonetheless not surprising that some number came up. Indeed: P(anything | ¬sim) = very low.

P(involved in AI | sim) = high

This is the part that I find less convincing. I don't see any reason to waste so much effort simulating entire, inefficient minds in order to investigate anything.