Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife

post by crmflynn · 2015-11-02T23:03:50.582Z · LW · GW · Legacy · 84 comments

This is a bit rough, but I think that it is an interesting and potentially compelling idea. To keep this short, and accordingly increase the number of eyes on it, I have only sketched the bare bones of the idea.

     1)      Empirically, people have varying intuitions and beliefs about causality, particularly in Newcomb-like problems (http://wiki.lesswrong.com/wiki/Newcomb's_problem, http://philpapers.org/surveys/results.pl, and https://en.wikipedia.org/wiki/Irresistible_grace).

     2)      Also, as an empirical matter, some people believe in taking actions after the fact, such as one-boxing, or Calvinist “irresistible grace”, to try to ensure or conform to a seemingly already determined outcome. This might be out of a sense of retrocausality, performance, moral honesty, etc. What matters is that we know they will act it out, despite its violating common-sense causality. There has been some great work on decision theory on LW aimed at threading this needle well.

     3)      The second disjunct of the simulation argument (http://wiki.lesswrong.com/wiki/Simulation_argument) shows that humanity's decision-making is evidentially relevant to what our subjective credence should be that we are in a simulation. That is to say, if we are actively headed toward making simulations, we should increase our credence that we are in a simulation; if we are actively headed away from making simulations, whether through existential risk or through law/policy against them, we should decrease that credence.

      4)      Many, if not most, people would like for there to be a pleasant afterlife after death, especially if we could be reunited with loved ones.

     5)      There is no reason to believe that simulations which are otherwise nearly identical copies of our world could not contain, after the simulated bodily death of the participants, an extremely long-duration, though finite, "heaven"-like afterlife shared by simulation participants.

     6)      Our heading toward creating such simulations, especially ones capable of nesting further simulations, should increase our credence that we exist in such a simulation, and we should perhaps expect a heaven-like afterlife of long, though finite, duration.

     7)      Those who believe in alternative causality, or retrocausality, in Newcomb-like situations should be especially excited about the opportunity to push the world toward surviving, allowing these types of simulations, and creating them. By analogy, if they work toward creating simulations with heaven-like afterlives, they might in some sense be “causing” such a heaven to exist for themselves, and even for friends and family who have already died. Such an idea of life after death, and especially of being reunited with loved ones, can be extremely compelling.

     8)      I believe that people matching the above description exist: people who hold an intuition in alternative causality and who also find such a heaven-like afterlife compelling. Further, the existence of such people, and their associated motivation to try to create such simulations, should increase even the credence of two-boxing types that we already live in such a world with a heaven-like afterlife. This is because knowledge of a motivated minority desiring simulations should increase credence in the likely success of simulations. It is essentially showing, from the two-box perspective, that “this probably happened before, one level up.”

     9)      As an empirical matter, I also think there are people who would find the idea of creating simulations with heaven-like afterlives compelling even if they are not one-boxers, from a simply altruistic perspective. It is a nice thing to do for the future sim people, who can, for example, probabilistically have a much better existence than biological children on earth can. It is also a nice thing to do for both the one-boxers and the two-boxers in our world, increasing their credence (and emotional comfort) in thinking that there might be a life after death.

     10)   This creates the opportunity for a secular movement in which people work toward creating these simulations, and use this work and its potential success to derive comfort and meaning from their lives. For example, one might make a donation to a think tank that creates or promotes simulations, or that works to avoid existential threats, after a loved one’s death: partially symbolically, partially hopefully.

     11)   There is at least some room for Pascalian considerations, even for two-boxers who allow for some humility in their beliefs. Nozick believed one-boxers would become two-boxers if the amount in Box A were raised to $900,000, and two-boxers would become one-boxers if it were lowered to $1. Similarly, working toward these simulations, even if you do not find it altruistically compelling, and even if you think the odds of alternative causality or retrocausality are infinitesimally small, might make sense in that the reward could be extremely large, including potentially trillions of lifetimes’ worth of time spent in an afterlife “heaven” with friends and family.

Finally, this idea might be one worth filling in (I have been filling it in, in my private notes, for over a year, but am a bit shy to debut all of that just yet; even working up the courage to post this was difficult) if only because it is interesting, and could be used as a hook to get more people interested in existential risk, including the AI control problem. This is because existential catastrophe is probably the greatest enemy of credence in the future of such simulations, and accordingly of our reasonable credence in thinking that we have such a heaven awaiting us after death now. A short hook headline like “avoiding existential risk is key to afterlife” can get a conversation going. I can imagine Salon, etc. taking another swipe at it, and in doing so creating publicity which would help in finding more similarly minded folks to get involved in the work of MIRI, FHI, CEA, etc. There are also some really interesting ideas about acausal trade, and game theory between higher and lower worlds, as a form of “compulsion” in which higher worlds punish lower ones for not creating heaven-containing simulations (thereby affecting their credence as observers of the simulation), in order to reach an equilibrium in which simulations with heaven-like afterlives are universal, or nearly so. More on that later if this is received well.

Also, if anyone would like to join me in researching, bull-sessioning, or writing about this stuff, please feel free to IM me. And if anyone has a really good, non-obvious pin with which to pop my balloon, preferably in a gentle way, it would be really appreciated: I am spending a lot of energy and time on this, and want to know if it is fundamentally flawed in some way.

Thank you.

*******************************

November 11 Updates and Edits for Clarification

     1)      There seems to be confusion about what I mean by self-location and credence. A good way to think about this is the Sleeping Beauty Problem (https://wiki.lesswrong.com/wiki/Sleeping_Beauty_problem).

If I imagine myself as Sleeping Beauty (and who doesn’t?), and I am asked on Sunday what my credence is that the coin will be tails, I will say 1/2. If I am awakened during the experiment without being told which day it is and am asked what my credence is that the coin was tails, I will say 2/3. If I am then told it is Monday, I will update my credence to 1/2. If I am told it is Tuesday, I will update my credence to 1. If someone asks me two days after the experiment about my credence that it was tails, and I somehow still do not know the day of the week, I will say 1/2. Credence changes with where you are, and with what information you have. As we might be in a simulation, we are somewhere in the “experiment days,” and information can help orient our credence. As humanity potentially has some say in whether or not we are in a simulation, information about how humans make decisions about these types of things can and should affect our credence.
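The credences above can be reproduced by simply enumerating the possible awakenings and counting them equally (this is the "thirder" accounting; halfers dispute the counting rule, not the arithmetic). A minimal sketch:

```python
# Thirder credences in the Sleeping Beauty problem, by enumeration.
# Heads -> woken once (Monday); tails -> woken twice (Monday and Tuesday).
from fractions import Fraction

awakenings = [("heads", "Mon"), ("tails", "Mon"), ("tails", "Tue")]

def credence_tails(known_day=None):
    """P(tails | awakened, optionally told the day), counting awakenings equally."""
    relevant = [a for a in awakenings if known_day is None or a[1] == known_day]
    tails = [a for a in relevant if a[0] == "tails"]
    return Fraction(len(tails), len(relevant))

print(credence_tails())        # 2/3: awakened, day unknown
print(credence_tails("Mon"))   # 1/2: told it is Monday
print(credence_tails("Tue"))   # 1:   told it is Tuesday, so it must be tails
```

The analogy in the text is that learning facts about humanity's trajectory toward (or away from) building simulations plays the same role as being told which day it is.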

Imagine Sleeping Beauty is a lesswrong reader. If Sleeping Beauty is unfamiliar with the simulation argument, and someone asks her about her credence of being in a simulation, she probably answers something like 0.0000000001% (all numbers for illustrative purposes only). If someone shows her the simulation argument, she increases to 1%. If she stumbles across this blog entry, she increases her credence to 2%, and adds some credence to the additional hypothesis that it may be a simulation with an afterlife. If she sees that a ton of people get really interested in this idea, and start raising funds to build simulations in the future and to lobby governments both for great AI safeguards and for regulation of future simulations, she raises her credence to 4%. If she lives through the AI superintelligence explosion and simulations are being built, but not yet turned on, her credence increases to 20%. If humanity turns them on, it increases to 50%. If there are trillions of them, she increases her credence to 60%. If 99% of simulations survive their own run-ins with artificial superintelligence and produce their own simulations, she increases her credence to 95%. 

2)  This set of simulations does not need to recreate the current world or any specific people in it. That is a different idea, and it is not necessary to this argument. As written, the argument is premised on the idea of creating fully unique people. The point would be to increase our credence that we are functionally identical in type to the unique individuals in the simulation. This is done by creating ignorance or uncertainty in simulations, so that the majority of people similarly situated, in a world which may or may not be in a simulation, are in fact in a simulation. This should, in our ignorance, increase our credence that we are in a simulation. The point is about how we self-locate, as discussed in the original article by Bostrom. It is a short 12-page read, and if you have not read it yet, I would encourage it: http://simulation-argument.com/simulation.html.

The point I was making about past loved ones was to raise the possibility that the simulations could be designed to transfer people to a separate afterlife simulation, where they could be reunited after dying in the first part of the simulation. This was not about trying to create something for us to upload ourselves into, along with attempted replicas of dead loved ones. Staying in one simulation through two phases, a short life and a relatively long afterlife, also has the advantage of circumventing the teletransportation paradox, as “all of the person” can be moved into the afterlife part of the simulation.

 

84 comments

Comments sorted by top scores.

comment by Kyre · 2015-11-03T05:12:01.247Z · LW(p) · GW(p)

A short hook headline like “avoiding existential risk is key to afterlife” can get a conversation going. I can imagine Salon, etc. taking another swipe at it, and in doing so, creating publicity which would help in finding more similarly minded folks to get involved in the work of MIRI, FHI, CEA etc. There are also some really interesting ideas about acausal trade ...

Assuming you get good feedback and think that you have interesting, solid arguments ... please think carefully about whether such publicity helps the existential risk movement more than it harms it. On the plus side, you might get people thinking about existential risk who otherwise would not have. On the minus side, most people aren't going to understand what you write, and some of the ones that half-understand it are going to loudly proclaim it as more evidence that MIRI etc. are full of insane apocalyptic cultists.

Replies from: crmflynn
comment by crmflynn · 2015-11-04T01:41:29.812Z · LW(p) · GW(p)

I would not worry about that for three reasons: 1) I am very shy online. Even posting this took several days and I did not look at the comments for almost a day after. 2) I am bringing this here first to see if it is worth considering, and also because I want input not only on the idea, but on the idea of spreading it further. 3) I would never identify myself with MIRI, etc. not because I would not want to be identified that way, but because I have absolutely not earned it. I also give everyone full permission to disavow me as a lone crackpot as needed should that somehow become a problem. That said, thank you for bringing this up as a concern. I had already thought about it, which is one of the reasons I was mentioning it as a tentative consideration for more deliberation by other people. That said, had I not, it could have been a problem. A lot of stuff in this area is really sensitive, and needs to be handled carefully. That is also why I am nervous to even post it.

All of that said, I think I might make another tentative proposal for further consideration. I think that some of these ideas ARE worth getting out there to more people. I have been involved in international NGO work for over a decade, studied it at university, and have lived and worked in half a dozen countries doing this work, and had no exposure to Effective Altruism, FHI, Existential Risk, etc. I hang out in policy/law/NGO circles, and none of my friends in these circles talk about it either. These ideas are not really getting out to those who should be exposed to them. I found EA/MIRI/Existential Risk through the simulation argument, which I read about on a blog I found off of reddit while clicking around on the internet about a year ago. That is kind of messed up. I really wish I had stumbled onto it earlier, and I tentatively think there is a lot of value in making it easier for others to stumble onto it in the future. Especially policy/law types, who are going to be needed at some point in the near future anyway.

I also feel that the costs of people thinking that people have “weird ideas” should probably be weighed against the benefits of flying the flag for other like-minded people to see. For the most part, people not liking other people is not much different from them not knowing about them, but having allies and fellow-travelers adds value. It is more minds attacking difficult problems from more angles, more policy makers listening when it is time to make some proposals, and more money finding its way into MIRI/FHI/etc. It might be worth trying to make existential risk a more widely known concern, a bit like climate change. It would not necessarily even have to water down LW, as those interested in the LW approach will come here, and those from other backgrounds, especially less technical backgrounds, will find lateral groups. In climate change now, there are core scientists, scientists who dabble, and a huge group of activist types/policy people/regulators with little to no interest in the science who are sort of doing their own thing laterally to the main guys.

comment by SodaPopinski · 2015-11-06T17:30:42.860Z · LW(p) · GW(p)

This is a really fascinating idea, particularly the aspect that we can influence the likelihood we are in a simulation by making it more likely that simulations happen.

To boil it down to a simple thought experiment. Suppose I am in the future where we have a ton of computing power and I know something bad will happen tomorrow (say I'll be fired) barring some 1/1000 likelihood quantum event. No problem, I'll just make millions of simulations of the world with me in my current state so that tomorrow the 1/1000 event happens and I'm saved since I'm almost certainly in one of these simulations I'm about to make!

Replies from: crmflynn
comment by crmflynn · 2015-11-10T09:27:38.113Z · LW(p) · GW(p)

This is a really fascinating idea, particularly the aspect that we can influence the likelihood we are in a simulation by making it more likely that simulations happen.

Maybe? We can increase our credence, but I think whether or not it increases the likelihood is an open question. The intuitions seem to split between two-boxers and a subset of one-boxers.

That said, thank you for the secondary thought experiment, which is really interesting.

comment by turchin · 2015-11-04T14:05:16.313Z · LW(p) · GW(p)

I agree with your logic; I have been thinking about simulated afterlives and have put the idea in my simulations map. The main problem here is the Copernican principle: if heavenly simulations dominate the simulation landscape, I will more probably find myself already in heaven, not in real life. But maybe I am already in heaven and am just playing a role game about the Singularity.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-06T00:58:36.392Z · LW(p) · GW(p)

If heavenly simulations dominate the simulation landscape, I will more probably find myself already in heaven, not in real life

That's a good thing. There is one copy of us in the basement universe which actually creates the heaven for the rest of us. As we can never know which version we are, it doesn't really matter which version is in the basement universe.

comment by jacob_cannell · 2015-11-04T00:45:40.292Z · LW(p) · GW(p)

This has basically been my belief system for a while - we could call it simulism perhaps. These memes are also old. Tipler proposed the whole 'simulation implementing afterlife' idea a few decades ago, although his particular implementation ideas involved emulations at the end of time and questionable physics. Despite that, the general idea of mind uploading into a virtual afterlife appears to be pretty mainstream now in transhumanist thought (i.e. Turing Church).

I think it's fun stuff to discuss, but it has a certain stigma and is politically unpopular to some extent with the x-risk folks. I suspect this may have to do with Tipler's heavily Christian religious spin on the whole thing. Many futurists were atheists first and don't much like the suspicious overlap with Christian memes (resurrection, supernatural creators, 'saving' souls, etc.)

A short hook headline like “avoiding existential risk is key to afterlife” can get a conversation going.

This could be a good conversational hook, but technically I am not so certain this is true. In general the key to afterlife is more likely something like "do that which your future descendants/simulators would most reward you for", which has much in common with "do god's will". If you believe that global x-risks are large and you could have a large impact there, then sure that has very high value. But assessing global x-risks is difficult.

Also, minimizing x-risk is not the same as maximizing future utility. For example, there are many potential scenarios where very little of the potential sim capacity is used, even though they aren't x-risk style disasters. There are also local considerations which may dominate for most people - resurrection depends on future generosity which is highly unlikely to be uniform and instead will follow complex economics. "Be a good, interesting, and future important person" may trump x-risk for many people that can't contribute to x-risk much directly.

Replies from: crmflynn
comment by crmflynn · 2015-11-04T02:19:09.712Z · LW(p) · GW(p)

When you say you believe this, do you mean you believe it to be the case, or you believe it to be a realistic possibility?

I stumbled across Tipler when reading up on the simulation argument, and it inspired further “am I being a crackpot” self-doubt, but I don’t think this argument looks much like his. Also, I am not really trying to promote it so much as to feel it out. I have not yet found any reason to think I am wrong about it being a possibility, though I myself do not “feel” it to be likely. That said, with stuff like this, I have no sense that intuitions would tell me anything useful either.

“Despite that, the general idea of mind uploading into virtual afterlife appears to be pretty mainstream now in transhumanist thought (ie Turing Church).”

Yeah, it comes up in “Superintelligence” and some other things I have read too. The small difference, if there is one, is that this looks backwards, and could be a way to collect those who have already died, and also could be a way to hedge bets for those of us who may not live long enough for transhumanism. It also circumvents the teletransportation paradox and other issues in the philosophy of identity. Also, even when not being treated as a goal, it seems to have evidential value. Finally, there are some acausal trade considerations, and considerations with “watering down” simulations through AI “thought crimes,” that can be considered once this is brought in. I will probably post more of my tentative thoughts on that later.

“I think it's fun stuff to discuss, but it has a certain stigma and is politically unpopular to some extent with the x-risk folks. I suspect this may have to do with Tipler's heavily Christian religious spin on the whole thing. Many futurists were atheists first and don't much like the suspicious overlap with Christian memes (resurrection, supernatural creators. 'saving' souls, etc)”

The idea of posting about something that is unpopular on such an open-minded site is one of the things that makes me scared to post online. Transhumanism, AI risk (“like the Terminator?”), one-boxing the Newcomb Paradox, LW seems pretty good at getting past some initial discomfort to dig deeper. I had actually once heard a really short thing about “The Singularity” on the radio, which could have been a much earlier introduction to all this, but I sort of blew it off. Stuff like my past flippancy makes me inclined to avoid trusting my gut, and superficial reasons to ignore something, and to try to take a really careful approach to deconstructing arguments. I am also atheist, and grew up very religiously Christian, so I think I also have a strong suspicion and aversion to its approach. But again, I try not to let superficial or familial similarity to things interrupt a systematic approach to reality. I am currently trying to transition from doing on-the-ground NGO work in developing countries in order to work on this stuff. My gut hates this, and my availability bias is doing backflips, but I think that this stuff might be too important to take the easy way out of it.

Also, your point about the hook is absolutely correct. I was sort of trying to imitate the “catchy” salon/huffpost/buzzfeed headline that would try to draw people in. “Ten Ways Atheists Go to Heaven, You Won’t Believe #6!” It was also meant a bit self-deprecatingly.

“There are also local considerations which may dominate for most people - resurrection depends on future generosity which is highly unlikely to be uniform and instead will follow complex economics. "Be a good, interesting, and future important person" may trump x-risk for many people that can't contribute to x-risk much directly.”

Yeah, there is a lot here. What is so weird about the second disjunct is that it means that we sort of do this, or fail at this, as a group. And it means that, while lying on my deathbed, my evaluation of how well we are doing as a species is going to play directly on my credence of what, if anything, comes next. It’s strange, isn’t it? That said, it is also interesting that, even if we somehow knew that existential risk would not be a problem in our lifetime, with this, there is a purely selfish reason to donate to FHI/MIRI. In fact, with the correct sense of scale, with high enough odds and marginal benefit to donations, it could be the economically rational thing to do.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-05T19:15:55.759Z · LW(p) · GW(p)

When you say you believe this, do you mean you believe it to be the case, or you believe it to be a realistic possibility?

Well naturally I believe the latter, but I also believe the former in the sense of being more likely true than not.

I stumbled across Tipler when reading up on the simulation argument, and it inspired further “am I being a crackpot” self-doubt, but I don’t think this argument looks much like his.

Tipler isn't a full crackpot. His earlier book with Barrow - the Anthropic Cosmological Principle - was important in a number of respects and influenced later thinkers such as Kurzweil and Bostrom.

Tipler committed to his particular physical cosmology which is now out of date in light of new observations. Cosmological artificial selection (evolution of physics over deep time via creation of new 'baby' universes by superintelligences) is far more likely. In any kind of multiverse, universes which reproduce will dominate in terms of observer measures.

considerations with “watering down” simulations through AI “thought crimes,” that can be considered once this is brought in.

Not sure what you mean by this.

The idea of posting about something that is unpopular on such an open-minded site is one of the things that makes me scared to post online.

Don't let that stop you. You can post it on your blog then discuss it here and elsewhere. LW discussion is more open-minded these days.

I am also atheist, and grew up very religiously Christian, so I think I also have a strong suspicion and aversion to its approach.

I was an atheist until I heard the sim argument and I then updated immediately.

It is interesting to look at the various world religions in light of Simulism and the Singularity. Some of the beliefs end up being inadvertently correct or even prescient.

For example, consider beliefs concerning burial vs cremation. It's roughly a 50/50 split across cultures/religions over time. Both are effective from a health/sanitation point of view, but burial is somewhat more expensive. Judeo-Christian religions all strongly believe in burial (cremation was actually outlawed in medieval Europe). Hinduism, on the other hand, strongly supports cremation.

In the standard (pre-singularity) atheist worldview, these are just arbitrary rituals.

However, we now know that this couldn't be farther from the truth. Burial preserves DNA for thousands, if not tens of thousands, of years. So at some point in the near future robots can extract all of that DNA and use it to help in resurrection simulations. Obviously having someone's DNA is just the beginning of the information that you need for mind reconstruction, but it's a very important first step.

There are a number of other features/beliefs like this that western religions (and xtianity strains in particular) probably got right: the general idea of a future hard eschatology (end of human history - rapture/singularity), resurrection of the dead, afterlife reward judgement by a future superintelligence, divinization/deification (humans becoming gods) ...

comment by Lumifer · 2015-11-03T18:34:13.434Z · LW(p) · GW(p)

I am not sure what the take-away from this idea is. If it is

should increase credence that we exist in such a simulation and should perhaps expect a heaven-like afterlife of long, though finite, duration

then, well, increasing credence from 0.0...001% to 0.0...01% is a jump by an order of magnitude, but it still doesn't move the needle leaving the probability in the "vanishingly small" realm.

If it is that we should strive to build such simulations, there are a few issues with this call to action, starting with the observation that at our technological level there isn't much we can do right now, and ending with the warning that if many people want to build Heavens, some people will want to build Hells as well.

Replies from: crmflynn, jacob_cannell
comment by crmflynn · 2015-11-04T01:16:42.308Z · LW(p) · GW(p)

Thank you for your comment, and for taking a skeptical approach towards this. I think that trying to punch holes in it is how we figure out if it is worth considering further. I honestly am not sure myself.

I think that my own thoughts on this are a bit like Bostrom's skepticism of the simulation hypothesis: I do not think it is likely, but I think it is interesting, and it has some properties I like. In particular, I like the “feedback loop” aspect of it being tied into metaphysical credence. The idea that the more people buy into an idea, the more likely it seems that it “has already happened” shows some odd properties of evidence. It is a bit like standing outside the room where people go to pick up the boxes that Omega dropped off. If I see someone walk out with two unopened boxes, I expect their net wealth has increased by ~$1,000; if I see someone walk out with one unopened box, I expect it has increased by ~$1,000,000. That is sort of odd, isn’t it? If I see a small, dedicated group of people working on how they would structure simulations, and raising money and trusts to push things a certain political way in the future (laws requiring that all simulated people get a minimum duration of afterlife meeting certain specifications, no AIs simulating human civilization for information-gathering purposes without “retiring” the people to a heaven afterward, etc.), I have more reason to think I might get a heaven after I die.

As far as the “call to action” goes, I hope that my post was not really read that way. I might have been clearer, and I apologize. I think that running simulations followed by an afterlife might be a worthwhile thing to do in the future, but I am not even sure it should be done, for many reasons. It is worth discussing. One could also imagine it being determined, if we overcome and survive the AI intelligence explosion with a good outcome, that creating more human lives, which are pleasant, throughout our cosmological endowment is a worthwhile goal. Sending off von Neumann probes to build simulations like this might be a live option. Honestly, it is an important question to figure out what we might want from a superintelligent AI, and especially whether we might want to not just hand it the question. Coherent extrapolated volition sounds like the best tentative idea, but one we need to be careful with. For example, an AI might only be able to produce such a “model” of what we want by running a large number of simulated worlds (to determine what we are all about). If we want simulated worlds to end with a “retirement” of the simulated people into a pleasant afterlife, we might want to specify it in advance; otherwise we are inadvertently reducing our credence in our own afterlife as well. Also, if there is an existing acausal trade regime on heaven simulations (this will be another post later), we might get in trouble for not conforming in advance.

As far as simulated hell goes, I think that fear of this possibility keeps the simulated heaven issue even more alive. Someone who would like a pleasant afterlife (which is probably almost all of us) might want to take efforts early to ensure that such an afterlife is the norm in cases of simulation, and that “hell” is absolutely not permitted. Also, the idea that some people might run bad afterlives should probably further motivate people to try to create as many good simulations as possible, to increase credence that “we” are in one of the good ones. This is like pouring white marbles into the urn to reduce the odds of drawing the black one. You see why the “loop” aspect of this can be kind of interesting, especially for one-boxer types, who try to “act out” the correct outcome after the fact. For one-boxers, this could be, from a purely and exclusively selfish perspective, the best thing they could possibly do with their lives. Increasing the odds of a trillion-life-duration afterlife of extreme utility from 0.001 to 0.01 might be very selfishly rational.
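The urn analogy above is just a ratio: adding "good" simulations is like adding white marbles, lowering the chance that a random draw (our own situation, under ignorance) is one of the bad ones. A minimal sketch with hypothetical counts:

```python
# The marble-urn analogy: under ignorance of which simulation we are in,
# credence of being in a "bad" one is the fraction of bad simulations.
# All counts are hypothetical, for illustration only.
def p_bad(bad_sims, good_sims):
    """Credence of a random draw landing on a bad simulation."""
    return bad_sims / (bad_sims + good_sims)

print(p_bad(1, 9))     # 0.1: one black marble among ten
print(p_bad(1, 999))   # 0.001: after many good simulations are added
```

This is why, on the argument's own terms, creating good simulations does not just add nice futures; it shifts where a self-locating observer should expect to find herself.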

I am not trying to "sell" this, as I have not even bought it myself; I am just sort of playing with it as a live idea. If nothing else, it seems like it might have some importance for considerations going forward. I think that people’s attitudes and approaches to religion suggest that this might be a powerful force for human motivation, and the second disjunct of the simulation argument shows that human motivation might have significant bearing both on our current reality and on our anticipated future.

Replies from: Lumifer
comment by Lumifer · 2015-11-04T15:55:04.990Z · LW(p) · GW(p)

I do not think it is likely, but I think it is interesting

Keep in mind that the "simulation hypothesis" is also known as "creationism". In particular it implies that there are beings who constructed the simulation, who are not bound by its rules, and who can change it at will. The conventional name for such beings is "gods".

idea ... shows some odd properties of evidence.

I would treat it as a category error: ideas are not evidence. Even if they look "evidence-like".

Also, the idea that some people might run bad afterlives should probably further motivate people to try to also create as many good simulations as possible, to increase credence that “we” are in one of the good ones.

Why would future superpowerful people be interested in increasing your credence?

Remember, this is ground well-trodden by theology. There the question is formulated as "Why doesn't God just reveal Himself to us instead leaving us in doubt?".

Replies from: crmflynn
comment by crmflynn · 2015-11-05T12:46:33.090Z · LW(p) · GW(p)

I think you and I might be missing one another. Or that I am at least missing your point. Accordingly, my responses below might be off point. Hopefully they are not.

“Keep in mind that the "simulation hypothesis" is also known as "creationism". In particular it implies that there are beings who constructed the simulation, who are not bound by its rules, and who can change it at will. The conventional name for such beings is "gods".”

I don’t think that necessarily follows. Creationism implies divinity, and “gods” implies something bigger than people who build a machine. Are your parents gods for creating you? In my own estimate, creating a simulation is like founding a sperm bank; you are not really “creating” anything, you are just moving pieces around in a way that facilitates more lives. You can mess around with the life and the world, but so can anyone in real life, especially if they have access to power, or guns, or, for that matter, a sperm bank. It is different in scale, but not in type. Then again, I might be thinking too highly of “gods”?

Also, I get the impression, and apologies if I am wrong, that you are mostly trying to show “family resemblance” with something many of us are skeptical of or dislike. I am atheist myself, and from a very religious background which leaves me wary. However, I think it is worth avoiding a “clustering” way of thinking. If you don’t want to consider something because of who said it, or because it vaguely or analogously resembles something you dislike, you can miss out on some interesting stuff. I think I avoided AI, etc. too long because I thought I did not really like “computer things” which was a mistake that cost me some great time in some huge, wide open, intellectual spaces I now love to run around in.

“I would treat is as a category error: ideas are not evidence. Even if they look "evidence-like"”

I might be missing what you are saying, but I do not think I was saying that ideas were evidence. I was saying a group of people rallying around an idea could be a form of evidence. In this case, the “evidence” is that a lot of people might want something. What this is evidence of is that their wanting something makes it more likely that it will come about. I am not sure how this would fail as evidence.

“Why would future superpowerful people be interested in increasing your credence?”

Two things: 1) They are not interested in the credence of people in the simulations, they are interested in their own credence. So if I live in a world that creates simulations, it makes me think it is more likely that I am in a simulation. If I know that 99% of all simulations are good ones, it makes me think I am more likely in a world with good simulations. If I know that 90% of simulations are terrible, I am more likely to think that I am in a terrible simulation. The odd thing is that people are sort of creating their own evidence. This is why I mentioned Calvinism and “irresistible grace” as analogy. Also Newcomb. Creating nice simulations in the hopes of being in one is like taking one box, or attending Calvinist church regularly and abiding by the doctrines. More to the point for people who two-box and roll their eyes at Calvinists, knowing that there are Calvinists means that we know that some people might try to make simulations in order to try to be in one.

2) I am not sure where “superpowerful” comes from here. I think you might be making assumptions about my assumptions. These simulations might be left unobserved. They might be made by von Neumann probes on distant Dyson spheres. I actually think that people motivated by one-boxing/Calvinist type interpretations are more likely to try to keep simulations unmolested.

“Remember, this is ground well-trodden by theology. There the question is formulated as "Why doesn't God just reveal Himself to us instead leaving us in doubt?".”

I don’t think the question is the same. In particular, I am not solving for “why has god not revealed himself” or even “why haven’t I been told I am in a simulation.” I am just pulling at the second disjunct and its implications. In particular I am looking at what happens if one-boxer types decide they want a simulated afterlife.

Why would people run simulations? Maybe research or entertainment (suggested in the original article). Maybe to fulfill (potentially imaginary) acausal trade conditions (I will probably post on this later). Maybe altruism. Maybe because they want to believe they are in a simulation, and so they make the simulation look just like their world looks, but add an afterlife. They do this in the hopes that it was done “above” them the same way, and they are in such a simulation. They do it in the hopes of being self-fulfilling, or performative, or for whatever reason people one-box and believe in Calvinism.

Replies from: Lumifer
comment by Lumifer · 2015-11-05T16:48:10.208Z · LW(p) · GW(p)

Creationism implies divinity, and gods implies something bigger than people who build a machine.

Not for the sims who live inside the machine. Let me recount once again the relevant features:

  • Beings who created the world and are not of this world
  • Beings who are not bound by the rules of this world (from the inside view they are not bound by physics and can do bona fide miracles)
  • Beings who can change this world at will.

These beings look very much like gods to me. The "not bound by our physics", in particular, decisively separates them from sims who, of course, do affect their world in many ways.

In this case, the “evidence” is that a lot of people might want something. What this is evidence of is that their wanting something makes it more likely that it will come about.

That it will come about, yes. That it is this way, no. But that's the whole causality/Newcomb issue.

So if I live in a world that creates simulations, it makes me think it is more likely that I am in a simulation.

Makes you think so, but doesn't make me think so. Again, this is the core issue here.

I am looking at what happens if one-boxer types decide they want a simulated afterlife.

One-boxers want it today, right now? Um, nothing happens.

Replies from: crmflynn
comment by crmflynn · 2015-11-10T10:06:58.352Z · LW(p) · GW(p)

I think that this sort of risks being an argument about a definition of a word, as we can mostly agree on the potential features of the set-up. But because I have a sense that this claim comes with an implicit charge of fideism, I’ll take another round at clarifying my position. Also, I have written a short update to my original post to clarify some things that I think I was too vague on in the original post. There is a trade-off between being short enough to encourage people to read it, and being thorough enough to be clear, and I think I under-wrote it a bit initially.

Beings who created the world and are not of this world

They did not really “create” this world so much as organized certain aspects of the environment. Simulated people still exist in a physical world, albeit as things in a computer. The fact that the world, as the simulated people conceive of it, is not what it appears to be happens to us as well when we dig into physics and everything becomes weird and unfamiliar. If I am in the environment of a video game, I do not think that anyone has created a different world; I just think that they have created a different environment by arranging bits of the pre-existing world.

Beings who are not bound by the rules of this world (from the inside view they are not bound by physics and can do bona fide miracles)

Is something a miracle if it can be made clear in physical terms how it happened? If there is a simulation, then its physics is a replica of physics, and “defying” it is not really any more miraculous than me breaking Mars off of a diorama of the solar system.

Beings who can change this world at will.

Everyone can do that. I do that by moving a cup of coffee from one place to another. In a more powerful sense, political philosophers have dramatically determined how humans have existed over the last 150 years. Human will shapes our existences a great deal already.

These beings look very much like gods to me. The "not bound by our physics", in particular, decisively separates them from sims who, of course, do affect their world in many ways.

I think that for you, “gods” emerge as a being grows in power, whereas I tend to think that divinity implies something different not just in scale, but in type. This might just be a trivial difference in opinion or definition or approach to something with no real relevance.

So if I live in a world that creates simulations, it makes me think it is more likely that I am in a simulation.

Makes you think so, but doesn't make me think so. Again, this is the core issue here.

I agree with you that this is the core issue. What I think you might be missing, though I could be wrong, is that I am agnostic on this point in the post, being careful to keep my own intuition out of it. I am not saying that one-boxers believing this necessarily has any effect on our current, existent reality. What I am saying is two things: 1) Some one-boxers think that it does, and accordingly will be more likely to push for simulations, and 2) Knowing that some people will be likely to push for simulations should make even two-boxers think that it is more likely we are in one. If the world were made up exclusively of two-boxers, it would be less likely that people would try to create simulations with heaven-like afterlives. If the world were all one-boxers, it would be more likely. As we are somewhere in between, our credence should be somewhere in between. This is just about making an educated guess about human nature based on how people interact with similar problems. Since human nature is potentially causal in whether or not there are simulations, information that changes our views on the likelihood of a decision one way or another on simulations is relevant to our credence.
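The "somewhere in between" claim can be sketched as a simple interpolation. Everything here is hypothetical: the two endpoint probabilities are invented, and the linear form is only the crudest way to mix the two pure cases.

```python
# Assumed probability that a civilization builds heaven-style simulations,
# interpolated between an all-two-boxer world and an all-one-boxer world.
# Both endpoint values are invented for illustration.
def p_builds_simulations(frac_one_boxers, p_all_two=0.01, p_all_one=0.6):
    return p_all_two + frac_one_boxers * (p_all_one - p_all_two)

# A mixed population lands strictly between the two pure cases.
mixed = p_builds_simulations(0.3)
```

Under the second disjunct, whatever raises this quantity should also raise our credence that we are in such a simulation ourselves, which is the only point the sketch is meant to make.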

I am looking at what happens if one-boxer types decide they want a simulated alterlife.

One-boxers want it today, right now? Um, nothing happens.

Whether one-boxers here, today, want it is not really the relevant consideration, especially to a two-boxer. However, if there are a lot of one-boxers, who make a lot of simulations, it should increase the two-boxer's credence that he or she is in a simulation created by a one-boxer “one level up.” As a two-boxer, the relevant thing is not that THESE one-boxers are causing anything, but that the existence of people who do this might suggest the existence of people who have done this before, “one level up.”

Replies from: Lumifer
comment by Lumifer · 2015-11-10T17:36:16.373Z · LW(p) · GW(p)

They did not really “create” this world so much as organized certain aspects of the environment. ... If I am in the environment of a video game, I do not think that anyone has created a different world, I just think that they have created a different environment by arranging bits of pre-existing world.

That's what creation is. The issue here is inside view / outside view. Take Pac-Man. From the outside, you arranged bits of existing world to make the Pac-Man world. From the inside, you have no idea that such things as clouds, or marmosets, or airplanes exist: your world consists of walls, dots, and ghosts.

and “defying” it is not really any more miraculous than me breaking the Mars off of a diorama of the solar system

Outside/inside view again. If I saw Mars arbitrarily breaking out of its orbit and careening off to somewhere, that would look pretty miraculous to me.

I think that for you, “gods” emerge as a being grows in power, whereas I tend to think that divinity implies something different not just in scale, but in type.

I agree about the difference in type. It is here: these beings are not of this world. The difference between you and a character in an MMORPG is a difference in type.

Re one/two-boxers, see my answer to the other post...

Replies from: crmflynn
comment by crmflynn · 2015-11-12T06:16:11.638Z · LW(p) · GW(p)

I agree with you about the inside / outside view. I also think I agree with you about the characteristics of the simulators in relationship to the simulation.

I think I just have a vaguely different, and perhaps personal, sense of how I would define "divine" and "god." If we are in a simulation, I would not consider the simulators gods. Very powerful people, but not gods. If they tried to argue with me that they were gods because they were made of a lot of organic molecules whereas I was just information in a machine, I would suggest it was a distinction without a difference. Show me the uncaused cause or something outside of physics and we can talk.

Replies from: Lumifer
comment by Lumifer · 2015-11-12T17:22:46.564Z · LW(p) · GW(p)

I would suggest it was a distinction without a difference.

There is a classic answer to this :-/

Show me the uncaused cause or something outside of physics and we can talk

In the context of the simulated world uncaused causes and breaking physics are easy. Hack the simulation, write directly to the memory, and all things are possible.

It's just the inside/outside view again.

comment by jacob_cannell · 2015-11-04T00:48:39.627Z · LW(p) · GW(p)

starting with the observation that at our technological level there isn't much we can do right now,

We live in a very special time - right on the cusp of AGI - so there is much that one can do right now. ;)

Replies from: Lumifer
comment by Lumifer · 2015-11-04T01:25:20.877Z · LW(p) · GW(p)

We live in a very special time - right on the cusp of AGI

AGI has been 20 years away for the past 50 years or so. I see no reason to believe the pattern will break any time now :-/

Replies from: jacob_cannell, crmflynn
comment by jacob_cannell · 2015-11-04T19:12:21.960Z · LW(p) · GW(p)

AGI has been 20 years away for the past 50 years or so.

No - AGI's arrival can be expected around the end of conventional Moore's Law, as that is naturally when we can expect to have brain level hardware performance. Before that AGI is impractical, shortly after that it becomes inevitable.

There are a large number of people making predictions; almost all of them have no idea what they are talking about. It is the logic behind the predictions that matters.

Replies from: Lumifer
comment by Lumifer · 2015-11-04T19:34:12.052Z · LW(p) · GW(p)

when we can expect to have brain level hardware performance

I don't think our progress in creating an AGI is constrained by hardware at this point. It's a software problem and you can't solve it by building larger and more densely packed supercomputers.

almost all of them have no idea what they are talking about

Yep :-)

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-05T18:27:11.703Z · LW(p) · GW(p)

I don't think our progress in creating an AGI is constrained by hardware at this point

That is arguably only now becoming true for the first time - as we approach the end of Moore's Law and device geometry shrinks to synapse-comparable sizes, densities, etc.

Still, current hardware/software is not all that efficient for the computations that intelligence requires - namely, enormous amounts of low-precision/noisy approximate computing.

It's a software problem and you can't solve it by building larger and more densely packed supercomputers.

Of course you can - it just wouldn't be economical. AGI running on a billion dollar super computer is not practical AGI, as AGI is AI that can do everything a human can do but better - which naturally must include cost.

It isn't a problem of what math to implement - we have that figured out. It's a question of efficiency.

Replies from: Lumifer
comment by Lumifer · 2015-11-05T18:45:52.222Z · LW(p) · GW(p)

AGI running on a billion dollar super computer is not practical

Why not? AGI doesn't involve emulating Fred the janitor, the first AGI is likely to have a specific purpose and so will likely have huge advantages over meatbags in the particular domain it was made for.

If people were able to build an AGI on a billion-dollar chunk of hardware right now they would certainly do so, if only as a proof of concept. A billion isn't that much money to a certain class of organizations and people.

It isn't a problem of what math to implement - we have that figured out.

Oh, really? I'm afraid I find that hard to believe.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-05T19:22:09.780Z · LW(p) · GW(p)

AGI running on a billion dollar super computer is not practical

Why not?

Say you have the code/structure for an AGI all figured out, but it runs in real-time on a billion dollar/year supercomputer. You now have to wait decades to train/educate it up to an adult.

Furthermore, the probability that you get the seed code/structure right on the first try is essentially zero. So rather obviously - to even get AGI in the first place you need enough efficiency to run one AGI mind in real-time on something far far less than a supercomputer.

It isn't a problem of what math to implement - we have that figured out.

Oh, really? I'm afraid I find that hard to believe.

Hard to believe only for those outside ML.

Replies from: V_V, Lumifer
comment by V_V · 2015-11-09T12:27:41.490Z · LW(p) · GW(p)

Hard to believe only for those outside ML

I don't think that even in ML the school of "let's just make a bigger neural network" is taken seriously.

Neural networks are prone to overfitting. All the modern big neural networks that are fashionable these days require large amounts of training data. Scale up these networks to the size of the human brain, and, even assuming that you have the hardware resources to run them, you will get something that just memorizes the training set and doesn't perform any useful generalization.

Humans can learn from comparatively small amounts of data, and in particular from very little and very indirectly supervised data: you don't have to show a child a thousand apples and push each time an "apple" button on their head for them to learn what an apple looks like.

There is currently lots of research in ML on how to make use of unsupervised data, which is cheaper and more abundant than supervised data, but this is still definitely an open problem, so much so that it isn't even clear what properties we want to model and how to evaluate these models (e.g. check out this recent paper).
Therefore, the math relevant to ML has definitely not all been worked out.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-09T18:33:28.102Z · LW(p) · GW(p)

I don't think that even in ML the school of "let's just make a bigger neural network" is taken seriously.

That's not actually what I meant when I said we have the math figured out. The math behind general learning is just general Bayesian inference in its various forms. The difficulty is not so much in the math; it is in scaling up efficiently.

To a first approximation the recent surge in progress in AI is entirely due to just making bigger neural networks. As numerous DL researchers have admitted - the new wave of DL is basically just techniques from the 80's scaled up on modern GPUs.

Regarding unsupervised learning - I wholeheartedly agree. However one should also keep in mind that UL and SL are just minor variations of the same theme in a bayesian framework. If you have accurate labeled data, you might as well use it.

In order to recognize and verbally name apples, a child must first have years of visual experience. Supervised DL systems trained from scratch need to learn everything from scratch, even the lowest level features. The object in these systems is not to maximize learning from small amounts of training data.

In the limited training data domain and more generally for mixed datasets where there is a large amount of unlabeled data, transfer learning and mixed UL/SL can do better.

properties we want to model and how to evaluate these models (e.g. check out this recent paper).

Just discussing that here.

The only real surprising part of that paper is the "good model, poor sampling" section. It's not clear how often their particular pathological special case actually shows up in practice. In general a solomonoff learner will not have that problem.

I suspect that a more robust sampling procedure could fix the mismatch. A robust sampler would be one that outputs samples according to their total probability as measured by encoding cost. This corrects the mismatch between the encoder and the sampler. Naively implemented this makes the sampling far more expensive, perhaps exponentially so, but nonetheless it suggests the problem is not fundamental.

Replies from: V_V
comment by V_V · 2015-11-09T19:47:00.909Z · LW(p) · GW(p)

That's not actually what I meant when I said we have the math figured out. The math behind general learning is just general Bayesian inference in its various forms. The difficulty is not so much in the math; it is in scaling up efficiently.

Ok, but this is even more vague, then. At least neural networks are a coherent class of algorithms, with lots of architectural variations and hyperparameters to tune, but still functionally similar. General Bayesian inference, on the other hand, is a broad framework with dozens of types of algorithms for different tasks, based on different assumptions and with different functional structure.

You could as well say that once we formulated the theory of universal computation and we had the first digital computers up and running, then we had all the math figured out and it was just a matter of scaling up things. This was probably the sentiment at the famous Dartmouth conference in 1956 where they predicted that ten smart people brainstorming for two months could make significant advancements in multiple fundamental AI problems. I think that we know better now.

Regarding unsupervised learning - I wholeheartedly agree. However one should also keep in mind that UL and SL are just minor variations of the same theme in a bayesian framework. If you have accurate labeled data, you might as well use it.

Supervised learning may be a special case of unsupervised learning but not the other way round. Currently we can only do supervised learning well, at least when big data is available. There have been attempts to reduce unsupervised learning to supervised learning, which had some practical success in textual NLP (with neural language models and word vectors) but not in other domains such as vision and speech.

The paper I linked, IMHO, may shed some light on why this happened: one of the most popular evaluation measures and training objectives, the negative log-likelihood (aka empirical cross-entropy), which captures well our intuition of what a good model must do in binary (or low-dimensional) classification tasks, may break down in the high-dimensional regime, typical of some unsupervised tasks such as sampling.
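The high-dimensional failure mode being discussed can be made concrete with a toy calculation (all numbers invented; this is a numeric sketch of the "good likelihood, bad samples" construction, with a Gaussian standing in for the data distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000                              # dimensionality (illustrative)
x = rng.standard_normal((200, d))     # "data": standard normal samples

def log_gauss(x, var):
    # log-density of an isotropic zero-mean Gaussian with variance `var`
    return -0.5 * (x**2).sum(axis=1) / var - 0.5 * d * np.log(2 * np.pi * var)

log_p = log_gauss(x, 1.0)             # the true model
# A "bad" model: 1% true model + 99% wide noise. Its samples are almost
# always garbage, yet q(x) >= 0.01 * p(x) everywhere, so its negative
# log-likelihood can lag the truth by at most log(100) ~ 4.6 nats...
log_q = np.logaddexp(np.log(0.01) + log_p,
                     np.log(0.99) + log_gauss(x, 100.0))

gap = (log_p - log_q).mean()          # per-sample gap: bounded by log(100)
total_nll = -log_p.mean()             # ...while total NLL grows like d
```

In high dimensions `total_nll` is on the order of a thousand nats while `gap` stays below 4.6, so the likelihood barely distinguishes the two models even though one of them samples noise 99% of the time.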

It's not clear how often their particular pathological special case actually shows up in practice.

I've never seen a modern generative model generate realistic samples of natural images or speech. Text generation fares somewhat better, but it's still far from anything able to pass a Turing test. By contrast, discriminative models for classification or regression trained on large supervised data can often achieve human-level or even super-human performances.

In general a solomonoff learner will not have that problem.

Well, duh, but a Solomonoff learner is uncomputable. Inside a Solomonoff learner there would be a simulation of every possible human looking at the samples, among an infinite number of other things.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-09T20:11:45.440Z · LW(p) · GW(p)

At least neural networks are a coherent class of algorithms, with lots of architectural variations and hyperparameters to tune, but still functionally similar. General Bayesian inference, on the other hand, is a broad framework with dozens of types of algorithms for different tasks, based on different assumptions and with different functional structure.

I don't agree with this memetic taxonomy. I consider neural networks to be mostly synonymous with algebraic tensor networks - general computational graphs over tensors. As such ANN describes a modeling language family, equivalent in expressibility to binary circuit models (and thus Turing universal) but considerably more computationally efficient. The tensor algebra abstraction more closely matches physical hardware reality.

So as a general computing paradigm or circuit model, ANNs can be combined with any approximate inference technique. Backprop on log-likelihood is just one obvious approximate method.

You could as well say that once we formulated the theory of universal computation and we had the first digital computers up and running, then we had all the math figured out

Not quite, because it took longer for the math for inference/learning to be worked out, and even somewhat longer for efficient approximations - and indeed that work is still ongoing.

Regardless, even if all the math had been available in 1956 it wouldn't have mattered, as they still would have had to wait 60 years or so for efficient implementations (hardware + software).

The paper I linked, IMHO, may shed some light on why this happened: one of the most popular evaluation measures and training objectives, the negative log-likelihood (aka empirical cross-entropy), which captures well our intuition of what a good model must do in binary (or low-dimensional) classification tasks, may break down in the high-dimensional regime, typical of some unsupervised tasks such as sampling.

To the extent that this is a problem in practice, it's a problem with typical sampling, not the measure itself. As I mentioned earlier, I believe it can be solved by more advanced sampling techniques that respect total KC/Solomonoff probability. Using these hypothetical correct samplers, good models should always produce good samples.

That being said I agree that generative modelling and realistic sampling in particular is an area ripe for innovation.

I've never seen a modern generative model generate realistic samples of natural images or speech.

You actually probably have seen this in the form of CG in realistic video games or films. Of course those models are hand crafted rather than learned probabilistic generative models. I believe that cross-fertilization of ideas/techniques from graphics and ML will transform both in the near future.

The current image generative models in ML are extremely weak when viewed as procedural graphics engines - for the most part they are just 2D image blenders.

comment by Lumifer · 2015-11-05T19:34:18.268Z · LW(p) · GW(p)

Say you have the code/structure for an AGI all figured out

How would you know that you have it "all figured out"?

the probability that you get the seed code/structure right on the first try is essentially zero

Err... didn't you just say that it's not a software issue and we have already figured out what math to implement? What's the problem?

Hard to believe only for those outside ML.

Right... build a NN a mile wide and a mile deep and let 'er rip X-/

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-06T00:31:00.434Z · LW(p) · GW(p)

Say you have the code/structure for an AGI all figured out

How would you know that you have it "all figured out"?

[Furthermore], the probability that you get the seed code/structure right on the first try is essentially zero

Err... didn't you just say that it's not a software issue and we have already figured out what math to implement? What's the problem?

No, I never said it is not a software issue - because the distinction between software/hardware issues is murky at best, especially in the era of ML where most of the 'software' is learned automatically.

You are trolling now - cutting my quotes out of context.

comment by crmflynn · 2015-11-04T02:46:12.751Z · LW(p) · GW(p)

I am not sure it matters when it comes. Presumably, unless we find some other way to extinction, it will come at some point. When it comes, it is likely that the technology will not be a problem for it. Once the technology exists, and probably before, we may need to figure out if and how we want to do simulations. If people have a clear, well-developed, and strong preference going into it (including potentially putting it into the AI as a requirement for its modeling of humanity, or it being a big enough “movement” to show up in our CEV), that will likely have a large effect on the odds of it happening. Also, I know some people who sincerely think belief in god is based almost exclusively on fear of death. I am skeptical of this, but if it is true, or even partially true, and if even a fraction of the fervor/energy/dedication that is put into religion were put into pushing for this, I think it might be a serious force.

The point about credence is just a point about it being interesting, decision making aside, that something as fickle as collective human will, might determine if I “survive” death, and if all my dead loved ones will as well. So, for example, if this post, or someone building off of my post, but doing it better, were to explode on LW and pour out into reddit and the media, it should increase our credence in an afterlife. If its reception is lukewarm, decrease it. There is something really weird about that, and worth chewing on.

Also, I think that people’s motivation to have an afterlife seems like a more compelling reason to create simulations than experimentation/entertainment, so it helps shift credence around among the disjuncts of the simulation argument.

Replies from: Lumifer
comment by Lumifer · 2015-11-04T16:02:38.497Z · LW(p) · GW(p)

Once the technology exists, and probably before, we may need to figure out if and how we want to do simulations.

Simulations of long-ago ancestors..?

Imagine that you have the ability to run a simulation now. Would you want to populate it by people like you, that is, fresh people de novo and possibly people from your parents and grandparents generations -- or would you want to populate it with Egyptian peasants from 3000 B.C.? Homo habilis, maybe? How far back do you want to go?

it should increase our credence in an afterlife

No, I don't think so. You're engaging in magical thinking. What you -- or everyone -- believes does not change the reality.

Replies from: gjm, crmflynn
comment by gjm · 2015-11-04T16:50:25.705Z · LW(p) · GW(p)

What you -- or everyone -- believes does not change the reality.

It can give evidence, though. Consider Hypothesis A: "Societies like ours will generally not decide, as their technological capabilities grow, to engage in massive simulation of their forebears" and Hypothesis B which omits the word "not". Then:

  • The decisions made by, and ideas widely held in, our society, can be evidence favouring A or B.
  • We are more likely simulations if B is right than if A is right.

Similarly if the hypotheses are "... to engage in massive simulation of their forebears, including blissful afterlives", in which case we are more likely to have blissful simulated afterlives if B is right than if A is right. (Not necessarily more likely to have blissful afterlives simpliciter, though -- perhaps, e.g., the truth of B would somehow make it less likely that we get blissful afterlives provided by gods.)

My opinion, for what it's worth, is that either version of A is very much more likely than either version of B for multiple reasons, and that widespread interest in ideas like the one in this post would give only very weak evidence for B over A. So enthusiastic takeup of the ideas in this post would justify at most a tiny increase in our credence in an afterlife.
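The shape of this argument can be sketched numerically. All numbers below are illustrative assumptions of mine, not figures from the thread: a small prior on B, and "enthusiastic takeup" treated as weak evidence with a likelihood ratio of about 2.

```python
# Toy numerical version of the A-vs-B argument above. Every number here
# is an illustrative assumption, not a figure anyone in the thread endorsed.
p_B = 0.01                # prior credence in hypothesis B (societies do simulate)
p_sim_given_B = 0.9       # if B is true, most minds like ours are simulated
p_sim_given_A = 0.001     # if A is true, almost none are

def p_sim(pB):
    """Credence that we are simulated, given credence pB in hypothesis B."""
    return pB * p_sim_given_B + (1 - pB) * p_sim_given_A

# Treat "enthusiastic takeup" as weak evidence for B: likelihood ratio ~2.
lr = 2.0
p_B_post = lr * p_B / (lr * p_B + (1 - p_B))

print(p_sim(p_B), p_sim(p_B_post))  # both remain small; the update is modest
```

Under these made-up numbers the credence in being simulated roughly doubles but stays tiny, which is the sense in which weak evidence for B justifies "at most a tiny increase" in afterlife credence.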

Replies from: V_V, crmflynn, Lumifer
comment by V_V · 2015-11-09T15:33:21.392Z · LW(p) · GW(p)

I think that the problem with this sort of argument is that it's like cooperating in the prisoner's dilemma and hoping that superrationality will make the other player cooperate: it doesn't work.

It seems that lots of people here conflate Newcomb's problem, which is a very unusual single-player decision problem, with prisoner's dilemma, which is the prototypical competitive game from game theory.

Also, I don't see why I should consider an accurate simulation of me, from my birth to my death, run after my real death, as a form of afterlife. How would it be functionally different from screening a movie of my life?

Replies from: gjm, crmflynn
comment by gjm · 2015-11-09T20:23:14.455Z · LW(p) · GW(p)

My understanding is that the proposal here isn't that an accurate simulation of your life should be counted as an afterlife; it's that a somewhat-accurate simulation of lots of bits of your life might be a necessary preliminary to providing you with an afterlife (because they'd be needed to figure out what your brain, or at least your mind, was like in order to recreate it in whatever blissful -- or for that matter torturous -- afterlife might be provided for you).

As for Newcomb versus prisoners' dilemma, see my comments elsewhere in the thread: I am not proposing that our decision whether to engage in large-scale ancestor simulation has any power to affect our past, only that it may provide some evidence bearing on what's likely to have been in our past.

Replies from: crmflynn, V_V
comment by crmflynn · 2015-11-10T11:21:44.175Z · LW(p) · GW(p)

the proposal here

I just want to clarify in case you mean my proposal, as opposed to the proposal by jacobcannell. This is my reading of what jacobcannell said as well, but it is not at all a part of my argument. In fact, while I would be interested in reading jacobcannell’s thoughts on identity and the self, I share the same skeptical intuitions as other posters in this thread about this. I am open to being wrong, but on first impression I have an extremely difficult time imagining that it will be at all possible to simulate a person after they have died. I suspect that it would be a poor replica, and certainly would not contain the same internal life as the person. Again, I am open to being convinced, but nothing about that makes sense to me at the moment.

I think that I did a poor job of making this clear in my first post, and have added a short note at the end to clarify this. You might consider reading it as it should make my argument clearer.

My proposal is far less interesting, original, or involved than this, and drafts off of Nick Bostrom’s simulation argument in its entirety. What I was discussing was making simulations of new and unique individuals. These individuals would then have an afterlife after dying in which they would be reunited with the other sims from their world to live out a subjectively long, pleasant existence in their simulation computer. There would not be any attempt to replicate anyone in particular or to “join” the people in their simulation through a brain upload or anything else. The interesting and relevant feature would be that the creation of a large number of simulations like this, especially if these simulations could and did create their own simulations like this too, would increase our credence that we were not actually at the “basement level” and instead were ourselves in a simulation like the ones we made. This would increase our credence that dead loved ones had already been shifted over into the afterlife just as we shift people in the sims over into an afterlife after they die. This also circumvents teletransportation concerns (which would still exist if we were uploading ourselves into a simulation of our own!) since everything we are now would just be brought over to the afterlife part of the simulation fully intact.

comment by V_V · 2015-11-09T22:43:54.492Z · LW(p) · GW(p)

My understanding is that the proposal here isn't that an accurate simulation of your life should be counted as an afterlife; it's that a somewhat-accurate simulation of lots of bits of your life might be a necessary preliminary to providing you with an afterlife (because they'd be needed to figure out what your brain, or at least your mind, was like in order to recreate it in whatever blissful -- or for that matter torturous -- afterlife might be provided for you).

Or they are just interested in the password needed to access the cute cat pictures on my phone. Seriously, we are in the realm of wild speculation; we can't say that the evidence points any particular way.

comment by crmflynn · 2015-11-10T10:59:27.620Z · LW(p) · GW(p)

I hope I am not intercepting a series of questions when you were only interested in gjm’s response but I enjoyed your comment and wanted to add my thoughts.

I think that the problem with this sort of arguments is that it's like cooperating in prisoner's dilemma hoping that superrationality will make the other player cooperate: It doesn't work.

I am not sure it is settled that it does not work, but I also do not think that most, or maybe any, of my argument relies on an assumption that it does. The first part of it does not even rely on an assumption that one-boxing is reasonable, let alone correct. All it says is that so long as some people play the game this way, as an empirical, descriptive reality of how they actually play, we are more likely to see certain outcomes in situations that look like Newcomb. This looks like Newcomb.

There is also a second argument further down that suggests that under some circumstances with really high reward, and relatively little cost, it might be worth trying to “cooperate on the prisoner’s dilemma” as a sort of gamble. This is more susceptible to game-theoretic counterpoints, but it is also not put up as an especially strong argument so much as something worth considering more.

It seems that lots of people here conflate Newcomb's problem, which is a very unusual single-player decision problem, with prisoner's dilemma, which is the prototypical competitive game from game theory.

I am pretty sure I am not doing that, but if you wanted to expand on that, especially if you can show that I am, that would be fantastic.

Also, I don't see why I should consider an accurate simulation of me, from my birth to my death, ran after my real death as a form of afterlife. How would it be functionally different than screening a movie of my life?

So, just to be clear, this is not my point at all. I think I was not nearly clear enough on this in the initial post, and I have updated it with a short-ish edit that you might want to read. I personally find the teletransportation paradox pretty paralyzing, enough so that I would have sincere brain-upload concerns. What I am talking about is simulations of non-specific, unique, people in the simulation. After death, these people would be “moved” fully intact into the afterlife component of the simulation. This circumvents teletransportation. Having the vast majority of people “like us” exist in simulations should increase our credence that we are in a simulation just as they are (especially if they can run simulations of their own, or think they are running simulations of their own). The idea is that we will have more reason to think that it is likely one-boxer/altruist/acausal trade types “above” us have similarly created many simulations, of which we are one. Us doing it here should increase our sense that people “like us” have done it “above” us.

comment by crmflynn · 2015-11-05T13:37:35.860Z · LW(p) · GW(p)

My opinion, for what it's worth, is that either version of A is very much more likely than either version of B for multiple reasons, and that widespread interest in ideas like the one in this post would give only very weak evidence for B over A. So enthusiastic takeup of the ideas in this post would justify at most a tiny increase in our credence in an afterlife.

I wonder if you might expand on your thoughts on this a bit more. I tend to think that the odds of being in a simulation are quite low as well, but for me the issue is more the threat of extinction than a lack of will.

I can think of some reasons why, even if we could build such simulations, we might not, but I feel that this area is a bit fuzzy in my mind. Some ideas I already have:

     1)      Issues with the theory of identity
     2)      Issues with theory of mind
     3)      Issues with theory of moral value (creating lots of high-quality lives not seen as valuable, antinatalism, problem of evil)
     4)      Self-interest (more resources for existing individuals to upload into and utilize)
     5)      The existence of a convincing two-boxer “proof” of some sort

I also would like to know why an “enthusiastic takeup of the ideas in this post” would not increase your credence significantly? I think there is a very large chance of these ideas not being taken up enthusiastically, but if they were, I am not sure what, aside from extinction, would undermine them. If we get to the point where we can do it, and we want to do it, why would we not do it?

Thank you in advance for any insight, I have spent too long chewing on this without much detailed input, and I would really value it.

Replies from: gjm
comment by gjm · 2015-11-05T20:50:57.849Z · LW(p) · GW(p)

I'm not sure I have much to say that you won't have thought of already. But: First of all, there seem to be lots of ways in which we might fail to develop such technology. We might go extinct or our civilization collapse or something of the kind (outright extinction seems really unlikely, but collapse of technological civilization much more likely). It might turn out that computational superpowers just aren't really available -- that there's only so much processing power we have any realistic way of harnessing. It might turn out that such things are possible but we simply aren't smart enough to find our way to them.

Second, if we (or more precisely our successors, whoever or whatever they are) develop such computational superpowers, why on earth use them for ancestor simulations? In this sort of scenario, maybe we're all living in some kind of virtual universe; wouldn't it be better to make other minds like ours sharing our glorious virtual universe rather than grubbily simulating our ancestors in their grotty early 21st-century world? Someone else -- entirelyuseless? -- observed earlier in the thread that some such simulation might be necessary in order to figure out enough about our ancestors' minds to simulate them anywhere else, so it's just possible that grotty 21st-century ancestor sims might be a necessary precursor to glorious 25th-century ancestor sims; but why ancestors anyway? What's so special about them, compared with all the other possible minds?

Third, supposing that we have computational superpowers and want to simulate our ancestors, I see no good reason to think it's possible. The information it would take to simulate my great-great-grandparents is dispersed and tangled up with other information, and figuring out enough about my great-great-grandparents to simulate them will be no easier than locating the exact oxygen atoms that were in Julius Caesar's last breath. All the relevant systems are chaotic, measurement is imprecise, and surely there's just no reconstructing our ancestors at this point.

Fourth, it seems quite likely that our superpowered successors, if we have them, will be no more like us than we are like chimpanzees. Perhaps you find it credible that we might want to simulate our ancestors; do you think we would be interested in simulating our ancestors 5 million years ago who were as much like chimps as like us?

Replies from: crmflynn, jacob_cannell
comment by crmflynn · 2015-11-10T12:31:55.979Z · LW(p) · GW(p)

First of all, there seem to be lots of ways in which we might fail to develop such technology. We might go extinct or our civilization collapse or something of the kind (outright extinction seems really unlikely, but collapse of technological civilization much more likely). It might turn out that computational superpowers just aren't really available -- that there's only so much processing power we have any realistic way of harnessing. It might turn out that such things are possible but we simply aren't smart enough to find our way to them.

Absolutely. I think this is where this thing most likely fails. Somewhere in the first disjunct. My gut does not think I am in a simulation, and while that is not at all a valid way to acquire knowledge, it is the case that it leans me heavily into this.

Second, if we (or more precisely our successors, whoever or whatever they are) develop such computational superpowers, why on earth use them for ancestor simulations? In this sort of scenario, maybe we're all living in some kind of virtual universe; wouldn't it be better to make other minds like ours sharing our glorious virtual universe rather than grubbily simulating our ancestors in their grotty early 21st-century world?

So I am not saying that they WOULD do it, I actually can think of a lot of pretty compelling reasons why they MIGHT. If the people who are around then are at all like us, then I think that a subset of them would likely do it for the one-boxer reasons I mentioned in the first post (which I have since updated with a note at the bottom to clarify some things I should have included in the post originally.) Whether or not their intuitions are valid, there is an internal logic, based on these intuitions, which would push for this. Reasons include hedging against the teletransportation paradox (which also applies to self-uploading) and hoping to increase their credence of an afterlife in which those already dead can join in. This is clearer I think in my update. The main confusion is that I am not talking about attempting to simulate or recreate specific dead people, which I do not think is possible. The key to my argument is to create self-locating doubt.

Also, in my argument, the people who create the simulation are never joined with the people in the simulation. These people stay in their simulation computer. The idea is that we are “hoping” we are similarly in a simulation computer, and have been the whole time, and that when we die, we will be transferred (whole) into the simulation’s afterlife component along with everyone who died before us in our world. Should we be in a simulation, and yet develop some sort of “glorious virtual universe” that we upload into, there are several options. Two that quickly come to mind: 1) We might stay in it until we die, then go into the afterlife component, 2) We might at some point be “raptured” by the simulation out of our virtual universe into the existent “glorious virtual afterlife” of the simulation computer we are in.

As it is likely that the technology for simulations will come about at about the same time as for a “glorious virtual universe” we could even treat it as our last big hurrah before we upload ourselves. This makes sense as the people who exist when this technology becomes available will know a large number of loved ones who just missed it. They will also potentially be in especially imminent fear of the teletransportation paradox. I do not think there is any inherent conflict between doing both of these things.

Someone else -- entirelyuseless? -- observed earlier in the thread that some such simulation might be necessary in order to figure out enough about our ancestors' minds to simulate them anywhere else, so it's just possible that grotty 21st-century ancestor sims might be a necessary precursor to glorious 25th-century ancestor sims; but why ancestors anyway? What's so special about them, compared with all the other possible minds?

Just to be clear, I am not talking about our actual individual ancestors. I actually avoided using the term intentionally as I think it is a bit confusing. I am pretty sure this is how Bostrom meant it as well in the original paper, with the word “ancestor” being used in the looser sense, like how we say “Homo erectus were our ancestors.” That might be my misinterpretation, but I do not think so. While I could be convinced, I am personally, currently, very skeptical that it would be possible to do any meaningful sort of replication of a person after they die. I think the only way that someone who has already died has any chance of an afterlife is if we are already in a simulation. This is also why my personal, atheistic mind could be susceptible to donating to such a cause when in grief. I wrote an update to my original post at the bottom where I clarify this. The point of the simulation is to change our credence regarding our self-location. If the vast majority of “people like us” (which can be REALLY broadly construed) exist in simulations with afterlives, and do not know it, we have reason to think we might also exist in such a simulation. If this is still not clear after the update, please let me know, as I am trying to pin down something difficult and am not sure if I am continuing to privilege brevity to the detriment of clarity.

Third, supposing that we have computational superpowers and want to simulate our ancestors, I see no good reason to think it's possible. The information it would take to simulate my great-great-grandparents is dispersed and tangled up with other information, and figuring out enough about my great-great-grandparents to simulate them will be no easier than locating the exact oxygen atoms that were in Julius Caesar's last breath. All the relevant systems are chaotic, measurement is imprecise, and surely there's just no reconstructing our ancestors at this point.

I agree with your point so strongly that I am a little surprised to have been interpreted as meaning this. I think that it seems theoretically feasible to simulate a world full of individual people as they advance their way up from simple stone tools onward, each with their own unique life and identity, each existing in a unique world with its own history. Trying to somehow make this the EXACT SAME as ours does not seem at all possible. I also do not see what the advantage of it would be, as it is not more informative or helpful for our purposes to know whether or not we are the same as the people above us, so why would we try to “send that down” below us. We do not care about that as a feature of our world, and so would have no reason to try to instill it in the worlds below us. There is sort of a “golden rule” aspect to this in that you do to the simulation below you the best feasible, reality-conforming version of what you want done to you.

Fourth, it seems quite likely that our superpowered successors, if we have them, will be no more like us than we are like chimpanzees. Perhaps you find it credible that we might want to simulate our ancestors; do you think we would be interested in simulating our ancestors 5 million years ago who were as much like chimps as like us?

Maybe? I think that one of the interesting parts about this is where we would choose to draw policy lines around it. Do dogs go to the afterlife? How about fetuses? How about AI? What is heaven like? Who gets to decide this? These are all live questions. It could be that they take a consequential hedonistic approach that is mostly neutral between “who” gets the heaven. It could be that they feel obligated to go back further in gratitude of all those (“types”) who worked for advancement as a species and made their lives possible. It could be that we are actually not too far from superintelligent AI, and that this is going to become a live question in the next century or so, in which case “we” are that class of people they want to simulate in order to increase their credence of others similar to us (their relatives, friends who missed the revolution) being simulated.

As far as how far back you bother to simulate people, it might actually be easier to start off with some very small bands of people in a very primitive setting than to try to go through and make a complex world for people to “start” in without the benefit of cultural knowledge or tradition. It might even be that the “first people” are based on some survivalist hobby back-to-basics types who volunteered to be emulated, copied, and placed in different combinations in primitive earth environments in order to live simple hunter-gatherer lives and have their children go on to populate an earth (possible date of start? https://en.wikipedia.org/wiki/Population_bottleneck). That said, this is deep into the weeds of extremely low-probability speculation. Fun to do, but increasingly meaningless.

comment by jacob_cannell · 2015-11-06T00:52:53.454Z · LW(p) · GW(p)

We might go extinct or our civilization collapse or something of the kind (outright extinction seems really unlikely, but collapse of technological civilization much more likely).

Yes, but that isn't enough to defeat simulations. One successful future can create a huge number of sims. Observational selection effects thus make survival far more likely than otherwise expected.

It might turn out that computational superpowers just aren't really available -- that there's only so much processing power we have any realistic way of harnessing.

Even without quantum computing or reversible computing, even just using sustainable resources on earth (solar) - even with those limitations - there are plenty of resources to create large numbers of sims.

In this sort of scenario, maybe we're all living in some kind of virtual universe; wouldn't it be better to make other minds like ours sharing our glorious virtual universe rather than grubbily simulating our ancestors in their grotty early 21st-century world

The cost is about the same either way. So the question is one of economic preferences. When people can use their wealth to create either new children or bring back the dead, what will they do? You are thus assuming there will be very low demand for resurrecting the dead vs creating new children. This is rather obviously unlikely.

This technology probably isn't that far away - it is a 21st century tech, not 25th. It almost automatically follows AGI, as AGI is actually just the tech to create minds - nothing less. Many people alive today will still be alive when these sims are built. They will bring back their loved ones, who then will want to bring back theirs, and so on.

I see no good reason to think it's possible.

Most people won't understand or believe it until it happens. But likewise very few people actually understand how modern advanced rendering engines work - which would seem like magic to someone from just 50 years ago.

It's an approximate inference problem. The sim never needs anything even remotely close to atomic information. In terms of world detail levels it only requires a little more than current games. The main new tech required is just the large scale massive inference supercomputing infrastructure that AGI requires anyway.

It's easier to understand if you just think of a human brain sim growing up in something like the Matrix, where events are curiously staged and controlled behind the scenes by AIs.

Replies from: gjm
comment by gjm · 2015-11-06T02:33:41.817Z · LW(p) · GW(p)

The opinion-to-reasons ratio is quite high in both your comment and mine to which it's replying, which is probably a sign that there's only limited value in exploring our disagreements, but I'll make a few comments.

One future civilization could perhaps create huge numbers of simulations. But why would it want to? (Note that this is not at all the same question as "why would it create any?".)

The cost of resurrecting the dead is not obviously the same as that of making new minds to share modern simulations. You have to figure out exactly what the dead were like, which (despite your apparent confidence that it's easy to see how easy it is if you just imagine the Matrix) I think is likely to be completely infeasible, and monstrously expensive if it's possible at all. But then I repeat a question I raised earlier in this discussion: if you have the power to resurrect the dead in a simulated world, why put them back in a simulation of the same unsatisfactory world as they were in before? Where's the value in that? (And if the answer is, as proposed by entirelyuseless, that to figure out who and what they were we need to do lots of simulations of their earthly existence, then note that that's one more reason to think that resurrecting them is terribly expensive.)

(If we can resurrect the dead, then indeed I bet a lot of people will want to do it. But it seems to me they'll want to do it for reasons incompatible with leaving the resurrected dead in simulations of the mundane early 21st century.)

You say with apparent confidence that "this technology probably isn't that far away". Of course that could be correct, but my guess is that you're wronger than a very wrong thing made of wrong. We can't even simulate C. elegans yet, even though it has only about 300 neurons and they're always wired up the same way (which we know).

Yes, it's an approximate inference problem. With an absolutely colossal number of parameters and, at least on the face of it, scarcely any actual information to base the inferences on. I'm unconvinced that "the sim never needs anything even remotely close to atomic information" given that the (simulated or not) world we're in appears to contain particle accelerators and the like, but let's suppose you're right and that nothing finer-grained than simple neuron simulations is needed; you're still going to need at the barest minimum a parameter per synapse, which is something like 10^15 per person. But it's worse, because there are lots of people and they all interact with one another and those interactions are probably where our best hope of getting the information we need for the approximate inference problems comes from -- so now we have to do careful joint simulations of lots of people and optimize all their parameters together. And if the goal is to resurrect the dead (rather than just make new people a bit like our ancestors) then we need really accurate approximate inference, and it's all just a colossal challenge and I really don't think waving your hands and saying "just think of a human brain sim growing up in something like the Matrix" is on the same planet as the right ballpark for justifying a claim that it's anywhere near within reach.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-06T17:53:40.759Z · LW(p) · GW(p)

One future civilization could perhaps create huge numbers of simulations. But why would it want to?

I've already answered this - because living people have a high interest in past dead people, and would like them to live again. It's that simple.

The cost of resurrecting the dead is not obviously the same as that of making new minds to share modern simulations.

True, but most of the additional cost boils down to a constant factor once you amortize at large scale. Recreating a single individual - very expensive. Recreating billions? That reduces to something closer to the scaling cost of simulating that many minds.

You have to figure out exactly what the dead were like

No, you don't. For example the amount of information remaining about my grandfather who died in the 1950's is pretty small. We could recover his DNA, and we have a few photos. We have some poetry he wrote, and letters. The total amount of information contained in the memories of living relatives is small, and will be even less by the time the tech is available.

So from my perspective the target is very wide. Personal identity is subjectively relative.

But then I repeat a question I raised earlier in this discussion: if you have the power to resurrect the dead in a simulated world, why put them back in a simulation of the same unsatisfactory world as they were in before?

You wouldn't. I think you misunderstand. You need the historical sims to recreate the dead in the first place. But once that is running, you can copy out their minds at any point. However you always need one copy to remain in the historical sim for consistency (until they die in the hist-sim).

We can't even simulate C. elegans yet, even though that only has about 1k neurons and they're always wired up the same way (which we know).

You could also say we can't simulate bacteria, but neither is relevant. I'm not familiar enough with C. elegans sims to evaluate your claim that the current sims are complete failures, but even if this is true it doesn't tell us much, because only a tiny amount of resources has been spent on that.

Just to be clear - the historical ress-sims under discussion will be created by large-scale AGI (superintelligence). When I say this tech isn't that far away, it's because AGI isn't that far away, and this follows shortly thereafter.

you're still going to need at the barest minimum a parameter per synapse, which is something like 10^15 per person

Hardly. You are assuming a naive encoding without compression. Neural nets - especially large biological brains - are enormously redundant and highly compressible.

Look - it's really hard to accurately estimate the resources for things like this unless you actually know how to build it. 10^15 is a reasonable upper bound, but the lower bound is much lower.

For the lower bound, consider compressing the inner monologue - which naturally includes everything a person has ever read, heard, and said (even to themselves).

200 wpm over ~16 waking hours/day ≈ 7 × 10^7 words/year; at 8 bits/word, that is on the order of 100 MB/year.

So that gives a lower bound of ~10^10 bytes for a 100-year-old. This doesn't include visual information, but the visual cortex is also highly compressible due to translational invariance.
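The order of magnitude can be checked as a quick Fermi estimate. The 16 waking hours/day and the 1-byte-per-word encoding are assumptions made here only to reproduce the arithmetic, not figures from the comment:

```python
# Fermi check of the inner-monologue estimate above. The 16 waking
# hours/day and 1 byte (8 bits) per word are assumptions for this sketch.
words_per_minute = 200
waking_hours_per_day = 16
words_per_year = words_per_minute * 60 * waking_hours_per_day * 365
bytes_per_year = words_per_year * 1          # 8 bits/word = 1 byte/word
lifetime_bytes = bytes_per_year * 100        # a 100-year lifespan

print(words_per_year)    # ~7e7 words/year
print(bytes_per_year)    # ~7e7 bytes/year, i.e. order 100 MB/year
print(lifetime_bytes)    # ~7e9, i.e. order 10^10 bytes
```

So a continuous 200 wpm monologue over a century lands at roughly 10^10 bytes, consistent with the stated lower bound.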

And if the goal is to resurrect the dead (rather than just make new people a bit like our ancestors) then we need really accurate approximate inference, and it's all just a colossal challenge and I really don't think waving your hands and saying "just think of a human brain sim growing up in something like the Matrix"

No - again, naysayers will always be able to claim "these aren't really the same people". But their opinions are worthless. The only opinions that matter are those of people who actually knew the relevant person, and the Turing test for resurrection is entirely subjective, relative to their limited knowledge of the resurrectee.

Replies from: gjm
comment by gjm · 2015-11-06T21:07:30.506Z · LW(p) · GW(p)

I've already answered this

But the answer you go on to repeat is one I already explained wasn't relevant, in the sentence after the one you quoted.

most of the additional cost boils down to a constant factor once you amortize at large scale.

I'm not sure what you're arguing. I agree that the additional cost is basically a (large) constant factor; that is, if it costs X to simulate a freshly made new mind, maybe it costs 1000X to recover the details of a long-dead one and simulate that instead. (The factor might well be much more than 1000.) I don't understand how this is any sort of counterargument to my suggestion that it's a reason to simulate new minds rather than old.

the amount of information remaining about my grandfather who died in the 1950's is pretty small.

You say that like it's a good thing, but what it actually means is that almost certainly we can't bring your grandfather back to life, no matter what technology we have. Perhaps we could make someone who somewhat resembles your grandfather, but that's all. Why would you prefer that over making new minds so much as to justify the large extra expense of getting the best approximation we can?

you always need one copy to remain in the historical sim for consistency

I'm not sure what that means. I'd expect that you use the historical simulation in the objective function for the (enormous) optimization problem of determining all the parameters that govern their brain, and then you throw it away and plug the resulting mind into your not-historical simulation. It will always have been the case that at one point you did the historical simulation, but the other simulation won't start going wrong just because you shut down the historical one.

Anyway: as I said before, if you expect lots of historical simulation just to figure out what to put into the non-historical simulation, then that's another reason to think that ancestor simulation is very expensive (because you have to do all that historical simulation). On the other hand, if you expect that a small amount of historical simulation will suffice then (1) I don't believe you (if you're estimating the parameters this way, you'll need to do a lot of it; any optimization procedure needs to evaluate the objective function many times) and (2) in that case surely there are anthropic reasons to find this scenario unlikely, because then we should be very surprised to find ourselves in the historical sim rather than the non-historical one that's the real purpose.

When I say this tech isn't that far away, it's because AGI isn't that far away, and this follows shortly thereafter.

Perhaps I am just misinterpreting your tone (easily done with written communication) but it seems to me that you're outrageously overconfident about what's going to happen on what timescales. We don't know whether, or when, AGI will be achieved. We don't know whether when it is it will rapidly turn into way-superhuman intelligence, or whether that will happen much slower (e.g., depending on hardware technology development which may not be sped up much by slightly-superhuman AGI), or even whether actually the technological wins that would lead to very-superhuman AGI simply aren't possible for some kind of fundamental physical reason we haven't grasped. We don't know whether, if we do make a strongly superhuman AGI, it will enable us to achieve anything resembling our current goals, or whether it will take us apart to use our atoms for something we don't value at all.

You are assuming naive encoding without compression

No, I am assuming that smarter encoding doesn't buy you more than the outrageous amount by which I shrank the complexity by assuming only one parameter per synapse.

that gives a lower bound of 10^10 for a 100 year old

Tried optimizing a function of 10^10 parameters recently? It tends to take a while and converge to the wrong local optimum.

naysayers will always be able to claim "these aren't really the same people". But their opinions are worthless. The only opinions that matter are those who actually knew the relevant people

What makes you think those are different people's opinions? If you present me with a simulated person who purports to be my dead grandfather, and I learn that he's reconstructed from as little information as (I think) we both expect actually to be available, then I will not regard it as the same person as my grandfather. Perhaps I will have no way of telling the difference (since my own reactions on interacting with this simulated person can be available to the optimization process -- if I don't mind hundreds of years of simulated-me being used for that purpose) but there's a big difference between "I can't prove it's not him" and "I have good reason to think it's him".

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-06T23:01:30.511Z · LW(p) · GW(p)

I don't really have a great deal of time to explain this, so I'll be brief. Basically this is something I've thought a great deal about, and I have a rather detailed technical vision of how to achieve it (at least to the extent that anyone can today; I'm an expert in the relevant fields - computer simulation/graphics and machine learning - and this is my long-term life goal). Fully explaining a rough roadmap would require a small book or long paper, so just keep that in mind.

most of the additional cost boils down to a constant factor once you amortize at large scale.

I'm not sure what you're arguing. I agree that the additional cost is basically a (large) constant factor; that is, if it costs X to simulate a freshly made new mind, maybe it costs 1000X to recover the details of a long-dead one and simulate that instead.

Sorry - I meant a large constant, not a constant multiplier. Simulating a mind costs the same - it doesn't matter whether it's in a historical sim world or a modern-day sim or a futuristic sim or a fantasy sim ... the cost of simulating the world to (our very crude) sensory perception limits is always about the same.

The extra cost for an h-sim vs others is in the initial historical research/setup (a constant) and consistency guidance. The consistency enforcement can be achieved by replacing standard forward inference with a goal-directed hierarchical bidirectional inference. The cost ends up asymptotically about the same.

It's not just a physical sim; it's more like a very deep hierarchy where, at the highest levels of abstraction, historical events are compressed down to text-like form in some enormous evolving database written and rewritten by an army of historian AIs. Lower, more detailed levels in the graph eventually resolve down into 3D objects and physical simulation, sparsely, as needed.

You say that like it's a good thing, but what it actually means is that almost certainly we can't bring your grandfather back to life, no matter what technology we have. Perhaps we could make someone who somewhat resembles your grandfather, but that's all.

As I said earlier - you do not determine who is or is not my grandfather. Your beliefs have zero weight on that matter. This is such an enormously different perspective that it isn't worth discussing more until you actually understand what I mean when I say personal identity is relative and subjective. Do you grok it?

Perhaps I am just misinterpreting your tone (easily done with written communication) but it seems to me that you're outrageously overconfident about what's going to happen on what timescales. We don't know whether, or when, AGI will be achieved.

Perhaps, but I'm not a random sample - not part of your 'we'. I've spent a great deal of time researching the road to AGI. I've written a little about related issues in the past.

AGI will be achieved shortly after we have brain-scale machine learning models (such as ANNs) running on affordable (< 10K) machines. This is at most only about 5 years away. Today we can simulate a few tens of billions of synapses in real time on a single GPU, and another 1000x performance improvement is on the table in the near future - from some mix of software and hardware advances. In fact, it could very well happen in just a year. (I happen to be working on this directly; I know more about it than just about anyone.)

AGI can mean many different things, so consider before arguing with the above.

We don't know whether, if we do make a strongly superhuman AGI, it will enable us to achieve anything resembling our current goals, or whether it will take us apart to use our atoms for something we don't value at all.

Sure, but this whole conversation started with the assumption that we avoid such existential risks.

No, I am assuming that smarter encoding doesn't buy you more than the outrageous amount by which I shrank the complexity by assuming only one parameter per synapse.

The number of parameters in the compressed model needs to be far less than the number of synapses - otherwise the model will overfit. Compression does not hurt performance, it improves it - enormously. More than that, it's actually required at a fundamental level due to the connection between compression and prediction.

Tried optimizing a function of 10^10 parameters recently? It tends to take a while and converge to the wrong local optimum.

Obviously a model fitting a dataset of size 10^10 would need to compress that down even further to learn anything - so that's an upper bound for the parameter bitsize.

If you present me with a simulated person who purports to be my dead grandfather, and I learn that he's reconstructed from as little information as (I think) we both expect actually to be available, then I will not regard it as the same person as my grandfather.

Say you die tomorrow in some accident. You wake up in 'heaven' - which you find out is really a sim in the year 2046. You discover that you are a sim (an AI, really), recreated in a historical sim from the biological original. You have all the same memories, and your friends and family (or sims of them? admittedly confusing) still call you by the same name and consider you the same. Who are you?

Do you really think that in this situation you would say - "I'm not the same person! I'm just an AI simulacra. I don't deserve to inherit any of my original's wealth, status, or relationships! Just turn me off!"

Replies from: Lumifer, gjm
comment by Lumifer · 2015-11-07T01:10:03.028Z · LW(p) · GW(p)

I'm an expert in the relevant fields - computer simulation/graphics and machine learning

Can you provide some links to your publications on the topic of machine learning?

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-07T01:28:22.110Z · LW(p) · GW(p)

Not yet. :) I meant expert only in the "read up on the field" sense, not a recognized academic expert. Besides, much industrial work is not published in academic journals for various reasons (time isn't justified, secrecy, etc.).

comment by gjm · 2015-11-07T00:19:36.399Z · LW(p) · GW(p)

Historical versus other sims: I agree that if the simulation runs for infinitely long then the relevant difference is an additive rather than a multiplicative constant. But in practice it won't do.

Yes, of course I understand your point that I don't get to decide what counts as your grandfather; neither do you get to decide what counts as mine. You apparently expect that our successors will attach a lot of value to simulating people who for all they know (on the basis of a perhaps tiny amount of information) might as well be copies of their ancestors. I do not expect that. Not because I think I get to decide what counts as your grandfather, but because I don't expect our successors to think in the way that you apparently expect them to think.

Yes, you'll have terrible overfitting problems if you have too many parameters. But the relevant comparison isn't between the number of parameters in the model and the number of synapses; it's between the number of parameters in the model and the amount of information we have to nail the model down. If it takes more than (say) a gigabyte of maximally-compressed information to describe how one person differs from others, then it will take more than (something on the order of) 10^9 parameters to specify a person that accurately. I appreciate that you think something far cruder will suffice. I hope you appreciate that I disagree. (I also hope you don't think I disagree because I'm an idiot.) Anyway, my point here is this: specifying a person accurately enough requires whatever amount of information it does (call it X), and our successors will have whatever amount of usable information they do (call it Y), and if Y<<X then the correct conclusion isn't "excellent, our number of parameters[1] will be relatively small to avoid overfitting, so we don't need to worry that the fitting process will take for ever", it's "damn, it turns out we can't reconstruct this person".

[1] It would be better to say something like "number of independent parameters", of course; the right thing might be lots of parameters + regularization rather than few parameters.

I would expect a sim whose opinions resemble mine to say, on waking up in heaven, something like "well, gosh, this is nice, and I certainly don't want it turned off, but do you really have good reason to think that I'm an accurate model of the person whose memories I think I have?". Perhaps not out loud, since no doubt that sim would prefer not to be turned off. But the relevant point here isn't about what the sim would want (and particularly not about whether the sim would want to be turned off, which I bet would generally not be the case even if they were convinced they weren't an accurate model) but about whether for the people responsible for creating the sim a crude approximation was close enough to their ancestor for it to be worth a lot of extra trouble to create that sim rather than a completely new one.

(I could not, in the situation you describe, actually know that I had "all the same memories". That's a large part of the point.)

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-07T01:46:41.153Z · LW(p) · GW(p)

You apparently expect that our successors will attach a lot of value to simulating people who for all they know (on the basis of a perhaps tiny amount of information) might as well be copies of their ancestors.

AGI will change our world in many ways, one of which concerns our views on personal identity. After AGI people will become accustomed to many different versions or branches of the same mind, mind forking, merging, etc.

Copy implies a version that is somehow lesser, which is not the case. Indeed in a successful sim scenario, almost everyone is technically a copy.

But the relevant comparison isn't between the number of parameters in the model and the number of synapses; it's between the number of parameters in the model and the amount of information we have to nail the model down.

The amount of information we have to nail down is just that required for a human mind sim, which is exactly the amount of compressed information encoded in the synapses.

If it takes more than (say) a gigabyte of maximally-compressed information to describe how one person differs from others, then it will take more than (something on the order of) 10^9 parameters to specify a person that accurately.

Right - again we know that it can't be much more than 10^14 (number of synapses in human adult, it's not 10^15 BTW), and it could be as low as 10^10. The average synapse stores only a bit or two at most (you can look it up, it's been measured - the typical median synapse is tiny and has an extremely low SNR corresponding to a small number of bits.) We can argue about numbers in between, but it doesn't really matter because either way it isn't that much.

Anyway, my point here is this: specifying a person accurately enough requires whatever amount of information it does (call it X), and our successors will have whatever amount of usable information they do (call it Y),

No - it just doesn't work that way, because identity is not binary. It is infinite shades of grey. Different levels of success require only getting close enough in mindspace, and "close enough" is highly relative to one's subjective knowledge of the person.

What matters most is consistency. It's not like the average person remembers everything they said a few years ago, so that 10^10 figure is extremely generous. Our memory is actually fairly poor.

There will be multiple versions of past people - just as we have multiple biographies today. Clearly there is some objective sense in which some versions are more authentic, but this isn't nearly as important as you seem to think - and it is far less important than historical consistency with the rest of the world.

(I could not, in the situation you describe, actually know that I had "all the same memories". That's a large part of the point.)

We are in the same situation today. For all I know all of my past life is a fantasy created on the fly. What actually matters is consistency - that my memories match the memories of others and recorded history. And in fact due to the malleability of memory, consistency is often imperfect in human memories.

We really don't remember that much at all - not accurately.

Replies from: gjm
comment by gjm · 2015-11-07T13:32:42.661Z · LW(p) · GW(p)

AGI will change our world in many ways, one of which concerns our views on personal identity.

I agree, but evidently we disagree about how our views on personal identity will change if and when AGI (and, which I think is what actually matters here, large-scale virtualization) comes along.

Copy implies a version that is somehow lesser

That's not how I was intending to use the word.

The amount of information we have to nail down is just that required for a human mind sim, which is exactly the amount of compressed information encoded in the synapses.

You've been arguing that we need substantially less information than "exactly the amount of compressed information encoded in the synapses".

identity is not binary

I promise, I do understand this, and I don't see that anything I wrote requires that identity be binary. (In particular, at no point have I been intending to claim that what's required is the exact same neurons, or anything like that.)

[...] What matters most [...] this isn't nearly as important [...] far less important [...] What actually matters [...]

These are value judgements, or something like them. My values are apparently different from yours, which is fair enough. But the question actually at issue wasn't one about our values (where we could just agree to disagree) but about, in effect, the likely values of our superintelligent AI successors (or perhaps our roughly-normally-intelligent successors making use of superintelligent AI). So far you've offered no grounds for thinking that they will feel the same way about this as you do, you've just stated your own position as if it's a matter of objective fact (albeit about matters of not-objective-fact).

We are in the same situation today

Only if you don't distinguish between what's possible and what's likely. Sure, I could have been created ten seconds ago with completely made-up memories. Or I could be in the hands of a malevolent demon determined to deceive me about everything. Or I could be suffering from some disastrous mental illness. But unless I adopt a position of radical skepticism (which I could; it would be completely irrefutable and completely useless) it seems reasonable not to worry about such possibilities until actual reason for thinking them likely comes along.

I will (of course!) agree that our situation has a thing or two in common with that one, because our perception and memory and inference are so limited and error-prone, and because even without simulation people change over time in ways that make identity a complicated and fuzzy affair. But for me -- again, this involves value judgements and yours may differ from mine, and the real question is what our successors will think -- the truer this is, the less attractive ancestor-simulation becomes for me. If you tell me you can simulate my great-great-great-great-great-aunt Olga about whom I know nothing at all, then I have absolutely no way of telling how closely the simulation resembles Olga-as-she-was, but that means that the simulation has little extra value for me compared with simulating some random person not claimed to be my great^5-aunt. As for whether I should be glad of it for Olga's sake -- well, if you mean new-Olga's then an ancestor-sim is no better in this respect than a non-ancestor-sim; and if you mean old-Olga's sake then the best I can do is to think how much it would please me to learn that 200 years from now someone will make a simulation that calls itself by my name and has a slightly similar personality and set of memories, but no more than that; the answer is that I couldn't care less whether anyone does.

(It feels like I'm repeating myself, for which I apologize. But I'm doing so largely because it seems like you're completely ignoring the main points I'm making. Perhaps you feel similarly, in which case I'm sorry; for what it's worth, I'm not aware that I'm ignoring any strong or important point you're making.)

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-07T16:36:38.199Z · LW(p) · GW(p)

You've been arguing that we need substantially less information than "exactly the amount of compressed information encoded in the synapses".

That was misworded - I meant the amount of information actually encoded in the synapses, after advanced compression. As I said before, synapses in NNs are enormously redundant, such that trivial compression dramatically reduces the storage requirements. For the amount of memory/storage to represent a human mind level sim, we get that estimate range between 10^10 to 10^14, as discussed earlier. However a great deal of this will be redundant across minds, so the amount required to specify the differences of one individual will be even less.

But the question actually at issue wasn't one about our values (where we could just agree to disagree) but about, in effect, the likely values of our superintelligent AI successors (or perhaps our roughly-normally-intelligent successors making use of superintelligent AI).

Right. Well I have these values, and I am not alone. Most people's values will also change in the era of AGI, as most people haven't thought about this clearly. And finally, for a variety of reasons, I expect that people like me will have above average influence and wealth.

Your side discussion about your distant relatives suggests you don't foresee how this is likely to come about in practice (which really is my fault as I haven't explained it in this thread, although I have discussed bits of it previously).

It isn't about distant ancestors. It starts with regular uploading. All these preserved brains will have damage of various kinds - some arising from the process itself, some from normal aging or disease. AI then steps in to fill in the gaps, using large scale inference. This demand just continues to grow, and it ties into the pervasive virtual world heaven tech that uploads want for other reasons.

In short order everyone in the world has proof that virtual heaven is real, and that uploading works. The world changes, and uploading becomes the norm. We become an em society.

Someone creates a real Harry Potter sim, and when Harry enters the 'real' world above he then wants to bring back his fictional parents. So it goes.

Then the next step is insurance for the living. Accidents can destroy or damage your brain - why risk that? So the AIs can create a simulated copy of the earth, kept up to date in real time through the ridiculous pervasive sensor monitoring of the future.

Eventually everyone realizes that they are already sims created by the AI.

It sucks to be an original - because there is no heaven if you die. It is awesome to be a sim, because we get a guaranteed afterlife.

comment by Lumifer · 2015-11-04T16:57:55.945Z · LW(p) · GW(p)

We are more likely simulations if B is right than if A is right.

And how does that follow?

Replies from: gjm
comment by gjm · 2015-11-05T00:13:29.931Z · LW(p) · GW(p)

"Follow" is probably an exaggeration since this is pretty handwavy, but:

First of all, a clarification: I should really have written something like "We are more likely accurate ancestor-simulations ..." rather than "We are more likely simulations". I hope that was understood, given that the actually relevant hypothesis is one involving accurate ancestor-simulations, but I apologize for not being clearer. OK, on with the show.

Let W be the world of our non-simulated ancestors (who may or may not actually be us, depending on whether we are ancestor-sims). W is (at least as regards the experiences of our non-simulated ancestors) like our world, either because it is our world or because our world is an accurate simulation of W. In particular, if A then W is such as generally not to lead to large-scale ancestor sims, and if B then W is such as generally to lead to large-scale ancestor sims.

So, if B then in addition to W there are probably ancestor-sims of much of W; but if A then there are probably not.

So, if B then some instances of us are probably ancestor-sims, and if A then probably not.

So, Pr(we are ancestor-sims | B) > Pr(we are ancestor-sims | A).

Extreme case: if we somehow know not A but the much stronger A': "A society just like ours will never lead to any sort of ancestor-sims" then we can be confident of not being accurate ancestor-sims.

(I repeat that of course we could still be highly inaccurate ancestor-sims or non-ancestor sims, and A versus B doesn't tell us much about that, but that the question at issue was specifically about accurate ancestor-sims since those are what might be required for our (non-simulated forebears') descendants to give us (or our non-simulated forebears) an afterlife, if they were inclined to do so.)
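gjm's inequality can be illustrated with a toy model (all numbers invented for illustration only): if a world like W spawns n accurate ancestor-sims, and we weight all copies of our experiences equally, the fraction of copies that are sims is n/(1+n).

```python
# Toy model of the ancestor-sim inequality (all numbers invented).
# A world like W plus n accurate ancestor-sims of it yields 1 + n copies
# of our experiences; weighting copies equally, the sim fraction is n/(1+n).
def p_sim(n_sims):
    return n_sims / (1 + n_sims)

p_sim_given_A = p_sim(0)      # A: societies like ours never build them -> 0.0
p_sim_given_B = p_sim(1000)   # B: societies like ours build many -> ~0.999
assert p_sim_given_B > p_sim_given_A   # Pr(we are ancestor-sims | B) > Pr(... | A)
```

The point is only directional: B makes "we are accurate ancestor-sims" more probable than A does, however crude the copy-counting.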

Replies from: Lumifer
comment by Lumifer · 2015-11-05T05:04:55.665Z · LW(p) · GW(p)

Consider a different argument.

Our world is either simulated or not.

If our world is not simulated, there's nothing we can do to make it simulated. We can work towards other simulations, but that's not us.

If our world is simulated, we are already simulated and there's nothing we can do to increase our chance of being simulated because it's already so.

Replies from: gjm, crmflynn, jacob_cannell, jacob_cannell
comment by gjm · 2015-11-05T10:02:47.021Z · LW(p) · GW(p)

That might be highly relevant[1] if I'd made any argument of the form "If we do X, we make it more likely that we are simulated". But I didn't make any such argument. I said "If societies like ours tend to do X, then it is more likely that we are simulated". That differs in two important ways.

[1] Leaving aside arguments based on exotic decision theories (which don't necessarily deserve to be left aside but are less obvious than the fact that you've completely misrepresented what I said).

Replies from: Lumifer
comment by Lumifer · 2015-11-05T16:28:53.116Z · LW(p) · GW(p)

the fact that you've completely misrepresented what I said

You might want to think about downsizing that chip on your shoulder. My comment asks you to consider my argument. It says nothing -- literally, not a single word -- about what you have said.

But so as not to waste your righteous indignation, let me ask you a couple of questions that will surely completely misrepresent what you said. Those "societies like ours" that you mentioned, can you tell me a bit more about them? How many did you observe, on the basis of which features did you decide they are "like ours", what did the ones that are not "like ours" look like?

Oh, and your comment seems to be truncated, did you lose the second part somewhere?

Replies from: gjm
comment by gjm · 2015-11-05T20:35:47.663Z · LW(p) · GW(p)

No chip so far as I can see. If you think your comment says nothing at all about what I said, go and look up conversational implicatures.

You can define "societies like ours" in lots of ways. Any reasonable way is likely to have the properties (1) that observing what our society does gives us (probabilistic) information about what societies like ours tend to do and (2) that information about what societies like ours tend to do gives (probabilistic) information about our future.

(Not very much information, so any argument of this sort is weak. But I already said that.)

did you lose the second part somewhere?

Nope. Why do you think I might have? Because I didn't say what the "two important ways" are? I thought that would be obvious, but I'll make it explicit. (1) "If we do ..." versus "If societies like ours tend to do ..." (hence, since some of those societies may be in the past, no need for reverse causation etc.) (2) "we make it more likely that ..." versus "it is more likely that ..." (hence, since not a claim about what "we" do, no question about what we have power to do).

comment by crmflynn · 2015-11-05T13:12:08.292Z · LW(p) · GW(p)

Consider a different argument.

Our world is either simulated or not.

If our world is not simulated, there's nothing we can do to make it simulated. We can work towards other simulations, but that's not us.

If our world is simulated, we are already simulated and there's nothing we can do to increase our chance of being simulated because it's already so.

I am guessing you two-box in the Newcomb paradox as well, right? If you don't, then you might take a second to realize you are being inconsistent.

If you do two-box, realize that a lot of people do not. A lot of people on LW do not. A lot of philosophers who specialize in decision theory do not. It does not mean they are right; it just means that they do not follow your reasoning. They think that the right answer is to one-box. They take an action, later in time, which does not seem causally determinative (at least as we normally conceive of causality). They may believe in retrocausality, they may believe in a type of ethics in which two-boxing would be a type of cheating or free-riding, they might just be superstitious, or they might just be humbling themselves in the face of uncertainty. For purposes of this argument, it does not matter. What matters, as an empirical matter, is that they exist. Their existence means that they will ignore or disbelieve that "there's nothing we can do to increase our chance of being simulated," just as they ignore the second box.

If we want to belong to the type of species where the vast majority of the species exists in simulations with a long-duration, pleasant afterlife, we need to be the "type of species" who builds large numbers of simulations with long-duration, pleasant afterlives. And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one. Pending acausal trade considerations (probably for another post), two-boxers, and likely some one-boxers, will not think that their actions are causing anything, but those actions will have evidential value still.

Replies from: Lumifer
comment by Lumifer · 2015-11-05T16:58:24.859Z · LW(p) · GW(p)

I am guessing you two-box in the Newcomb paradox as well, right?

Yes, of course.

a lot of people do not

I don't think this is true. The correct version is your following sentence:

A lot of people on LW do not

People on LW, of course, are not terribly representative of people in general.

What matters, as an empirical matter, is that they exist.

I agree that such people exist.

If we want to belong to the type of species

Hold on, hold on. What is this "type of species" thing? What types are there, what are our options?

And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one.

Nope, sorry, I don't find this reasoning valid.

it will have evidential value still.

Still nope. If you think that people wishing to be in a simulation has "evidential value" for the proposition that we are in a simulation, for what proposition does the belief in, say, Jesus or astrology have "evidential value"? Are you going to cherry-pick "right" beliefs and "wrong" beliefs?

Replies from: crmflynn
comment by crmflynn · 2015-11-10T13:29:17.325Z · LW(p) · GW(p)

I don't think this is true. The correct version is your following sentence:

A lot of people on LW do not

People on LW, of course, are not terribly representative of people in general.

LW is not really my personal sample for this. I have spent about a year working this into conversations, and in my experience the split is something like 2/3 of people two-boxing. Nozick, who popularized the problem, said he thought it was about 50/50. While it is again not representative, among the thousand or so people who answered the question in this survey, the split was about even (http://philpapers.org/surveys/results.pl). For people with PhDs in philosophy it was 458 two-boxers to 348 one-boxers. While I do not know what the actual number would be if there were a Pew survey, I suspect, especially given the success of Calvinism, magical thinking, etc., that there is a substantial minority of people who would one-box.

What matters, as an empirical matter, is that they exist.

I agree that such people exist.

Okay. Can you see how they might take the approach I have suggested? And if yes, can you concede that it is possible that there are people who might want to build simulations in the hope of being in one, even if you think it is foolish?

If we want to belong to the type of species

Hold on, hold on. What is this "type of species" thing? What types are there, what are our options?

As a turn of phrase, I was referring to two types: one that makes simulations meeting this description, and one that does not. It is like when people advocate for colonizing Mars: they are expressing a desire to be “that type of species.” Not sure what confused you here….

And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one.

Nope, sorry, I don't find this reasoning valid.

If you are in the Sleeping Beauty problem (https://wiki.lesswrong.com/wiki/Sleeping_Beauty_problem), and are woken up during the week, what is your credence that the coin has come up tails? How do you decide between the doors in the Monty Hall problem?

I am not asking you to think that the actual odds have changed in real time, I am asking you to adjust your credence based on new information. The order of cards has not changed in the deck, but now you know which ones have been discarded.

If it turns out simulations are impossible, I will adjust my credence about being in one. If a program begins plastering trillions of simulations across the cosmological endowment with von Neumann probes, I will adjust my credence upward. I am not saying that your reality changes, I am saying that the amount of information you have about the location of your reality has changed. If you do not find this valid, what do you not find valid? Why should your credence remain unchanged?
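
The updating claim here is just ordinary conditioning on new information. As a sanity check on the Monty Hall case mentioned above, here is a small Monte Carlo sketch (door labels, seed, and trial count are arbitrary choices of mine, not anything from the discussion) showing the posterior shifting exactly as the argument assumes:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of Monty Hall: returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
trials = 100_000
stay = sum(monty_hall_trial(False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(True) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}, switch: {swap:.3f}")  # stay ≈ 1/3, switch ≈ 2/3
```

Nothing about the car's location changes when the host opens a door; only the player's information does, which is the same structure as the credence updates being argued for.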

it will have evidential value still.

Still nope. If you think that people wishing to be in a simulation has "evidential value" for the proposition that we are in a simulation, for what proposition does the belief in, say, Jesus or astrology have "evidential value"? Are you going to cherry-pick "right" beliefs and "wrong" beliefs?

Beliefs can cause people to do things, whether that be go to war or build expensive computers. Why would the fact that some people believe in Salafi Jihadism and want to form a caliphate under ISIS be evidentially relevant to determining the future stability of Syria and Iraq? How can their “belief” in such a thing have any evidential value?

One-boxers wishing to be in a simulation are more likely to create a large number of simulations. The existence of a large number of simulations (especially if they can nest their own simulations) makes it more likely that we are not at a “basement level” but instead are in a simulation, like the ones we create. Not because we are creating our own, but because it suggests the realistic possibility that our world was created a “level” above us. This is just about self-locating belief. As a two-boxer, you should have no sense that people in your world creating simulations means any change in your world’s current status as simulated or unsimulated. However, you should also update your own credence from “why would I possibly be in a simulation” to “there is a reason I might be in a simulation.”

Same as if you were currently living in Western Iraq: you should update your credence from “why should I possibly leave my house, why would it not be safe” to “right, because there are people who are inspired by belief to take actions which make it unsafe.” Your knowledge about others’ beliefs can provide information about certain things that they may have done or may plan to do.

Replies from: Lumifer
comment by Lumifer · 2015-11-10T17:59:45.126Z · LW(p) · GW(p)

I have spent about a year working this into conversations. I feel as though the split in my experience is something like 2/3 of people two box. Nozick, who popularized this, said he thought it was about 50/50.

Interesting. Not what I expected, but I can always be convinced by data. I wonder to what degree religiosity plays a part -- Omega is basically God, so do you try to contest His knowledge..?

can you concede that it is possible that there are people who might want to build simulations in the hope of being in one, even if you think it is foolish?

Sure, but how is that relevant? There are people who want to accelerate the destruction of the world because that would bring in the Messiah faster -- so what?

As a turn of phrase, I was referring two types.

My issue with this phrasing is that these two (and other) types are solely the product of your imagination. We have one (1) known example of intelligent species. That is very much insufficient to start talking about "types" -- one can certainly imagine them, but that has nothing to do with reality.

I am asking you to adjust your credence based on new information.

Which new information?

Does the fact that we construct and play video games argue for the claim that we are NPCs in a video game? Does the fact that we do bio lab experiments argue for the claim that we live in a petri dish?

Why would the fact that some people believe in Salafi Jihadism and want to form a caliphate under ISIS be evidentially relevant to determining the future stability of Syria and Iraq?

You are conflating here two very important concepts, that is, "present" and "future".

People believing in Islam are very relevant to the chances of the future caliphate. People believing in Islam are not terribly relevant to the chances that in our present we live under the watchful gaze of Allah.

As a two-boxer, you should have no sense that people in your world creating simulations means any change in your world’s current status as simulated or unsimulated.

Correct.

However, you should also update your own credence from “why would I possibly be in a simulation” to “there is a reason I might be in a simulation.”

My belief is that it IS possible that we live in a simulation but it has the same status as believing it IS possible that Jesus (or Allah, etc.) is actually God. The probability is non-zero, but it's not affecting any decisions I'm making. I still don't see why the number of one-boxers around should cause me to update this probability to anything more significant.

Replies from: crmflynn
comment by crmflynn · 2015-11-12T06:04:25.404Z · LW(p) · GW(p)

Sure, but how is that relevant? There are people who want to accelerate the destruction of the world because that would bring in the Messiah faster -- so what?

By analogy, what are some things that decrease my credence in thinking that humans will survive to a “post-human stage”? For me, some are: 1) We seem terrible at coordination problems at a policy level, 2) We are not terribly cautious in developing new, potentially dangerous, technology, 3) Some people are actively trying to end the world for religious/ideological reasons. So as I learn more about ISIS and its ideology and how it is becoming increasingly popular, since they are literally trying to end the world, it further decreases my credence that we will make it to a post-human stage. I am not saying that my learning information about them is actually changing the odds, just that it is giving me more information with which to make my knowledge of the already-existing world more accurate. It’s Bayesianism.

For another analogy, my credence for the idea that “NYC will be hit by a dirty bomb in the next 20 years” was pretty low until I read about the ideology and methods of radical Islam and the poor containment of nuclear material in the former Soviet Union. My reading about these people’s ideas did not change anything; however, their ideas are causally relevant, and my knowledge of this factor increases my credence in that possibility.

For one final analogy, if there is a stack of well-shuffled playing cards in front of me, what is my credence that the bottom card is the queen of hearts? 1/52. Now let’s say I flip the top two cards, and they are a 5 and a king. What is my credence now that the bottom card is the queen of hearts? 1/50. Now let’s say I go through the next 25 cards and none of them are the queen of hearts. What is my credence now that the bottom card is the queen of hearts? 1/25. The card at the bottom has not changed. The reality is in place. All I am doing is gaining information which helps me get a sense of location. I do want to clarify, though, that I am reasoning with you as a two-boxer. I think one-boxers might view specific instances like this differently. Again, I am agnostic on who is correct for these purposes.
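
The card arithmetic above can be checked mechanically. A minimal sketch with exact fractions (no randomness involved; the bottom card is fixed throughout, only the count of unseen cards changes):

```python
from fractions import Fraction

# Credence that the bottom card is the queen of hearts, updated as
# cards are revealed from the top and none of them is the queen.
unseen = 52
credence = Fraction(1, unseen)   # 1/52 before any card is flipped
unseen -= 2                      # flip the top two: a 5 and a king
credence = Fraction(1, unseen)   # 1/50
unseen -= 25                     # 25 more cards, still no queen of hearts
credence = Fraction(1, unseen)   # 1/25
print(credence)                  # 1/25
```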

Now to bring it back to the point, what are some obstacles to your credence to thinking you are in a simulation? For me, the easy ones that come to mind are: 1) I do not know if it is physically possible, 2) I am skeptical that we will survive long enough to get the technology, 3) I do not know why people would bother making simulations.

One and two are unchanged by the one-box/Calvinism thing, but when we realize both that there are a lot of one-boxers, and that these one-boxers, when faced with an analogous decision, would almost certainly want to create simulations with pleasant afterlives, then I suddenly have some sense of why #3 might not be an obstacle.

My issue with this phrasing is that these two (and other) types are solely the product of your imagination. We have one (1) known example of intelligent species. That is very much insufficient to start talking about "types" -- one can certainly imagine them, but that has nothing to do with reality.

I think you are reading something into what I said that was not meant. That said, I am still not sure what that was. I can say the exact thing in different language if it helps. “If some humans want to make simulations of humans, it is possible we are in a simulation made by humans. If humans do not want to make simulations of humans, there is no chance that we are in a simulation made by humans.” That was the full extent of what I was saying, with nothing else implied about other species or anything else.

Which new information?

Does the fact that we construct and play video games argue for the claim that we are NPCs in a video game? Does the fact that we do bio lab experiments argue for the claim that we live in a petri dish?

Second point first. How could we be in a petri dish? How could we be NPCs in a video game? How would that fit with other observations and existing knowledge? My current credence is near zero, but I am open to new information. Hit me.

Now the first point. The new information is something like: “When we use what we know about human nature, we have reason to believe that people might make simulations. In particular, the existence of one-boxers who are happy to ignore our ‘common sense’ notions of causality, for whatever reason, and the existence of people who want an afterlife, when combined, suggests that there might be a large minority of people who will ‘act out’ creating simulations in the hope that they are in one.” A LW user sent me a message directing me to this post, which might help you understand my point: http://lesswrong.com/r/discussion/lw/l18/simulation_argument_meets_decision_theory/

People believing in Islam are very relevant to the chances of the future caliphate. People believing in Islam are not terribly relevant to the chances that in our present we live under the watchful gaze of Allah.

The weird thing about trying to determine good self-locating beliefs when looking at the question of simulations is that you do not get the benefit of self-locating in time like that. We are talking about simulations of worlds/civilizations as they grow and develop into technological maturity. This is why Bostrom called them “ancestor simulations” in the original article (which you might read if you haven’t; it is only 12 pages, and if Bostrom is Newton, I am like a 7th grader half-assing an essay due tomorrow after reading the Wikipedia page).

As for people believing in Allah making it more likely that he exists, I fully agree that that is nonsense. The difference here is that part of the belief in “Am I in a simulation made by people” relies CAUSALLY on whether or not people would ever make simulations. If they would not, the chance is zero. If they would, whether or not they should, the chance is something higher.

For an analogy again, imagine I am trying to determine my credence that the (uncontacted) Sentinelese people engage in cannibalism. I do not know anything about them specifically, but my credence is going to be something much higher than zero, because I am aware that many human civilizations have practiced cannibalism. I have some relevant evidence about human nature and decision-making that lets my knowledge of how other people act put some bounds on my credence about this group. Now imagine I am trying to determine my credence that the Sentinelese engage in widespread coprophagia. Again, I do not know anything about them. However, I do know that no human society has ever been recorded doing this. I can use this information about other peoples’ behavior and thought processes to adjust my credence about the Sentinelese, in this case giving me near certainty that they do not.
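
One conventional way to turn base rates like these into a credence is Laplace's rule of succession, (k+1)/(n+2): the chance the next case has a trait, given that k of n observed cases had it. The society counts below are made up purely for illustration, not real ethnographic data:

```python
from fractions import Fraction

def rule_of_succession(k: int, n: int) -> Fraction:
    """Laplace's rule: credence that the next case has the trait,
    given that k of n observed cases had it (uniform prior)."""
    return Fraction(k + 1, n + 2)

# Hypothetical counts of documented societies, for illustration only:
print(rule_of_succession(30, 200))  # cannibalism: 31/202, well above zero
print(rule_of_succession(0, 200))   # coprophagia: 1/202, near zero but nonzero
```

The point matches the analogy: evidence about many other societies bounds the credence about an unobserved one without ever observing it directly.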

If we know that a bunch of people have beliefs that will lead to them trying to create “ancestor” simulations of humans, then we have more reason to think that a different set of humans have done this already, and we are in one of the simulations.

The probability is non-zero, but it's not affecting any decisions I'm making. I still don't see why the number of one-boxers around should cause me to update this probability to anything more significant.

Do you still not think this after reading this post? Please let me know. I either need to work on communicating this a different way or try to pin down where this is wrong and what I am missing….

Also, thank you for all of the time you have put into this. I sincerely appreciate the feedback. I also appreciate why and how this has been frustrating, re: “cult,” and hope I have been able to mitigate the unpleasantness of this at least a bit.

Replies from: Lumifer
comment by Lumifer · 2015-11-12T17:52:44.045Z · LW(p) · GW(p)

my credence

Why do you talk in terms of credence? In Bayesianism your belief of how likely something is is just a probability, so we're talking about probabilities, right?

I am not saying that my learning information about them is actually changing the odds, just that it is giving me more information with which to make my knowledge of the already-existing world more accurate.

Sure, OK.

Now to bring it back to the point, what are some obstacles to your credence to thinking you are in a simulation?

Aren't you doing some rather severe privileging of the hypothesis?

The world has all kinds of people. Some want to destroy the world (and that should increase my credence that the world will get destroyed); some want electronic heavens (and that should increase my credence that there will be simulated heavens); some want to break out of the circle of samsara (and that should increase my credence that any death will be truly final); some want a lot of beer (and that should increase my credence that the future will be full of SuperExtraSpecialBudLight), etc. etc. And as Egan's Law says, "It all adds up to normality".

want to create simulations with pleasant afterlives

I think you're being very Christianity-centric and Christians are only what, about a third of the world's population? I still don't know why people would create imprecise simulations of those who lived and died long ago.

If some humans want to make simulations of humans, it is possible we are in a simulation made by humans. If humans do not want to make simulations of humans, there is no chance that we are in a simulation made by humans.

Locate this statement on a timeline. Let's go back a couple of hundred years: do humans want to make simulations of humans? No, they don't.

Things change and eternal truths are rare. The future is uncertain, and judgments of what people of the far future might want to do or not do are not reliable.

How could we be in a petri dish? How could we be NPCs in a video game? How would that fit with other observations and existing knowledge?

Easily enough. You assume -- for no good reason known to me -- that a simulation must mimic the real world to the best of its ability. I don't see why this should be so. A petri dish, in a way, is a controlled simulation of, say, the growth of and competition between different strains of bacteria (or yeast, or mold, etc.). Imagine an advanced (post-human or, say, alien) civilization doing historical research through simulations, running A/B tests on XXI-century human history. If we change X, will history go in the Y direction? Let's see. That's a petri dish -- or a video game, take your pick.

When we use what we know about human nature, we have reason to believe that people might make simulations.

That's not a comforting thought. From what I know about human nature, people will want to make simulations where the simulation-makers are Gods.

that there might be a large minority of people who will ‘act out’ creating simulations in the hope that they are in one

And since I two-box, I still say that they can "act out" anything they want, it's not going to change their circumstances.

The difference here is that part of the belief in “Am I in a simulation made by people” relies CAUSALLY on whether or not people would ever make simulations.

Nope, not would ever make, but have ever made. The past and the future are still different. If you think you can reverse the time arrow, well, say so explicitly.

because I am aware that lots of human civilizations

Yes, you have many known to you examples so you can estimate the probability that one more, unknown to you, has or does not have certain features. But...

more reason to think that a different set of humans have done this already

...you can't do this here. You know only a single (though diverse) set of humans. There is nothing to derive probabilities from. And if you want to use narrow sub-populations, well, we're back to privileging the hypothesis again. Lots of humans believe and intend a lot of different things. Why pick this one?

Do you still not think this after reading this post?

Yep, still. If what the large number of people around believe affected me this much, I would be communing with my best friend Jesus instead :-P

why and how this has been frustrating

Hasn't been frustrating at all. I like intellectual exercises in twisting, untwisting, bending, folding, etc.. :-) I don't find this conversation unpleasant.

re: “cult,”

Nah, it's not you who is Exhibit A here :-/

comment by jacob_cannell · 2015-11-07T22:48:41.440Z · LW(p) · GW(p)

Our world is either simulated or not.

Not quite. In the sim case, we along with our world exist as multiple copies - one original along with some number of sims. It's really important to make this distinction, it totally changes the relevant decision theory.

If our world is not simulated, there's nothing we do can make it simulated. We can work towards other simulations, but that's not us.

No - because we exist as a set of copies which always takes the same actions. If we (in the future) create simulations of our past selves, then we are already today (also) those simulations.

Replies from: Lumifer
comment by Lumifer · 2015-11-08T05:34:10.820Z · LW(p) · GW(p)

Not quite.

Whether it's not quite or yes quite depends on whether one accepts your idea of identity as relative, fuzzy, and smeared out over a lot of copies. I don't.

we exist as a set of copies

Do you state this as a fact?

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-08T06:07:09.850Z · LW(p) · GW(p)

Actually the sim argument doesn't depend on fuzzy smeared out identity. The copy issue is orthogonal and it arises in any type of multiverse.

we exist as a set of copies

Do you state this as a fact?

It is given in the sim scenario. I said this in reply to your statement "there's nothing we do can make it simulated".

The statement is incorrect because we are uncertain about our true existential state. And moreover, we have the power to change that state: the first, original version of ourselves can create many other copies.

Replies from: Lumifer
comment by Lumifer · 2015-11-08T06:31:10.553Z · LW(p) · GW(p)

Actually the sim argument doesn't depend on fuzzy smeared out identity.

If the identity isn't smeared then our world -- our specific world -- is either simulated or not.

The statement is incorrect because we are uncertain on our true existential state.

Uncertainty doesn't grant the power to change the status from not-simulated to simulated.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-08T16:28:34.850Z · LW(p) · GW(p)

If the identity isn't smeared then our world -- our specific world -- is either simulated or not.

Sure. But we don't know which copy we are, and all copies make the same decisions.

Uncertainty doesn't grant the power to change the status from not-simulated to simulated.

Each individual copy is either simulated or not, and nothing each individual copy does can change that - true. However, all of the copies output the same decisions, and each copy cannot determine its true existential status.

So the uncertainty is critically important - because the distribution itself can be manipulated by producing more copies. By creating simulations in the future, you alter the distribution by creating more sim copies, making it more likely that one has been a sim the whole time.
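
A minimal numeric sketch of this "manipulate the distribution" point, under simplifying assumptions that are mine rather than anything stated in the thread: a single basement original, n indistinguishable sim copies that all make the same decisions, and indifference over which copy one is:

```python
from fractions import Fraction

def credence_in_sim(n_sims: int) -> Fraction:
    """If one 'basement' original runs n_sims indistinguishable simulations
    of itself, and every copy decides alike, a copy that cannot tell which
    it is should assign this credence to being one of the simulations."""
    return Fraction(n_sims, n_sims + 1)

print(credence_in_sim(0))     # 0: decide never to build sims, credence stays zero
print(credence_in_sim(1000))  # 1000/1001: decide to build many, credence near 1
```

The decision does not flip any individual copy's status; it changes how many copies there are, and therefore where a self-locating credence should land.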

Draw out the graph and perhaps it will make more sense.

It doesn't actually violate physical causality - the acuasality is only relative - an (intentional) illusion due to lack of knowledge.

Replies from: Lumifer
comment by Lumifer · 2015-11-09T16:17:46.625Z · LW(p) · GW(p)

all copies make the same decisions ... all of the copies output the same decisions

All copies might make the same decisions, but the originals make different decisions.

Remember how upthread you talked about copies being relative and imperfect images of the originals? This means that the set of copies and the singleton of originals are different.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-09T18:40:11.451Z · LW(p) · GW(p)

As individual variants they may have slight differences (less so for more advanced sims constructed later), but that doesn't matter.

The 'decision' we are talking about here is an abstract, high-level decision or belief concerning whether one will support the construction of historical sims (financially, politically, etc.). The numerous versions of a person might occasionally make different decisions here and there about exactly what word to use or whatnot, but they will (necessarily by design) agree on major life decisions.

Remember how upthread you talked about copies being relative and imperfect images of the originals?

I never said "imperfect images" - that's your beef.

Replies from: Lumifer
comment by Lumifer · 2015-11-09T19:14:40.456Z · LW(p) · GW(p)

I never said "imperfect images" - that's your beef.

Let me quote you:

Different levels of success require only getting close enough in mindspace, which is highly relative to one's subjective knowledge of the person.

What matters most is consistency. It's not like the average person remembers everything they said a few years ago, so that 10^10 figure is extremely generous. Our memory is actually fairly poor.

There will be multiple versions of past people - just as we have multiple biographies today. Clearly there is some objective sense in which some versions are more authentic, but this isn't nearly as important as you seem to think - and it is far less important than historical consistency with the rest of the world.

Given all this I can't see how you insist that copies make the same decisions as originals. In fact, in your quote you even have different copies making different decisions ("multiple versions").

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-09T19:34:28.422Z · LW(p) · GW(p)

The different versions arise from multiverse considerations. The obvious basic route to sim capture is recreating very close copies that experience everything we remember having experienced - a recreation of our exact specific historical timeline/branch.

But even recreating other versions corresponding to other nearby branches in the multiverse could work and is potentially more computationally efficient. The net effect is the same: it raises the probability that we exist in a sim created by some other version/branch.

So there are two notions of historical 'accuracy'. The first being accuracy in terms of exact match with a specific timeline, the other being accuracy in terms of matching only samples from the overall multiverse distribution.

Success only requires a high total probability that we are in a sim. It doesn't matter much which specific historical timeline creates the sim.

The idea of decision agreement still applies across different versions in the multiverse. It doesn't require exact agreement with every micro decision, only general agreement on the key decisions involving sim creation.

comment by jacob_cannell · 2015-11-06T00:55:53.263Z · LW(p) · GW(p)

Knowledge of which decisions we actually make is information which we can update our worldviews on.

Acausal reasoning seems weird, but it works in practice and dominates classical causal reasoning.

Replies from: Lumifer
comment by Lumifer · 2015-11-06T03:05:51.667Z · LW(p) · GW(p)

Acausal reasoning seems weird, but it works in practice and dominates classical causal reasoning.

What do you mean, "works in practice"?

comment by crmflynn · 2015-11-05T12:55:57.004Z · LW(p) · GW(p)

Simulations of long-ago ancestors..?

Imagine that you have the ability to run a simulation now. Would you want to populate it by people like you, that is, fresh people de novo and possibly people from your parents and grandparents generations -- or would you want to populate it with Egyptian peasants from 3000 B.C.? Homo habilis, maybe? How far back do you want to go?

What the simulation would be like depends entirely on the motivation for running it. That is actually sort of the point of the post. If people want to be in a certain kind of simulation, they should run simulations that conform with that.

No, I don't think so. You're engaging in magical thinking. What you -- or everyone -- believes does not change the reality.

What the people “above” us, if they exist, believe absolutely does change reality.

What Omega believes changes reality. People one-box anyway.

Who the Calvinist God has allegedly predestined determines reality. People go to church, pray, etc. anyway.

If we are “the type of species” who builds simulations that we would like to be in, we are much more likely to be a species by-and-large who inhabits simulations which we want to be in.

Replies from: Lumifer
comment by Lumifer · 2015-11-05T16:50:21.696Z · LW(p) · GW(p)

What the people “above” us, if they exist, believe absolutely does change reality.

And so we are back to the idea of gods.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-06T00:56:11.852Z · LW(p) · GW(p)

Sure - and nothing wrong with that.

comment by Lumifer · 2015-11-07T16:58:38.689Z · LW(p) · GW(p)

This thread did much to clarify to me why some people consider LW a cult.

Replies from: DanArmak
comment by DanArmak · 2015-11-29T09:43:48.991Z · LW(p) · GW(p)

That observation isn't useful to others unless you share your insights.

Replies from: Lumifer
comment by Lumifer · 2015-11-30T17:46:07.356Z · LW(p) · GW(p)

In this case I prefer to wave my hand in the general direction and let readers find (or not) their own evidence.
