Posts

Personal thoughts on careers in AI policy and strategy [x-post EA Forum] 2017-09-27T17:09:34.496Z
Research Assistant positions at the Future of Humanity Institute and Centre for Effective Altruism 2017-06-08T17:20:48.383Z
Stuart Russell's Center for Human Compatible AI is looking for an Assistant Director 2017-04-25T10:21:50.546Z
"On the Impossibility of Supersized Machines" 2017-03-31T23:32:49.988Z
The Future of Humanity Institute is hiring a project manager 2017-01-26T18:19:20.541Z
FHI is accepting applications for internships in the area of AI Safety and Reinforcement Learning 2016-11-07T16:33:00.500Z
Rebuttal piece by Stuart Russell and FHI Research Associate Allan Dafoe: "Yes, the experts are worried about the existential risk of artificial intelligence." 2016-11-03T17:54:07.377Z
The University of Cambridge Centre for the Study of Existential Risk (CSER) is hiring! 2016-10-06T16:53:26.841Z
The Global Catastrophic Risk Institute (GCRI) seeks a media engagement volunteer/intern 2016-09-14T16:42:43.740Z
The Future of Humanity Institute is hiring! 2016-08-18T13:09:13.522Z
Paid research assistant position focusing on artificial intelligence and existential risk 2016-05-02T18:27:01.969Z
Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife 2015-11-02T23:03:50.582Z

Comments

Comment by crmflynn on Paid research assistant position focusing on artificial intelligence and existential risk · 2016-05-04T17:56:37.428Z · LW · GW

My sense from talking with Professor Dafoe is that he is primarily interested in recruiting people based on their general aptitude, interest, and dedication to the issue rather than relying heavily on specific educational credentials.

Comment by crmflynn on Paid research assistant position focusing on artificial intelligence and existential risk · 2016-05-04T17:52:43.460Z · LW · GW

https://www.fhi.ox.ac.uk/vacancies-for-research-assistants/ It was not up on the website at the time you asked, but it is up now.

Comment by crmflynn on Open thread, Nov. 09 - Nov. 15, 2015 · 2015-11-12T09:18:38.812Z · LW · GW

There was some confusion in the comments to my original post “Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife” (http://lesswrong.com/r/discussion/lw/mxu/newcomb_bostrom_calvin_credence_and_the_strange/), which makes me think I was not nearly clear enough in the original. I am sincerely sorry for this. I am also really appreciative of everyone who left such interesting comments despite this. I have added some notes in an update to clarify my argument. I also responded to comments in a way that I hope will further illustrate some of the trickier bits. This should make it more interesting to read and perhaps inspire some more discussion. I was vaguely tempted to repost the original with the update, but thought that was probably bad etiquette. My hope is that anyone who was turned off by it being unclear initially might take a second look at it and the discussion if it seems like an interesting topic to them. Thank you.

Comment by crmflynn on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-12T06:16:11.638Z · LW · GW

I agree with you about the inside / outside view. I also think I agree with you about the characteristics of the simulators in relationship to the simulation.

I think I just have a vaguely different, and perhaps personal, sense of how I would define "divine" and "god." If we are in a simulation, I would not consider the simulators gods. Very powerful people, but not gods. If they tried to argue with me that they were gods because they were made of a lot of organic molecules whereas I was just information in a machine, I would suggest it was a distinction without a difference. Show me the uncaused cause or something outside of physics and we can talk.

Comment by crmflynn on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-12T06:04:25.404Z · LW · GW

Sure, but how is that relevant? There are people who want to accelerate the destruction of the world because that would bring in the Messiah faster -- so what?

By analogy, what are some things that decrease my credence in thinking that humans will survive to a “post-human stage”? For me, some are: 1) We seem terrible at coordination problems at a policy level, 2) We are not terribly cautious in developing new, potentially dangerous, technology, and 3) Some people are actively trying to end the world for religious/ideological reasons. So as I learn more about ISIS and its ideology and how it is becoming increasingly popular, since they are literally trying to end the world, it further decreases my credence that we will make it to a post-human stage. I am not saying that my learning information about them is actually changing the odds, just that it is giving me more information with which to make my knowledge of the already-existing world more accurate. It’s Bayesianism.

For another analogy, my credence for the idea that “NYC will be hit by a dirty bomb in the next 20 years” was pretty low until I read about the ideology and methods of radical Islam and the poor containment of nuclear material in the former Soviet Union. My reading about these people’s ideas did not change anything; however, their ideas are causally relevant, and my knowledge of this factor increases my credence in that possibility.

For one final analogy, if there is a stack of well-shuffled playing cards in front of me, what is my credence that the bottom card is a queen of hearts? 1/52. Now let’s say I flip the top two cards, and they are a 5 and a king. What is my credence now that the bottom card is a queen of hearts? 1/50. Now let’s say I go through the next 25 cards and none of them are the queen of hearts. What is my credence now that the bottom card is the queen of hearts? 1 in 25. The card at the bottom has not changed. The reality is in place. All I am doing is gaining information which helps me get a sense of location. I do want to clarify though, that I am reasoning with you as a two-boxer. I think one-boxers might view specific instances like this differently. Again, I am agnostic on who is correct for these purposes.
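(If it helps, here is that same arithmetic as a tiny Python sketch; the deck size and reveal counts are just the ones from the example above.)

    # Credence that the bottom card is the queen of hearts, given that some
    # cards have been revealed and none of them was the queen.
    def credence_bottom_is_queen(deck_size=52, revealed_non_queens=0):
        # The queen is equally likely to be any of the still-unseen cards.
        return 1.0 / (deck_size - revealed_non_queens)

    print(credence_bottom_is_queen(52, 0))   # 1/52 before flipping anything
    print(credence_bottom_is_queen(52, 2))   # 1/50 after the 5 and the king
    print(credence_bottom_is_queen(52, 27))  # 1/25 after 25 more non-queens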

Now to bring it back to the point, what are some obstacles to your credence to thinking you are in a simulation? For me, the easy ones that come to mind are: 1) I do not know if it is physically possible, 2) I am skeptical that we will survive long enough to get the technology, 3) I do not know why people would bother making simulations.

One and two are unchanged by the one-box/Calvinism thing, but when we realize both that there are a lot of one-boxers, and that these one-boxers, when faced with an analogous decision, would almost certainly want to create simulations with pleasant afterlives, then I suddenly have some sense of why #3 might not be an obstacle.

My issue with this phrasing is that these two (and other) types are solely the product of your imagination. We have one (1) known example of intelligent species. That is very much insufficient to start talking about "types" -- one can certainly imagine them, but that has nothing to do with reality.

I think you are reading something into what I said that was not meant. That said, I am still not sure what that was. I can say the exact thing in different language if it helps. “If some humans want to make simulations of humans, it is possible we are in a simulation made by humans. If humans do not want to make simulations of humans, there is no chance that we are in a simulation made by humans.” That was the full extent of what I was saying, with nothing else implied about other species or anything else.

Which new information?

Does the fact that we construct and play video games argue for the claim that we are NPCs in a video game? Does the fact that we do bio lab experiments argue for the claim that we live in a petri dish?

Second point first. How could we be in a petri dish? How could we be NPCs in a video game? How would that fit with other observations and existing knowledge? My current credence is near zero, but I am open to new information. Hit me.

Now the first point. The new information is something like: “When we use what we know about human nature, we have reason to believe that people might make simulations. In particular, the existence of one-boxers who are happy to ignore our ‘common sense’ notions of causality, for whatever reason, and the existence of people who want an afterlife, when combined, suggest that there might be a large minority of people who will ‘act out’ creating simulations in the hope that they are in one.” An LW user sent me a message directing me to this post, which might help you understand my point: http://lesswrong.com/r/discussion/lw/l18/simulation_argument_meets_decision_theory/

People believing in Islam are very relevant to the chances of the future caliphate. People believing in Islam are not terribly relevant to the chances that in our present we live under the watchful gaze of Allah.

The weird thing about trying to determine good self-locating beliefs when looking at the question of simulations is that you do not get the benefit of self-locating in time like that. We are talking about simulations of worlds/civilizations as they grow and develop into technological maturity. This is why Bostrom called them “ancestor simulations” in the original article (which you might read if you haven’t; it is only 12 pages, and if Bostrom is Newton, I am like a 7th grader half-assing an essay due tomorrow after reading the Wikipedia page).
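(As an aside, for anyone who wants the arithmetic behind that original article, here is a rough Python sketch of its fraction-of-observers reasoning; the variable names and example numbers are mine, not Bostrom’s.)

    # Rough sketch: if a fraction f_p of civilizations at our stage survive to
    # a posthuman stage, and an average posthuman civilization runs n ancestor
    # simulations (each containing roughly as many observers as one real
    # history), then the fraction of all observers like us who live in
    # simulations is approximately:
    def fraction_simulated(f_p, n):
        return (f_p * n) / (f_p * n + 1)

    print(fraction_simulated(0.0, 1e6))   # no one reaches posthumanity -> 0.0
    print(fraction_simulated(0.01, 0))    # survivors never simulate -> 0.0
    print(fraction_simulated(0.01, 1e6))  # even rare simulators -> ~0.9999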

As for people believing in Allah making it more likely that he exists, I fully agree that that is nonsense. The difference here is that part of the belief in “Am I in a simulation made by people” relies CAUSALLY on whether or not people would ever make simulations. If they would not, the chance is zero. If they would, whether or not they should, the chance is something higher.

For an analogy again, imagine I am trying to determine my credence that the (uncontacted) Sentinelese people engage in cannibalism. I do not know anything about them specifically, but my credence is going to be something much higher than zero because I am aware that many human civilizations have practiced cannibalism. I have some relevant evidence about human nature and decision making that allows other knowledge of how people act to put some bounds on my credence about this group. Now imagine I am trying to determine my credence that the Sentinelese engage in widespread coprophagia. Again, I do not know anything about them. However, I do know that no other human society has ever been recorded doing this. I can use this information about other peoples’ behavior and thought processes to adjust my credence about the Sentinelese, in this case giving me near certainty that they do not.

If we know that a bunch of people have beliefs that will lead to them trying to create “ancestor” simulations of humans, then we have more reason to think that a different set of humans have done this already, and we are in one of the simulations.

The probability is non-zero, but it's not affecting any decisions I'm making. I still don't see why the number of one-boxers around should cause me to update this probability to anything more significant.

Do you still not think this after reading this post? Please let me know. I either need to work on communicating this a different way or try to pin down where this is wrong and what I am missing….

Also, thank you for all of the time you have put into this. I sincerely appreciate the feedback. I also appreciate why and how this has been frustrating, re: “cult,” and hope I have been able to mitigate the unpleasantness of this at least a bit.

Comment by crmflynn on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-10T13:29:17.325Z · LW · GW

I don't think this is true. The correct version is your following sentence:

A lot of people on LW do not

People on LW, of course, are not terribly representative of people in general.

LW is not really my personal sample for this. I have spent about a year working this into conversations. I feel as though the split in my experience is something like 2/3 of people two-box. Nozick, who popularized this, said he thought it was about 50/50. While it is again not representative, of the thousand people who answered the question in this survey, the split was about equal (http://philpapers.org/surveys/results.pl). For people with PhDs in philosophy it was 458 two-boxers to 348 one-boxers. While I do not know what the actual number would be if there was a Pew survey, I suspect, especially given the success of Calvinism, magical thinking, etc., that there are a substantial minority of people who would one-box.

What matters, as an empirical matter, is that they exist.

I agree that such people exist.

Okay. Can you see how they might take the approach I have suggested they might? And if yes, can you concede that it is possible that there are people who might want to build simulations in the hope of being in one, even if you think it is foolish?

If we want to belong to the type of species

Hold on, hold on. What is this "type of species" thing? What types are there, what are our options?

As a turn of phrase, I was referring to two types: one that makes simulations meeting this description, and one that does not. It is like when people advocate for colonizing Mars, they are expressing a desire to be “that type of species.” Not sure what confused you here….

And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one.

Nope, sorry, I don't find this reasoning valid.

If you are in the Sleeping Beauty problem (https://wiki.lesswrong.com/wiki/Sleeping_Beauty_problem), and are woken up during the week, what is your credence that the coin has come up tails? How do you decide between the doors in the Monty Hall problem?
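(Monty Hall is actually a nice check on exactly this point: nothing about where the prize sits ever changes, yet the host opening a door should change your credence. A quick simulation sketch, assuming the standard three-door setup:)

    import random

    # The prize's location is fixed before you choose; the host's reveal is
    # just new information, yet switching wins about 2/3 of the time.
    def monty_hall(switch, trials=100_000):
        wins = 0
        for _ in range(trials):
            prize = random.randrange(3)
            pick = random.randrange(3)
            # Host opens a door that is neither your pick nor the prize.
            opened = next(d for d in range(3) if d != pick and d != prize)
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += (pick == prize)
        return wins / trials

    print(monty_hall(switch=False))  # ~0.33
    print(monty_hall(switch=True))   # ~0.67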

I am not asking you to think that the actual odds have changed in real time; I am asking you to adjust your credence based on new information. The order of the cards in the deck has not changed, but now you know which ones have been discarded.

If it turns out simulations are impossible, I will adjust my credence about being in one. If a program begins plastering trillions of simulations across the cosmological endowment with von Neumann probes, I will adjust my credence upward. I am not saying that your reality changes, I am saying that the amount of information you have about the location of your reality has changed. If you do not find this valid, what do you not find valid? Why should your credence remain unchanged?

it will have evidential value still.

Still nope. If you think that people wishing to be in a simulation has "evidential value" for the proposition that we are in a simulation, for what proposition does the belief in, say, Jesus or astrology have "evidential value"? Are you going to cherry-pick "right" beliefs and "wrong" beliefs?

Beliefs can cause people to do things, whether that is going to war or building expensive computers. Why would the fact that some people believe in Salafi Jihadism and want to form a caliphate under ISIS be evidentially relevant to determining the future stability of Syria and Iraq? How can their “belief” in such a thing have any evidential value?

One-boxers wishing to be in a simulation are more likely to create a large number of simulations. The existence of a large number of simulations (especially if they can nest their own simulations) makes it more likely that we are not at a “basement level” but instead are in a simulation, like the ones we create. Not because we are creating our own, but because it suggests the realistic possibility that our world was created a “level” above us. This is just about self-locating belief. As a two-boxer, you should have no sense that people in your world creating simulations means any change in your world’s current status as simulated or unsimulated. However, you should also update your own credence from “why would I possibly be in a simulation” to “there is a reason I might be in a simulation.” Same as if you were currently living in Western Iraq, you should update your credence from “why should I possibly leave my house, why would it not be safe” to “right, because there are people who are inspired by belief to take actions which make it unsafe.” Your knowledge about others’ beliefs can provide information about certain things that they may have done or may plan to do.

Comment by crmflynn on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-10T12:31:55.979Z · LW · GW

First of all, there seem to be lots of ways in which we might fail to develop such technology. We might go extinct or our civilization collapse or something of the kind (outright extinction seems really unlikely, but collapse of technological civilization much more likely). It might turn out that computational superpowers just aren't really available -- that there's only so much processing power we have any realistic way of harnessing. It might turn out that such things are possible but we simply aren't smart enough to find our way to them.

Absolutely. I think this is where this thing most likely fails: somewhere in the first disjunct. My gut does not think I am in a simulation, and while that is not at all a valid way to acquire knowledge, it leans me heavily toward this view.

Second, if we (or more precisely our successors, whoever or whatever they are) develop such computational superpowers, why on earth use them for ancestor simulations? In this sort of scenario, maybe we're all living in some kind of virtual universe; wouldn't it be better to make other minds like ours sharing our glorious virtual universe rather than grubbily simulating our ancestors in their grotty early 21st-century world?

So I am not saying that they WOULD do it, but I can think of a lot of pretty compelling reasons why they MIGHT. If the people who are around then are at all like us, then I think that a subset of them would likely do it for the one-boxer reasons I mentioned in the first post (which I have since updated with a note at the bottom to clarify some things I should have included in the post originally). Whether or not their intuitions are valid, there is an internal logic, based on these intuitions, which would push for this. Reasons include hedging against the teletransportation paradox (which also applies to self-uploading) and hoping to increase their credence in an afterlife in which those already dead can join in. This is clearer, I think, in my update. The main confusion is that I am not talking about attempting to simulate or recreate specific dead people, which I do not think is possible. The key to my argument is to create self-locating doubt.

Also, in my argument, the people who create the simulation are never joined with the people in the simulation. These people stay in their simulation computer. The idea is that we are “hoping” we are similarly in a simulation computer, and have been the whole time, and that when we die, we will be transferred (whole) into the simulation’s afterlife component along with everyone who died before us in our world. Should we be in a simulation, and yet develop some sort of “glorious virtual universe” that we upload into, there are several options. Two that quickly come to mind: 1) We might stay in it until we die, then go into the afterlife component, 2) We might at some point be “raptured” by the simulation out of our virtual universe into the existent “glorious virtual afterlife” of the simulation computer we are in.

As it is likely that the technology for simulations will come about at about the same time as that for a “glorious virtual universe,” we could even treat it as our last big hurrah before we upload ourselves. This makes sense, as the people who exist when this technology becomes available will know a large number of loved ones who just missed it. They will also potentially be in especially imminent fear of the teletransportation paradox. I do not think there is any inherent conflict between doing both of these things.

Someone else -- entirelyuseless? -- observed earlier in the thread that some such simulation might be necessary in order to figure out enough about our ancestors' minds to simulate them anywhere else, so it's just possible that grotty 21st-century ancestor sims might be a necessary precursor to glorious 25th-century ancestor sims; but why ancestors anyway? What's so special about them, compared with all the other possible minds?

Just to be clear, I am not talking about our actual individual ancestors. I actually avoided using the term intentionally as I think it is a bit confusing. I am pretty sure this is how Bostrom meant it as well in the original paper, with the word “ancestor” being used in the looser sense, like how we say “Homo erectus were our ancestors.” That might be my misinterpretation, but I do not think so. While I could be convinced, I am personally, currently, very skeptical that it would be possible to do any meaningful sort of replication of a person after they die. I think the only way that someone who has already died has any chance of an afterlife is if we are already in a simulation. This is also why my personal, atheistic mind could be susceptible to donating to such a cause when in grief. I wrote an update at the bottom of my original post where I clarify this. The point of the simulation is to change our credence regarding our self-location. If the vast majority of “people like us” (which can be REALLY broadly construed) exist in simulations with afterlives, and do not know it, we have reason to think we might also exist in such a simulation. If this is still not clear after the update, please let me know, as I am trying to pin down something difficult and am not sure if I am continuing to privilege brevity to the detriment of clarity.

Third, supposing that we have computational superpowers and want to simulate our ancestors, I see no good reason to think it's possible. The information it would take to simulate my great-great-grandparents is dispersed and tangled up with other information, and figuring out enough about my great-great-grandparents to simulate them will be no easier than locating the exact oxygen atoms that were in Julius Caesar's last breath. All the relevant systems are chaotic, measurement is imprecise, and surely there's just no reconstructing our ancestors at this point.

I agree with your point so strongly that I am a little surprised to have been interpreted as meaning this. I think that it seems theoretically feasible to simulate a world full of individual people as they advance their way up from simple stone tools onward, each with their own unique life and identity, each existing in a unique world with its own history. Trying to somehow make this the EXACT SAME as ours does not seem at all possible. I also do not see what the advantage of it would be, as it is not more informative or helpful for our purposes to know whether we are the same as the people above us, so why would we try to “send that down” below us? We do not care about that as a feature of our world, and so would have no reason to try to instill it in the worlds below us. There is sort of a “golden rule” aspect to this in that you do to the simulation below you the best feasible, reality-conforming version of what you want done to you.

Fourth, it seems quite likely that our superpowered successors, if we have them, will be no more like us than we are like chimpanzees. Perhaps you find it credible that we might want to simulate our ancestors; do you think we would be interested in simulating our ancestors 5 million years ago who were as much like chimps as like us?

Maybe? I think that one of the interesting parts about this is where we would choose to draw policy lines around it. Do dogs go to the afterlife? How about fetuses? How about AI? What is heaven like? Who gets to decide this? These are all live questions. It could be that they take a consequentialist hedonistic approach that is mostly neutral about “who” gets the heaven. It could be that they feel obligated to go back further, in gratitude to all those (“types”) who worked for advancement as a species and made their lives possible. It could be that we are actually not too far from superintelligent AI, and that this is going to become a live question in the next century or so, in which case “we” are that class of people they want to simulate in order to increase their credence of others similar to us (their relatives, friends who missed the revolution) being simulated.

As far as how far back you bother to simulate people, it might actually be easier to start off with some very small bands of people in a very primitive setting than to try to go through and make a complex world for people to “start” in without the benefit of cultural knowledge or tradition. It might even be that the “first people” are based on some survivalist hobby back-to-basics types who volunteered to be emulated, copied, and placed in different combinations in primitive earth environments in order to live simple hunter-gatherer lives and have their children go on to populate an earth (possible date of start? https://en.wikipedia.org/wiki/Population_bottleneck). That said, this is deep into the weeds of extremely low-probability speculation. Fun to do, but increasingly meaningless.

Comment by crmflynn on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-10T11:21:44.175Z · LW · GW

the proposal here

I just want to clarify in case you mean my proposal, as opposed to the proposal by jacobcannell. This is my reading of what jacobcannell said as well, but it is not at all a part of my argument. In fact, while I would be interested in reading jacobcannell’s thoughts on identity and the self, I share the same skeptical intuitions as other posters in this thread about this. I am open to being wrong, but on first impression I have an extremely difficult time imagining that it will be at all possible to simulate a person after they have died. I suspect that it would be a poor replica, and certainly would not contain the same internal life as the person. Again, I am open to being convinced, but nothing about that makes sense to me at the moment.

I think that I did a poor job of making this clear in my first post, and have added a short note at the end to clarify this. You might consider reading it as it should make my argument clearer.

My proposal is far less interesting, original, or involved than this, and drafts off of Nick Bostrom’s simulation argument in its entirety. What I was discussing was making simulations of new and unique individuals. These individuals would then have an afterlife after dying in which they would be reunited with the other sims from their world to live out a subjectively long, pleasant existence in their simulation computer. There would not be any attempt to replicate anyone in particular or to “join” the people in their simulation through a brain upload or anything else. The interesting and relevant feature would be that the creation of a large number of simulations like this, especially if these simulations could and did create their own simulations like this too, would increase our credence that we were not actually at the “basement level” and instead were ourselves in a simulation like the ones we made. This would increase our credence that dead loved ones had already been shifted over into the afterlife, just as we shift people in the sims over into an afterlife after they die. This also circumvents teletransportation concerns (which would still exist if we were uploading ourselves into a simulation of our own!) since everything we are now would just be brought over to the afterlife part of the simulation fully intact.

Comment by crmflynn on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-10T10:59:27.620Z · LW · GW

I hope I am not intercepting a series of questions when you were only interested in gjm’s response, but I enjoyed your comment and wanted to add my thoughts.

I think that the problem with this sort of arguments is that it's like cooperating in prisoner's dilemma hoping that superrationality will make the other player cooperate: It doesn't work.

I am not sure it is settled that it does not work, but I also do not think that most, or maybe any, of my argument relies on an assumption that it does. The first part of it does not even rely on an assumption that one-boxing is reasonable, let alone correct. All it says is that so long as some people play the game this way, as an empirical, descriptive reality of how they actually play, we are more likely to see certain outcomes in situations that look like Newcomb. This looks like Newcomb.

There is also a second argument further down that suggests that under some circumstances with really high reward, and relatively little cost, that it might be worth trying to “cooperate on the prisoner’s dilemma” as a sort of gamble. This is more susceptible to game theoretic counterpoints, but it is also not put up as an especially strong argument so much as something worth considering more.

It seems that lots of people here conflate Newcomb's problem, which is a very unusual single-player decision problem, with prisoner's dilemma, which is the prototypical competitive game from game theory.

I am pretty sure I am not doing that, but if you wanted to expand on that, especially if you can show that I am, that would be fantastic.

Also, I don't see why I should consider an accurate simulation of me, from my birth to my death, ran after my real death as a form of afterlife. How would it be functionally different than screening a movie of my life?

So, just to be clear, this is not my point at all. I think I was not nearly clear enough on this in the initial post, and I have updated it with a short-ish edit that you might want to read. I personally find the teletransportation paradox pretty paralyzing, enough so that I would have sincere brain-upload concerns. What I am talking about is simulations populated by non-specific, unique people. After death, these people would be “moved” fully intact into the afterlife component of the simulation. This circumvents teletransportation. Having the vast majority of people “like us” exist in simulations should increase our credence that we are in a simulation just as they are (especially if they can run simulations of their own, or think they are running simulations of their own). The idea is that we will have more reason to think it likely that one-boxer/altruist/acausal trade types “above” us have similarly created many simulations, of which we are one. Us doing it here should increase our sense that people “like us” have done it “above” us.

Comment by crmflynn on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-10T10:06:58.352Z · LW · GW

I think that this sort of risks being an argument about a definition of a word, as we can mostly agree on the potential features of the set-up. But because I have a sense that this claim comes with an implicit charge of fideism, I’ll take another round at clarifying my position. Also, I have written a short update to my original post to clarify some things that I think I was too vague on in the original post. There is a trade-off between being short enough to encourage people to read it, and being thorough enough to be clear, and I think I under-wrote it a bit initially.

Beings who created the world and are not of this world

They did not really “create” this world so much as organized certain aspects of the environment. Simulated people still exist in a physical world, albeit as things in a computer. The fact that the world, as the simulated people conceive of it, is not what it appears to be happens to us as well when we dig into physics and everything becomes weird and unfamiliar. If I am in the environment of a video game, I do not think that anyone has created a different world, I just think that they have created a different environment by arranging bits of pre-existing world.

Beings who are not bound by the rules of this world (from the inside view they are not bound by physics and can do bona fide miracles)

Is something a miracle if it can be made clear in physical terms how it happened? If there is a simulation, then the physics is a replica of physics, and “defying” it is not really any more miraculous than me breaking Mars off of a diorama of the solar system.

Beings who can change this world at will.

Everyone can do that. I do that by moving a cup of coffee from one place to another. In a more powerful sense, political philosophers have dramatically determined how humans have existed over the last 150 years. Human will shapes our existences a great deal already.

These beings look very much like gods to me. The "not bound by our physics", in particular, decisively separates them from sims who, of course, do affect their world in many ways.

I think that for you, “gods” emerge as a being grows in power, whereas I tend to think that divinity implies something different not just in scale, but in type. This might just be a trivial difference in opinion or definition or approach to something with no real relevance.

So if I live in a world that creates simulations, it makes me think it is more likely that I am in a simulation.

Makes you think so, but doesn't make me think so. Again, this is the core issue here.

I agree with you that this is the core issue. What I think you might be missing, though I could be wrong, is that I am agnostic on this point in the post, being careful to keep my own intuition out of it. I am not saying that one-boxers believing this necessarily has any effect on our current, existent reality. What I am saying is two things: 1) Some one-boxers think that it does, and accordingly will be more likely to push for simulations, and 2) Knowing that some people will be likely to push for simulations should make even two-boxers think that it is more likely we are in one. If the world were made up exclusively of two-boxers, it would be less likely that people would try to create simulations with heaven-like afterlives. If the world were all one-boxers, it would be more likely. As we are somewhere in between, our credence should be somewhere in between. This is just about making an educated guess about human nature based on how people interact with similar problems. Since human nature is potentially causal on whether or not there are simulations, information that changes our views on the likelihood of a decision one way or another on simulations is relevant to our credence.

I am looking at what happens if one-boxer types decide they want a simulated afterlife.

One-boxers want it today, right now? Um, nothing happens.

Whether one-boxers here, today, want it is not really the relevant consideration, especially to a two-boxer. However, if there are a lot of one-boxers who make a lot of simulations, it should increase the two-boxer’s credence that he or she is in a simulation created by a one-boxer “one level up.” As a two-boxer, the relevant thing is not that THESE one-boxers are causing anything, but that the existence of people who do this might suggest the existence of people who have done this before, “one level up.”

Comment by crmflynn on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-10T09:27:38.113Z · LW · GW

This is a really fascinating idea, particularly the aspect that we can influence the likelihood we are in a simulation by making it more likely that simulations happen.

Maybe? We can increase our credence, but I think whether or not it increases the likelihood is an open question. The intuitions seem to split between two-boxers and a subset of one-boxers.

That said, thank you for the secondary thought experiment, which is really interesting.

Comment by crmflynn on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-05T13:37:35.860Z · LW · GW

My opinion, for what it's worth, is that either version of A is very much more likely than either version of B for multiple reasons, and that widespread interest in ideas like the one in this post would give only very weak evidence for A over B. So enthusiastic takeup of the ideas in this post would justify at most a tiny increase in our credence in an afterlife.

I wonder if you might expand on your thoughts on this a bit more. I tend to think that the odds of being in a simulation are quite low as well, but for me the issue is more the threat of extinction than a lack of will.

I can think of some reasons why, even if we could build such simulations, we might not, but I feel that this area is a bit fuzzy in my mind. Some ideas I already have: 1) issues with the theory of identity, 2) issues with theory of mind, 3) issues with theory of moral value (creating lots of high-quality lives not seen as valuable, antinatalism, problem of evil), 4) self-interest (more resources for existing individuals to upload into and utilize), 5) the existence of a convincing two-boxer “proof” of some sort.

I would also like to know why an “enthusiastic takeup of the ideas in this post” would not increase your credence significantly. I think there is a very large chance of these ideas not being taken up enthusiastically, but if they were, I am not sure what, aside from extinction, would undermine them. If we get to the point where we can do it, and we want to do it, why would we not do it?

Thank you in advance for any insight, I have spent too long chewing on this without much detailed input, and I would really value it.

Comment by crmflynn on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-05T13:12:08.292Z · LW · GW

Consider a different argument.

Our world is either simulated or not.

If our world is not simulated, there's nothing we do can make it simulated. We can work towards other simulations, but that's not us.

If our world is simulated, we are already simulated and there's nothing we can do to increase our chance of being simulated because it's already so.

I am guessing you two-box in the Newcomb paradox as well, right? If you don’t, then you might take a second to realize you are being inconsistent.

If you do two-box, realize that a lot of people do not. A lot of people on LW do not. A lot of philosophers who specialize in decision theory do not. It does not mean they are right, it just means that they do not follow your reasoning. They think that the right answer is to one-box. They take an action, later in time, which does not seem causally determinative (at least as we normally conceive of causality). They may believe in retrocausality, they may believe in a type of ethics in which two-boxing would be a type of cheating or free-riding, they might just be superstitious, or they might just be humbling themselves in the face of uncertainty. For purposes of this argument, it does not matter. What matters, as an empirical matter, is that they exist. Their existence means that they will ignore or disbelieve that “there’s nothing we can do to increase our chance of being simulated” like they ignore the second box.

If we want to belong to the type of species where the vast majority of the species exists in simulations with a long-duration, pleasant afterlife, we need to be the “type of species” who builds large numbers of simulations with long-duration, pleasant afterlives. And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one. Pending acausal trade considerations (probably for another post), two-boxers, and likely some one-boxers, will not think that their actions are causing anything, but it will have evidential value still.

Comment by crmflynn on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-05T12:55:57.004Z · LW · GW

Simulations of long-ago ancestors..?

Imagine that you have the ability to run a simulation now. Would you want to populate it by people like you, that is, fresh people de novo and possibly people from your parents and grandparents generations -- or would you want to populate it with Egyptian peasants from 3000 B.C.? Homo habilis, maybe? How far back do you want to go?

What the simulation would be like depends entirely on the motivation for running it. That is actually sort of the point of the post. If people want to be in a certain kind of simulation, they should run simulations that conform with that.

No, I don't think so. You're engaging in magical thinking. What you -- or everyone -- believes does not change the reality.

What the people “above” us, if they exist, believe absolutely does change reality.

What Omega believes changes reality. People one-box anyway.

Who the Calvinist God has allegedly predestined determines reality. People go to church, pray, etc. anyway.

If we are “the type of species” who builds simulations that we would like to be in, we are much more likely to be a species by-and-large who inhabits simulations which we want to be in.

Comment by crmflynn on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-05T12:46:33.090Z · LW · GW

I think you and I might be missing one another. Or that I am at least missing your point. Accordingly, my responses below might be off point. Hopefully they are not.

“Keep in mind that the "simulation hypothesis" is also known as "creationism". In particular it implies that there are beings who constructed the simulation, who are not bound by its rules, and who can change it at will. The conventional name for such beings is "gods".”

I don’t think that necessarily follows. Creationism implies divinity, and gods implies something bigger than people who build a machine. Are your parents gods for creating you? In my own estimate, creating a simulation is like founding a sperm bank; you are not really “creating” anything, you are just moving pieces around in a way that facilitates more lives. You can mess around with the life and the world, but so can anyone in real life, especially if they have access to power, or guns, or a sperm bank, again, for that matter. It is different in scale, but not in type. Then again, I might be thinking too highly of “gods”?

Also, I get the impression, and apologies if I am wrong, that you are mostly trying to show “family resemblance” with something many of us are skeptical of or dislike. I am atheist myself, and from a very religious background which leaves me wary. However, I think it is worth avoiding a “clustering” way of thinking. If you don’t want to consider something because of who said it, or because it vaguely or analogously resembles something you dislike, you can miss out on some interesting stuff. I think I avoided AI, etc. too long because I thought I did not really like “computer things” which was a mistake that cost me some great time in some huge, wide open, intellectual spaces I now love to run around in.

“I would treat is as a category error: ideas are not evidence. Even if they look "evidence-like"”

I might be missing what you are saying, but I do not think I was saying that ideas were evidence. I was saying a group of people rallying around an idea could be a form of evidence. In this case, the “evidence” is that a lot of people might want something. What this is evidence of is that them wanting something makes it more likely that it will come about. I am not sure how this would fail as evidence.

“Why would future superpowerful people be interested in increasing your credence?”

Two things: 1) They are not interested in the credence of people in the simulations; they are interested in their own credence. So if I live in a world that creates simulations, it makes me think it is more likely that I am in a simulation. If I know that 99% of all simulations are good ones, it makes me think I am more likely in a world with good simulations. If I know that 90% of simulations are terrible, I am more likely to think that I am in a terrible simulation. The odd thing is that people are sort of creating their own evidence. This is why I mentioned Calvinism and “irresistible grace” as analogy. Also Newcomb. Creating nice simulations in the hopes of being in one is like taking one box, or attending Calvinist church regularly and abiding by the doctrines. More to the point, for people who two-box and roll their eyes at Calvinists, knowing that there are Calvinists means that we know that some people might try to make simulations in order to try to be in one.

2) I am not sure where “superpowerful” comes from here. I think you might be making assumptions about my assumptions. These simulations might be left unobserved. They might be made by von Neumann probes on distant Dyson spheres. I actually think that people motivated by one-boxing/Calvinist type interpretations are more likely to try to keep simulations unmolested.

“Remember, this is ground well-trodden by theology. There the question is formulated as "Why doesn't God just reveal Himself to us instead leaving us in doubt?".”

I don’t think the question is the same. In particular, I am not solving for “why has god not revealed himself” or even “why haven’t I been told I am in a simulation.” I am just pulling at the second disjunct and its implications. In particular, I am looking at what happens if one-boxer types decide they want a simulated afterlife.

Why would people run simulations? Maybe research or entertainment (suggested in the original article). Maybe to fulfill (potentially imaginary) acausal trade conditions (I will probably post on this later). Maybe altruism. Maybe because they want to believe they are in a simulation, and so they make the simulation look just like their world looks, but add an afterlife. They do this in the hopes that it was done “above” them the same way, and they are in such a simulation. They do it in the hopes of being self-fulfilling, or performative, or for whatever reason people one-box and believe in Calvinism.

Comment by crmflynn on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-04T02:46:12.751Z · LW · GW

I am not sure it matters when it comes. Presumably, unless we go extinct some other way first, it will come at some point. When it comes, it is likely that the technology will not be a problem for it. Once the technology exists, and probably before, we may need to figure out if and how we want to do simulations. If people have a clear, well developed, and strong preference going into it (including potentially putting it into the AI as a requirement for its modeling of humanity, or it being a big enough “movement” to show up in our CEV), that will likely have a large effect on the odds of it happening. Also, I know some people who sincerely think belief in god is based almost exclusively on fear of death. I am skeptical of this, but if it is true, or even partially true, and if even a fraction of the fervor/energy/dedication that is put into religion were put into pushing for this, I think it might be a serious force.

The point about credence is just a point about it being interesting, decision making aside, that something as fickle as collective human will might determine if I “survive” death, and if all my dead loved ones will as well. So, for example, if this post, or someone building off of my post but doing it better, were to explode on LW and pour out into reddit and the media, it should increase our credence in an afterlife. If its reception is lukewarm, decrease it. There is something really weird about that, and worth chewing on.

Also, I think that people’s motivation to have an afterlife seems like a more compelling reason to create simulations than experimentation/entertainment, so it helps shift credence around among the four disjuncts of the simulation argument.

Comment by crmflynn on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-04T02:19:09.712Z · LW · GW

When you say you believe this, do you mean you believe it to be the case, or you believe it to be a realistic possibility?

I stumbled across Tipler when reading up on the simulation argument, and it inspired further “am I being a crackpot” self-doubt, but I don’t think this argument looks much like his. Also, I am not really trying to promote it so much as to feel it out. I have not yet found any reason to think I am wrong about it being a possibility, though I myself do not “feel” it to be likely. That said, with stuff like this, I have no sense that intuitions would tell me anything useful either.

“Despite that, the general idea of mind uploading into virtual afterlife appears to be pretty mainstream now in transhumanist thought (ie Turing Church).”

Yeah, it comes up in “Superintelligence” and some other things I have read too. The small difference, if there is one, is that this looks backwards, and could be a way to collect those who have already died, and also could be a way to hedge bets for those of us who may not live long enough for transhumanism. It also circumvents the teletransportation paradox and other issues in the philosophy of identity. Also, even when not being treated as a goal, it seems to have evidential value. Finally, there are some acausal trade considerations, and considerations with “watering down” simulations through AI “thought crimes,” that can be considered once this is brought in. I will probably post more of my tentative thoughts on that later.

“I think it's fun stuff to discuss, but it has a certain stigma and is politically unpopular to some extent with the x-risk folks. I suspect this may have to do with Tipler's heavily Christian religious spin on the whole thing. Many futurists were atheists first and don't much like the suspicious overlap with Christian memes (resurrection, supernatural creators, 'saving' souls, etc)”

The idea of posting about something that is unpopular on such an open-minded site is one of the things that makes me scared to post online. Transhumanism, AI risk (“like the Terminator?”), one-boxing the Newcomb paradox: LW seems pretty good at getting past some initial discomfort to dig deeper. I had actually once heard a really short thing about “The Singularity” on the radio, which could have been a much earlier introduction to all this, but I sort of blew it off. Stuff like my past flippancy makes me inclined to try to avoid trusting my gut, and superficial reasons to ignore something, and to try to take a really careful approach to deconstructing arguments. I am also atheist, and grew up very religiously Christian, so I think I also have a strong suspicion of and aversion to its approach. But again, I try not to let superficial or familial similarity to things interrupt a systematic approach to reality. I am currently trying to transition from doing on-the-ground NGO work in developing countries in order to work on this stuff. My gut hates this, and my availability bias is doing backflips, but I think that this stuff might be too important to take the easy way out of it.

Also, your point about the hook is absolutely correct. I was sort of trying to imitate the “catchy” salon/huffpost/buzzfeed headline that would try to draw people in. “Ten Ways Atheists Go to Heaven, You Won’t Believe #6!” It was also meant a bit self-deprecatingly.

“There are also local considerations which may dominate for most people - resurrection depends on future generosity which is highly unlikely to be uniform and instead will follow complex economics. "Be a good, interesting, and future important person" may trump x-risk for many people that can't contribute to x-risk much directly.”

Yeah, there is a lot here. What is so weird about the second disjunct is that it means that we sort of do this or fail at this as a group. And it means that, while lying on my deathbed, my evaluation of how well we are doing as a species is going to play directly on my credence of what, if anything, comes next. It’s strange, isn’t it? That said, it is also interesting that, even if we somehow knew that existential risk would not be a problem in our lifetime, with this, there is a purely selfish reason to donate to FHI/MIRI. In fact, with the correct sense of scale, with high enough odds and marginal benefit to donations, it could be the economically rational thing to do.

Comment by crmflynn on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-04T01:41:29.812Z · LW · GW

I would not worry about that for three reasons: 1) I am very shy online. Even posting this took several days and I did not look at the comments for almost a day after. 2) I am bringing this here first to see if it is worth considering, and also because I want input not only on the idea, but on the idea of spreading it further. 3) I would never identify myself with MIRI, etc. not because I would not want to be identified that way, but because I have absolutely not earned it. I also give everyone full permission to disavow me as a lone crackpot as needed should that somehow become a problem. That said, thank you for bringing this up as a concern. I had already thought about it, which is one of the reasons I was mentioning it as a tentative consideration for more deliberation by other people. That said, had I not, it could have been a problem. A lot of stuff in this area is really sensitive, and needs to be handled carefully. That is also why I am nervous to even post it.

All of that said, I think I might make another tentative proposal for further consideration. I think that some of these ideas ARE worth getting out there to more people. I have been involved in International NGO work for over a decade, studied it at university, and have lived and worked in half a dozen countries doing this work, and had no exposure to Effective Altruism, FHI, Existential Risk, etc. I hang out in policy/law/NGO circles, and none of my friends in these circles talk about it either. These ideas are not really getting out to those who should be exposed to them. I found EA/MIRI/Existential Risk through the simulation argument, which I read about on a blog I found off of reddit while clicking around on the internet about a year ago. That is kind of messed up. I really wish I had stumbled onto it earlier, and I tentatively think there is a lot of value in making it easier for others to stumble onto it into the future. Especially policy/law types, who are going to be needed at some point in the near future anyway.

I also feel that the costs of people thinking that people have “weird ideas” should probably be weighed against the benefits of flying the flag for other like-minded people to see. For the most part, people not liking other people is not much different than them not knowing about them, but having allies and fellow-travelers adds value. It is more minds to attack difficult problems at more angles, more policy makers listening when it is time to make some proposals, and it is more money finding its way into MIRI/FHI/etc. It might be worth trying to make existential risk a more widely known concern, a bit like climate change. It would not necessarily even have to water down LW, as it could be that those interested in the LW approach will come here, and those from other backgrounds, especially less technical backgrounds, find lateral groups. In climate change now, there are core scientists, scientists who dabble, and a huge group of activist types/policy people/regulators with little to no interest in the science who are sort of doing their own thing laterally to the main guys.

Comment by crmflynn on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-04T01:16:42.308Z · LW · GW

Thank you for your comment, and for taking a skeptical approach towards this. I think that trying to punch holes in it is how we figure out if it is worth considering further. I honestly am not sure myself.

I think that my own thoughts on this are a bit like Bostrom's skepticism of the simulation hypothesis, where I do not think it is likely, but I think it is interesting, and it has some properties I like. In particular, I like the “feedback loop” aspect of it being tied into metaphysical credence. The idea that the more people buy into an idea, the more likely it seems that it “has already happened” shows some odd properties of evidence. It is a bit like if I were standing outside of the room where people go to pick up the boxes that Omega dropped off. If I see someone walk out with two unopened boxes, I expect their net wealth has increased by ~$1,000; if I see someone walk out with one unopened box, I expect it to have increased by ~$1,000,000. That is sort of odd, isn’t it? If I see a small, dedicated group of people working on how they would structure simulations, and raising money and trusts to push it a certain political way in the future (laws requiring all simulated people get a minimum duration of afterlife meeting certain specifications, no AIs simulating human civilization for information-gathering purposes without “retiring” the people to a heaven afterward, etc.), I have more reason to think I might get a heaven after I die.
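(A quick sketch of why the observation works out that way, assuming the usual Newcomb amounts of $1,000 in the transparent box and $1,000,000 in the opaque box, and a predictor that is right with some probability; the amounts are the standard ones, the code is just mine.)

    # Expected wealth gain of someone you watch leave the room, given what
    # they did and how accurate the predictor is.
    def expected_gain(one_boxed, predictor_accuracy):
        p = predictor_accuracy
        if one_boxed:
            # The opaque box holds $1,000,000 only if they were predicted to one-box.
            return p * 1_000_000
        else:
            # Two-boxers always get the $1,000; the opaque box is full only if
            # the predictor got them wrong.
            return 1_000 + (1 - p) * 1_000_000

    print(expected_gain(one_boxed=True, predictor_accuracy=0.99))   # ~990,000
    print(expected_gain(one_boxed=False, predictor_accuracy=0.99))  # ~11,000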

As far as the “call to action,” I hope that my post was not really read that way. I might have been clearer, and I apologize. I think that running simulations followed by an afterlife might be a worthwhile thing to do in the future, but I am not even sure it should be done, for many reasons. It is worth discussing. One could also imagine that it might be determined, if we overcome and survive the AI intelligence explosion with a good outcome, that it is a worthwhile goal to create more human lives, which are pleasant, throughout our cosmological endowment. Sending off von Neumann probes to build simulations like this might be a live option. Honestly, it is an important question to figure out what we might want from a superintelligent AI, and especially whether we might want to not just hand it the question. Coherent extrapolated volition sounds like the best tentative idea, but one we need to be careful with. For example, AI might only be able to produce such a “model” of what we want by running a large number of simulated worlds (to determine what we are all about). If we want simulated worlds to end with a “retirement” for the simulated people in a pleasant afterlife, we might want to specify it in advance; otherwise we are inadvertently reducing the credence we have of our own afterlife as well. Also, if there is an existent acausal trade regime on heaven simulations (this will be another post later), we might get in trouble for not conforming in advance.

As far as simulated hell, I think that fear of this as a possibility keeps the simulated heaven issue even more alive. Someone who would like a pleasant afterlife (which is probably almost all of us) might want to take efforts early to secure that such an afterlife is the norm in cases of simulation, and “hell” absolutely not permitted. Also, the idea that some people might run bad afterlives should probably further motivate people to try to create as many good simulations as possible, to increase credence that “we” are in one of the good ones. This is like pouring white marbles into the urn to reduce the odds of drawing the black one. You see why the “loop” aspect of this can be kind of interesting, especially for one-boxer types, who try to “act out” the correct outcome after-the-fact. For one-boxers, this could be, from a purely and exclusively selfish perspective, the best thing they could possibly do with their life. Increasing the odds of a trillion-life-duration afterlife of extreme utility from 0.001 to 0.01 might be very selfishly rational.
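(To make that last sentence concrete, a toy expected-value sketch; the 0.001 and 0.01 are the numbers from the sentence above, and the payoff is just an illustrative stand-in for “trillion-life-duration,” not an estimate I am defending.)

    # Toy comparison: a small shift in credence times an enormous payoff can
    # dwarf the value of one ordinary lifetime.
    ordinary_life_years = 80
    afterlife_years = 1e12  # "trillion-life-duration", taken loosely as years

    def expected_years(p):
        return p * afterlife_years

    gain = expected_years(0.01) - expected_years(0.001)
    print(gain)                        # 9e9 expected years from the credence shift
    print(gain / ordinary_life_years)  # equivalent to over a hundred million lifetimes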

I am not trying to "sell" this, as I have not even bought it myself, I am just sort of playing with it as a live idea. If nothing else, this seems like it might have some importance on considerations going forward. I think that people’s attitudes and approaches to religion suggest that this might be a powerful force for human motivation, and the second disjunct of the simulation argument shows that human motivation might have significant bearing both on our current reality, and on our anticipated future.

Comment by crmflynn on Welcome to Less Wrong! (8th thread, July 2015) · 2015-11-02T02:30:20.628Z · LW · GW

I have been lurking around LW for a little over a year. I found it indirectly through the Simulation Argument > Bostrom > AI > MIRI > LW. I am a graduate of Yale Law School, and have an undergraduate degree in Economics and International Studies focusing on NGO work. I also read a lot, but in something of a wandering path that I realize can and should be improved upon with the help, resources, and advice of LW.

I have spent the last few years living and working in developing countries around the world in various public interest roles, trying to find opportunities to do high-impact work. This was based around a vague and undertheorized consequentialism that has been pretty substantially rethought after finding FHI/MIRI/EA/LW etc. Without knowing about the larger effective altruism movement (aside from vague familiarity with Singer, QALY cost effectiveness comparisons between NGOs, etc.) I had been trying to do something like effective altruism on my own. I had some success with this, but a lot of it was just the luck of being in the right place at the right time. I think that this stuff is important enough that I should be approaching it more systematically and strategically than I had been. In particular, I am spending a lot of time moving my altruism away from just the concrete present and into thinking about “astronomical waste” and the potential importance of securing the future for humanity. This is sort of difficult, as I have a lot of experiential “availability” from working on the ground in poor countries which pulls on my biases, especially when faced with a lot of abstraction as the only counterweight. However, as stated, I feel this is too important to do incorrectly, even if it means taming intuitions and the easily available answer.

I have also been spending a lot of time recently thinking about the second disjunct of the simulation argument. Unless I am making a fundamental mistake, it seems as though the second disjunct, by bringing human decision making (or our coherent extrapolated volition, etc.) into the process, sort of indirectly entangles the probable metaphysical reality of our world with our own decision making. This is true as a sort of unfolding of evidence if you are a two-boxer, but it is potentially sort-of-causally true if you are a one-boxer. Meaning that if we clear the existential hurdle, this is seemingly the next thing between us and the likely truth of being in a simulation. I actually have a very short write-up on this which I will post in the discussion area when I have sufficient karma (2 points, so probably soon…). I also have much longer notes on a lot of related stuff which I might turn into posts in the future if, after my first short post, this is interesting to anyone.

I am a bit shy online, so I might not post much, but I am trying to get bolder as part of a self-improvement scheme, so we will see how it goes. Either way, I will be reading.

Thank you LW for existing, and providing such rigorous and engaging content, for free, as a community.