What would be a good name for the view that the value of our decisions is primarily determined by how they affect causally-disconnected regions of the multiverse?
post by Natália Mendonça
This is a question post.
I’m looking for a word similar to “longtermism” to refer to the view that the most important determinant of the value of our decisions is how they affect regions of the multiverse that are causally-disconnected from ours (henceforth “causally-disconnected regions”), since those regions are very big and can contain far more value-bearing locations than causally-connected regions.
(Affecting causally-disconnected regions is possible if there is some subjunctive dependence [LW · GW] between our decisions and outcomes in those regions; for example, if there are copies of us simulated in them.)
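The idea above can be made concrete with a toy sketch (hypothetical, not from the post): two causally-disconnected regions each instantiate an exact copy of the same decision algorithm. Since the copies are the same function, whatever this copy outputs, the other copy outputs too, so the outcome in the other region subjunctively depends on the one shared decision procedure.

```python
# Toy illustration: two causally-disconnected "regions" each run an exact copy
# of the same deterministic decision algorithm. No signal passes between them,
# yet their outputs are perfectly correlated.

def decision_algorithm(observation: str) -> str:
    # A deterministic policy shared by both copies (illustrative logic).
    return "cooperate" if "copy" in observation else "defect"

# Region A and region B never exchange signals, but both instantiate the policy.
choice_here = decision_algorithm("I may be a copy")
choice_elsewhere = decision_algorithm("I may be a copy")  # same algorithm, same input

# Correlation without causal contact: deciding here "settles" the choice there.
assert choice_here == choice_elsewhere
```

Nothing travels between the regions; the "influence" is just logical correlation between identical computations.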
This is related to views posed by Wei Dai in this post [LW · GW] (though note that he doesn’t use any terminology like “causally-disconnected region”; my usage of that term is influenced by Mati Roy [EA(p) · GW(p)]). I’m interested in this because it seems to me that many of the intuitions that would lead someone to support longtermism would also lead them to support this view, as Wei Dai indicated in the last paragraph of this comment [LW(p) · GW(p)].
80,000 Hours used to call a cluster of ideas related to caring about the long-term future the “long-term value thesis,” so I might start calling this the “causally-disconnected value thesis”; however, that sounds a bit too long and cumbersome, which is why I’m asking this question.
 If you don’t think there is a multiverse, just interpret my usage of “multiverse” as referring to the same thing as “universe.”
 Note that this definition is more analogous to how William MacAskill defines [EA · GW] strong longtermism than to how he defines longtermism.
answer by steven0461
"Acausalism" works, but might be confused with the idea that acausal dependence matters at all, or with other philosophical doctrines that deny causality in some sense.
I'm not sure whether being located in a place is a different thing from that place subjunctively depending on your behavior.
Some more ideas: "outofreachism" (closest to "longtermism"), "extrauniversalism", "subjunctive dependentism" (hardest to strawman), "elsewherism", "spooky axiology at a distance"
comment by swarriner ·
2020-08-11T16:44:02.410Z · LW(p) · GW(p)
Elsewherism strikes me as the most usable of these options, for aesthetic reasons. Spooky Axiology at a Distance is the name of my new prog rock band.
answer by ike
Er, isn't "affect causally disconnected" an oxymoron?
comment by Natália Mendonça ·
2020-08-09T18:39:34.596Z · LW(p) · GW(p)
Thanks for your comment :) The definition of causality I meant to use in the question is physical causality, which doesn’t refer to things like affecting what happens in causally-disconnected regions of the multiverse that simulate your decision-making process. I’m going to edit the question to make that clearer.
Comments sorted by top scores.
comment by shminux ·
2020-08-09T19:03:30.377Z · LW(p) · GW(p)
I still don't understand what you mean by "causally-disconnected" here. In physics, it's anything in your future light cone (under some mild technical assumptions). In that sense longtermism (regular or strong, or very strong, or extra-super-duper-strong) is definitely interested in the causally connected (to you now) parts of the Universe. A causally disconnected part would be caring now about something already beyond the cosmological horizon, which is different from something that will eventually go beyond the horizon. You can also be interested in modeling those causally disconnected parts, like what happens to someone falling into a black hole, because falling into a black hole might happen in the future, and so you in effect are interested in the causally connected parts.
comment by Natália Mendonça ·
2020-08-09T19:27:26.249Z · LW(p) · GW(p)
I still don't understand what you mean by "causally-disconnected" here. In physics, it's anything in your future light cone (under some mild technical assumptions).
I think you mean to say “causally-connected,” not “causally-disconnected”?
I’m referring to regions outside of our future light cone.
A causally disconnected part would be caring now about something already beyond the cosmological horizon
Yes, that is what I’m referring to.
comment by shminux ·
2020-08-10T00:16:19.222Z · LW(p) · GW(p)
Ah, okay. I don't see any reason to be concerned about something that we have no effect on. I'll try to explain below.
Regarding "subjunctive dependency" from the post linked in your other reply:
I agree with a version of "They are questions about what type of source code you should be running," formulated as "what type of an algorithm results in max EV, as evaluated by the same algorithm?" This removes the contentious "should" part, which implies that you have an option of running some other algorithm (you don't; you are your own algorithm).
The definition of "subjunctive dependency" in the post is something like "the predictor runs a simplified model of your actual algorithm that outputs the same result as your source code would, with high fidelity" and therefore the predictor's decisions "depend" on your algorithm, i.e. you can be modeled as affecting the predictor's actions "retroactively".
Note that you, an algorithm, have no control over what that algorithm is; you just are it, even if your algorithm comes equipped with routines that "think" about themselves. If you also postulate that the predictor is an algorithm, then the question of decision theory in the presence of predictors becomes something like "what type of an agent algorithm results in max EV when immersed in a given predictor algorithm?" In that approach the subjunctive dependency is not a very useful abstraction, since the predictor algorithm is assumed to be fixed. In which case there is no reason to consider causally disconnected parts of the agent's universe.
Clearly your model is different from the above, since you seriously think about untestables and unaffectables.
comment by Dagon ·
2020-08-10T14:53:33.675Z · LW(p) · GW(p)
Does this view lead to any different behaviors or moral beliefs than regular longtermism? Acausal motivations (except in contrived situations, where the agent has forbidden knowledge about the unreachable portions) seem to be simply amplifications of what's right ANYWAY, taking a thousands- to millions-of-years view in one's own light-cone.