Comments sorted by top scores.

comment by gwern · 2020-08-09T23:51:24.819Z · LW(p) · GW(p)

I don't think I've seen any existing terms covering this. How about "acausal dominance" or "acausal supremacy"?

Replies from: benjy-forstadt-1, gworley
comment by Benjy Forstadt (benjy-forstadt-1) · 2020-08-10T05:50:55.703Z · LW(p) · GW(p)

I think “acausal-focused” works well as an adjective; compare “suffering-focused”. As a noun, perhaps “acausal-focused altruism”?

comment by Gordon Seidoh Worley (gworley) · 2020-08-10T18:11:28.220Z · LW(p) · GW(p)

I'm not sure if this is quite it, but it does get at the "acausal trade" framing often taken when discussing these issues.

comment by Dagon · 2020-08-10T14:53:33.675Z · LW(p) · GW(p)

Does this view lead to any different behaviors or moral beliefs than regular longtermism? Acausal motivations (except in contrived situations, where the agent has forbidden knowledge about the unreachable portions) seem to be simply amplifications of what's right ANYWAY, taking a thousands- to millions-of-years view in one's own light-cone.

comment by shminux · 2020-08-09T19:03:30.377Z · LW(p) · GW(p)

I still don't understand what you mean by "causally-disconnected" here. In physics, it's anything in your future light cone (under some mild technical assumptions). In that sense longtermism (regular or strong, or very strong, or extra-super-duper-strong) is definitely interested in the causally connected (to you now) parts of the Universe. A causally disconnected part would be caring now about something already beyond the cosmological horizon, which is different from something that will eventually go beyond the horizon. You can also be interested in modeling those causally disconnected parts, like what happens to someone falling into a black hole, because falling into a black hole might happen in the future, and so you in effect are interested in the causally connected parts.

Replies from: Natália Mendonça
comment by Natália (Natália Mendonça) · 2020-08-09T19:27:26.249Z · LW(p) · GW(p)

> I still don't understand what you mean by "causally-disconnected" here. In physics, it's anything in your future light cone (under some mild technical assumptions).

I think you mean to say “causally-connected,” not “causally-disconnected”?

I’m referring to regions outside of our future light cone.

> A causally disconnected part would be caring now about something already beyond the cosmological horizon

Yes, that is what I’m referring to.

Replies from: gbear605, shminux
comment by gbear605 · 2020-08-09T19:44:12.125Z · LW(p) · GW(p)

From my understanding of the definition of causality, any action made at this moment cannot affect anywhere that is causally disconnected from where and when we are. After all, if it could, then that region definitionally wouldn't be causally disconnected from us.

Are you referring to multiple future regions that are causally connected to the Earth at the current moment but are causally disconnected from each other?

Replies from: Natália Mendonça
comment by Natália (Natália Mendonça) · 2020-08-09T20:13:29.342Z · LW(p) · GW(p)

Things outside of your future light cone (that is, things you cannot physically affect) can “subjunctively depend” [LW · GW] on your decisions. If beings outside of your future light cone simulate your decision-making process (and base their own decisions on yours), you can affect things that happen there. It can be helpful to take into account those effects when you’re determining your decision-making process, and to act as if you were all of your copies at once.

Those were some of my takeaways from reading about functional decision theory (described in the post I linked above) and updateless decision theory.
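A minimal toy sketch of what such subjunctive dependence looks like, with purely illustrative names and payoffs (nothing here is taken from the linked posts):

```python
# A "predictor" outside your future light cone runs a faithful copy of your
# decision procedure and acts on the copy's output. Which policy you run then
# correlates with what happens in a region you cannot physically affect.

def one_boxer() -> str:
    return "one-box"

def two_boxer() -> str:
    return "two-box"

def distant_predictor(policy) -> int:
    """Simulates the agent's policy in its own region and pays out accordingly."""
    simulated_choice = policy()  # the predictor's copy of "you"
    return 1_000_000 if simulated_choice == "one-box" else 1_000

for policy in (one_boxer, two_boxer):
    print(policy.__name__, "-> payout in the disconnected region:", distant_predictor(policy))
```

The point of the sketch is only that the payout is determined by which decision procedure you are, not by any signal you send.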

Replies from: Slider
comment by Slider · 2020-08-11T18:07:53.797Z · LW(p) · GW(p)

A far-off decision maker can't have direct evidence of your existence, since then you would be a cause of their evidence (and so not causally disconnected from them).

A far-off observer can see a process that it can predict will result in you, and the things it does may be co-causes, together with your actions, of the shared future between you. I still think that the verb "affect" is the wrong one here.

Say there is a pregnant mother, and her friend moves to another country and lives there in isolation for 18 years, but, knowing there is likely to be a person, sends a birthday gift with a card reading "happy 18th birthday". Nothing that you do in your childhood or adulthood can affect what the card says, if the far-off country is sufficiently isolated. The event of you opening the box will be a product both of how you lived your childhood and of what the sender chose to put in the box. Even if the gift sender wanted to reward better people with better gifts, the choice would need to be based on what kind of baby you were and not what kind of adult you are.

And maybe crucially, adult you will have a past that is not the past of baby you. The gift giver has no hope of taking a stance toward this data.

comment by shminux · 2020-08-10T00:16:19.222Z · LW(p) · GW(p)

Ah, okay. I don't see any reason to be concerned about something that we have no effect on. Will try to explain below.

Regarding "subjunctive dependency" from the post linked in your other reply:

I agree with a version of "They are questions about what type of source code you should be running", formulated as "what type of an algorithm results in max EV, as evaluated by the same algorithm?" This removes the contentious "should" part, which implies that you have the option of running some other algorithm (you don't, you are your own algorithm).

The definition of "subjunctive dependency" in the post is something like "the predictor runs a simplified model of your actual algorithm that outputs the same result as your source code would, with high fidelity" and therefore the predictor's decisions "depend" on your algorithm, i.e. you can be modeled as affecting the predictor's actions "retroactively".

Note that you, an algorithm, have no control over what that algorithm is; you just are it, even if your algorithm comes equipped with routines that "think" about themselves. If you also postulate that the predictor is an algorithm, then the question of decision theory in the presence of predictors becomes something like "what type of an agent algorithm results in max EV when immersed in a given predictor algorithm?" In that approach, subjunctive dependency is not a very useful abstraction, since the predictor algorithm is assumed to be fixed, in which case there is no reason to consider causally disconnected parts of the agent's universe.
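A minimal sketch of that framing, again with illustrative agent algorithms and payoffs rather than anything from the linked posts:

```python
# Hold the predictor algorithm fixed, then ask which agent algorithm, when
# immersed in that fixed environment, ends up with the highest value.

AGENT_ALGORITHMS = {
    "always one-box": lambda: "one-box",
    "always two-box": lambda: "two-box",
}

def fixed_predictor_environment(agent) -> int:
    """Fixed predictor: fills an opaque box based on a simulation of the agent."""
    predicted = agent()        # the predictor's high-fidelity model of the agent
    opaque_box = 1_000_000 if predicted == "one-box" else 0
    choice = agent()           # the agent's actual decision
    return opaque_box if choice == "one-box" else opaque_box + 1_000

for name, agent in AGENT_ALGORITHMS.items():
    print(name, "-> value:", fixed_predictor_environment(agent))
```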

Clearly your model is different from the above, since you seriously think about untestables and unaffectables.

comment by bsaad · 2020-08-11T17:23:38.539Z · LW(p) · GW(p)

quasi-causalism (cf. Leslie's use of 'quasi-causation' in "Ensuring two bird deaths with one throw")