Cooperating with aliens and AGIs: An ECL explainer

post by Chi Nguyen, _will_ (Will Aldred), Akash (akash-wasil) · 2024-02-24T22:58:47.345Z · LW · GW · 8 comments


comment by Wei Dai (Wei_Dai) · 2024-02-28T03:16:11.332Z · LW(p) · GW(p)

Can you say something about your motivation to work on this? Why not leave it to future AI and/or humanity to figure out? Or what are the most important questions in this area to answer now?

Replies from: Chi Nguyen
comment by Chi Nguyen · 2024-03-01T00:26:51.148Z · LW(p) · GW(p)

Letting onlookers know that I responded in this comment thread [LW(p) · GW(p)]

comment by PeterMcCluskey · 2024-02-27T04:55:33.195Z · LW(p) · GW(p)

Doesn't this depend on what we value?

In particular, you appear to assume that we care about events outside of our lightcone in roughly the way we care about events in our near future. I'm guessing a good deal of skepticism of ECL is a result of people not caring much about distant events.

Replies from: Chi Nguyen
comment by Chi Nguyen · 2024-02-27T23:27:43.072Z · LW(p) · GW(p)

Yeah, you're right that we assume that you care about what's going on outside the lightcone! If that's not the case (or only a little bit the case), that would limit the action-relevance of ECL.

(That said, there might be some weird simulation shenanigans, or cooperation with a future Earth AI, that would still make you care about ECL to some extent, although my best guess is that they shouldn't move you too much. This is not really my focus though, and I haven't properly thought through ECL for people with indexical values.)

comment by Anthony DiGiovanni (antimonyanthony) · 2024-03-24T18:29:27.717Z · LW(p) · GW(p)

> The model does not capture the fact that the total value you can provide to the commons likely scales with the diversity (and by proxy, fraction) of agents that have different values. In some models, this effect is strong enough to flip whether a larger fraction of agents with your values favors cooperating or defecting.

I'm curious to hear more about this, could you explain what these other models are?

comment by Dan.Oblinger · 2024-03-01T00:59:06.669Z · LW(p) · GW(p)


I find myself arriving at a similar conclusion, but via a different path.

I notice that citizens often vote in the hope that others will also vote and thus, as a group, yield a benefit. They do this even when they know their vote alone will likely make no difference, and their voting does not cause others to vote.

So why do they do this? My thought is that we are creatures that have evolved instincts adaptive for causally interacting, social creatures. In a similar way, I expect other intelligences may have evolved in causally interacting social contexts and thus developed similar instincts. This is why I expect distant aliens may behave in this way.
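(To make the voting intuition concrete, here is a toy expected-value sketch. The `expected_value` function, the numbers, and the shift in expectations about others are all made up for illustration; this is not a model from the post, just one way to formalize "my choice is evidence about what similar agents choose.")

```python
# Illustrative only: a toy expected-value comparison of the voting intuition above.
# All parameters are invented for the sketch.

def expected_value(i_vote: bool, p_others_vote: float,
                   n_others: int = 1000,
                   benefit_per_voter: float = 1.0,
                   cost_of_voting: float = 5.0) -> float:
    """Value to me if the public benefit scales with the number of voters."""
    voters = n_others * p_others_vote + (1 if i_vote else 0)
    return voters * benefit_per_voter - (cost_of_voting if i_vote else 0.0)

# Causal view: my vote changes the outcome by at most one voter,
# so voting barely matters (and its cost can make it net negative).
causal_gain = expected_value(True, 0.5) - expected_value(False, 0.5)

# Evidential view: choosing to vote is evidence that similar agents also vote,
# so I evaluate each option under a different expectation about the others.
evidential_gain = expected_value(True, 0.6) - expected_value(False, 0.5)

print(f"causal gain from voting:     {causal_gain:+.1f}")      # small or negative
print(f"evidential gain from voting: {evidential_gain:+.1f}")  # can be large
```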

This conclusion is similar to yours, but I think the reasoning chain is a bit different:
(1) non-self-benefiting cooperation is evolutionarily preferred for "multi-turn" causally-interacting social agents.
(2) Thus such social agents (even distant alien ones) may evolve such behavior and apply it instinctively.
(3) As a result, we (and they) find ourselves/themselves applying such cooperative behavior in contexts that are known to ourselves/themselves to be provably acausal.

Interestingly, I can imagine such agents using your argument as their post-hoc explanation of their own behavior even if the actual reason is rooted in their evolutionary history.

 

How does this argument fit into or with your framework?

Replies from: Chi Nguyen
comment by Chi Nguyen · 2024-03-01T20:23:58.805Z · LW(p) · GW(p)

Interesting. The main thing that pops out for me is that it feels like your story is descriptive while we try to be normative? I.e. it's not clear to me from what you say whether you would recommend that humans act in this cooperative way towards distant aliens, but you seem to expect that they will do/are doing so. Meanwhile, I would claim that we should act cooperatively in this way but make no claims about whether humans actually do so.

Does that seem right to you or am I misunderstanding your point?

Replies from: Dan.Oblinger
comment by Dan.Oblinger · 2024-03-03T22:49:23.347Z · LW(p) · GW(p)

Chi, I think that is correct.

My argument attempts to provide a descriptive explanation of why all evolved intelligences have a tendency towards ECL, but it provides no basis to argue that such intelligences should have such a tendency in a normative sense.

 

Still, somehow, as an individual (with such tendencies), I find that the idea that other distant intelligences will also have a tendency towards ECL does provide some personal motivation. I don't feel like such a "sucker" if I spend energy on an activity like this, since I know others will too, and it is only "fair" that I contribute my share.

Notice, I still have a suspicion that this way of thinking in myself is a product of my descriptive explanation. But that does not diminish the personal motivation it provides me.

In the end, this is still not really a normative explanation. At best it could be a MOTIVATING explanation for the normative behavior you are hoping for.

~

For me, however, the main reason I like such a descriptive explanation is that it feels like it could one day be proved true.  We could potentially verify that ECL follows from evolution as a statement about the inherent and objective nature of the universe.  Such objective statements are of great interest to me, as they feel like I am understanding a part of reality itself.

Interesting topic!