Multiverse-wide Cooperation via Correlated Decision Making
post by Kaj_Sotala · 2017-08-20T12:01:36.212Z · LW · GW · Legacy · 2 comments
This is a link post for https://foundational-research.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf
Comments sorted by top scores.
comment by Manfred · 2017-08-20T14:25:53.024Z · LW(p) · GW(p)
Why do we care about acausal trading with aliens to promote their acting with "moral reflection, moral pluralism," etc.?
Replies from: Caspar42
↑ comment by Caspar Oesterheld (Caspar42) · 2017-08-31T16:15:18.815Z · LW(p) · GW(p)
Thanks for the comment!
W.r.t. moral reflection: Probably many agents put little intrinsic value on whether society engages in a lot of moral reflection. However, I would guess that, as a whole, the set of agents with a decision mechanism similar to mine does care about this significantly and positively. (Empirically, disvaluing moral reflection seems to be rare.) Hence, if the basic argument of the paper goes through, I should give some weight to it.
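To make the aggregation step concrete, here is a minimal sketch; all the individual weights are made up for illustration, the point is only that the average over the correlated set can be positive even when my own weight is near zero:

```python
# Minimal illustrative sketch of the aggregation step (all weights
# hypothetical). Each number is the intrinsic weight one correlated
# agent places on "society engages in a lot of moral reflection";
# most are positive, and disvaluing it is rare.
weights = [0.0, 0.2, 0.3, 0.1, 0.4, 0.2, -0.05, 0.25]

# The compromise values of the whole set assign it the average weight.
aggregated_weight = sum(weights) / len(weights)
print(f"aggregated weight on moral reflection: {aggregated_weight:.3f}")
# -> 0.175: positive on net, so even an agent who personally assigns it
# little weight should, via the compromise, give it some weight.
```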
W.r.t. moral pluralism: Probably even fewer agents care about this intrinsically; I certainly don't. The idea is that moral pluralism may avoid conflict or create gains from "trade". For example, say the aggregated values of agents with my decision algorithm contain two values, A and B. (As I argue in the paper, I should maximize these aggregated values in order to maximize my own values throughout the multiverse.) Now, I might be in some particular environment with agents who themselves care about A and/or B. Suppose I can choose between two distributions of caring about A and B: either each agent cares about both A and B, or some care only about A and the others only about B. The former will tend to be better if I (or rather the set of agents with my decision algorithm) care about both A and B, because it avoids conflicts, makes it easier to exploit comparative advantages, etc., as the sketch below illustrates.
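Here is a toy numerical comparison of the two distributions; the production numbers and the conflict-cost model are invented for illustration, not taken from the paper:

```python
# Toy comparison of the two distributions described above. All payoff
# numbers and the conflict-cost model are made up for illustration.

def total_value(agents, conflict_cost=0.5):
    """Sum each agent's production for A and B, then subtract a fixed
    conflict cost for every pair of agents with disjoint value sets."""
    production = sum(a + b for a, b in agents)
    conflict = sum(
        conflict_cost
        for i, (a1, b1) in enumerate(agents)
        for (a2, b2) in agents[i + 1:]
        # Disjoint value sets: the pair shares neither A nor B.
        if (a1 == 0 or a2 == 0) and (b1 == 0 or b2 == 0)
    )
    return production - conflict

# Distribution 1: each agent cares about both A and B.
pluralist = [(0.5, 0.5)] * 4
# Distribution 2: some agents care only about A, the others only about B.
specialized = [(1, 0), (1, 0), (0, 1), (0, 1)]

print(total_value(pluralist))    # 4.0 (no disjoint pairs, no conflict)
print(total_value(specialized))  # 4.0 - 4 * 0.5 = 2.0
```

Under these assumptions the pluralist distribution wins purely by avoiding conflict between agents with nothing in common; gains from comparative advantage would widen the gap further.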
Note that I think neither promoting moral reflection nor promoting moral pluralism is a strong candidate for a top intervention. Multiverse-wide superrationality just increases their value relative to what, say, a utilitarian would think of these interventions. I think it's a lot more important to ensure that AI uses the right decision theory. (Of course, this is important anyway, but I think multiverse-wide superrationality drastically increases its value.)