Report on modeling evidential cooperation in large worlds
post by Johannes Treutlein (Johannes_Treutlein) · 2023-07-12T16:37:52.443Z · LW · GW · 3 comments
This is a link post for https://arxiv.org/pdf/2307.04879.pdf
I have published a report on modeling evidential cooperation in large worlds. I originally started working on it during a CEA Summer Research Fellowship back in 2018 and have now finally found the time to finish it. The report itself is 75 pages long, but the introduction includes a short 4.5-page summary of the main takeaways. Since the bulk of the work was done in 2018, the report does not cover more recent work such as the ROSE Value [LW · GW].
Abstract
Evidential cooperation in large worlds (ECL) refers to the idea that humans and other agents can benefit by cooperating with similar agents that hold different values and are located in causally disconnected parts of a large universe. Cooperating provides agents with evidence that other similar agents are likely to cooperate too, resulting in gains from trade for all. This could be a crucial consideration for altruists.
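To make the mechanism concrete, here is a minimal toy calculation (my own illustration, not from the report): in a one-shot prisoner's dilemma, an evidential reasoner treats their own choice as evidence about what a similar agent chooses, so cooperating becomes attractive once the assumed correlation is high enough. The payoff numbers and the `correlation` parameter are made up for illustration.

```python
# Toy illustration (not from the report): expected utility of cooperating
# vs. defecting in a one-shot prisoner's dilemma under evidential reasoning.
# "correlation" is the assumed probability that a similar agent mirrors
# your choice; if they don't mirror it, they act 50/50.

PAYOFFS = {  # (my_action, their_action) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 4, ("D", "D"): 1,
}

def evidential_eu(my_action: str, correlation: float) -> float:
    """Expected utility if choosing my_action is evidence (with the given
    strength) that the other, similar agent chooses the same action."""
    p_same = correlation + (1 - correlation) * 0.5  # residual 50/50 if uncorrelated
    p_other_c = p_same if my_action == "C" else 1 - p_same
    return (p_other_c * PAYOFFS[(my_action, "C")]
            + (1 - p_other_c) * PAYOFFS[(my_action, "D")])

for corr in (0.0, 0.5, 0.9):
    print(f"correlation={corr}: "
          f"EU(C)={evidential_eu('C', corr):.2f}, "
          f"EU(D)={evidential_eu('D', corr):.2f}")
```

With zero correlation, defecting dominates (EU 2.5 vs. 1.5); at correlation 0.9, cooperating wins (EU 2.85 vs. 1.15). The flip as correlation falls also previews the later point that gains from trade shrink as the belief in a similar agent weakens.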
I develop a game-theoretic model of ECL as an incomplete information bargaining problem. The model incorporates uncertainty about others' value systems and empirical situations, and addresses the problem of selecting a compromise outcome. Using the model, I investigate issues with ECL and outline open technical and philosophical questions.
I show that all cooperators must maximize the same weighted sum of utility functions to reach a Pareto optimal outcome. However, I argue against selecting a compromise outcome implicitly by normalizing utility functions. I review bargaining theory and argue that the Nash bargaining solution could be a relevant Schelling point. I introduce dependency equilibria (Spohn 2007), an equilibrium concept suitable for ECL, and generalize a folk theorem showing that the Nash bargaining solution is a dependency equilibrium. I discuss gains from trade given uncertain beliefs about other agents and analyze, in several toy examples, how these gains decrease as the belief in another agent decreases.
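As a concrete illustration of the bargaining step (a sketch under my own toy assumptions, not the report's formal model): the Nash bargaining solution selects, among feasible joint utilities, the point that maximizes the product of each agent's gain over their disagreement utility. The feasible outcomes and disagreement point below are hypothetical.

```python
# A minimal sketch (toy numbers, not from the report) of picking a
# compromise outcome with the Nash bargaining solution: among feasible
# joint utilities, maximize the product of gains over the disagreement point.

import itertools

# Hypothetical feasible joint outcomes (utility for agent 1, agent 2),
# e.g. from a few joint policies the agents could adopt.
pure_outcomes = [(4.0, 1.0), (1.0, 4.0), (3.0, 3.0), (0.0, 0.0)]
disagreement = (1.0, 1.0)  # utilities if no compromise is reached

def mixtures(points, steps=50):
    """Convex combinations of pairs of outcomes (a coarse approximation
    of the full feasible set)."""
    for (a, b), t in itertools.product(itertools.combinations(points, 2),
                                       [i / steps for i in range(steps + 1)]):
        yield (t * a[0] + (1 - t) * b[0], t * a[1] + (1 - t) * b[1])

def nash_product(u, d):
    g1, g2 = u[0] - d[0], u[1] - d[1]
    return g1 * g2 if g1 >= 0 and g2 >= 0 else float("-inf")

best = max(mixtures(pure_outcomes), key=lambda u: nash_product(u, disagreement))
print("Nash bargaining solution (approx.):", best)  # -> (3.0, 3.0)
```

In this toy setup the solution lands at (3, 3), a point on the Pareto frontier, consistent with the claim that a Pareto optimal compromise maximizes some fixed weighted sum of the agents' utilities.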
Finally, I discuss open issues in my model. First, the Nash bargaining solution is sometimes not coalitionally stable, meaning that a subset of cooperators can unilaterally improve payoffs by deviating from the compromise. I investigate conditions under which stable payoff vectors exist. Second, I discuss how to model agents' default actions without ECL.
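To see why coalitional stability can fail, here is a toy check (illustrative numbers only, not the report's model) of whether a payoff vector is blocked by some coalition that could secure more for its members on its own. In this hypothetical three-player game, every split of the grand coalition's value is blocked by some pair, i.e., no stable payoff vector exists.

```python
# A toy coalitional-stability check (illustrative only, not the report's
# model): a payoff vector is stable if no coalition can guarantee its
# members strictly more by deviating.

from itertools import chain, combinations

players = (1, 2, 3)
# Hypothetical characteristic function: the value each coalition can
# guarantee itself on its own.
v = {(1,): 1, (2,): 1, (3,): 1,
     (1, 2): 4, (1, 3): 4, (2, 3): 4, (1, 2, 3): 5}

def blocking_coalitions(payoffs):
    """Coalitions that could do better by deviating from `payoffs`."""
    subsets = chain.from_iterable(
        combinations(players, r) for r in range(1, len(players) + 1))
    return [c for c in subsets
            if v[c] > sum(payoffs[p] for p in c) + 1e-9]

# Splitting the grand coalition's value equally is blocked by every pair,
# since each pair can secure 4 > 2 * 5/3 on its own:
print(blocking_coalitions({1: 5/3, 2: 5/3, 3: 5/3}))  # -> [(1, 2), (1, 3), (2, 3)]
```

Here the pair constraints are jointly unsatisfiable (summing them requires total payoff at least 6, but the grand coalition only produces 5), so no division of the surplus is coalitionally stable.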
3 comments
comment by Dawn Drescher (Telofy) · 2023-07-13T19:47:22.856Z · LW(p) · GW(p)
Amazing work! So glad it’s finally out in the open!
reply by Johannes Treutlein (Johannes_Treutlein) · 2023-07-14T06:26:56.241Z · LW(p) · GW(p)
Thank you! :)
comment by Joshua Clancy (joshua-clancy) · 2023-07-12T21:22:19.156Z · LW(p) · GW(p)
I always liked this area of thought. I often think about how some of the ecosystems in which humans evolved created good games to promote cooperation (perhaps not to as large an extent as would be preferable). For example, if overhunting and overforaging kill the tribe, an interesting game-theoretic situation is created: a game where it is in everyone's interest NOT to be greedy. If you make a kill, you should share. If you gather more than you need, you should share. If you are hunting or gathering too much, others should stop you. I do wonder whether training AIs in such a game environment would predispose them towards cooperation.