Applied Coalition Formation

post by Michaël Trazzi (mtrazzi) · 2018-05-09T07:07:42.014Z · score: 3 (1 votes) · LW · GW
(In yesterday's post [LW · GW] I was not clear enough about what I meant by "coalition formation". This is a more detailed post where I explain it in depth and propose ways to apply it in your day-to-day life.)
Here is a comment I received after over-analyzing relationships from a single-agent perspective.
'When writing about dating, it's kinda rude to talk about it as a one player game. "I am getting what I want out of my actions". For their part in the other side of the experience people don't like to be the one being hunted.
Yes rationality is a one-agent game most of the time, but we still need to be careful how we portray the other agents.' (Comment [LW · GW] from Elo [LW · GW])
What about modeling relationships as Multi-Agent Systems, then?
For the last four months, I have been thinking a lot about Multi-Agent settings.
Why? I have been taking a course on Multi-Agent Systems which proved to be both challenging and fascinating. Implementing specific protocols to avoid simple failure modes (such as deadlock) turned out to be intrinsically difficult (more details on the programming assignment here).
More generally, the course was about protocols that allow coordination between agents, of which coalition formation is a particular case.
Certain goals are not achievable alone (e.g. lifting a heavy object). Therefore, agents might agree to form groups of agents, called coalitions, to achieve common goals.
At the end of their cooperation, the resulting rewards will be shared between the agents.
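One principled way to split a coalition's reward is the Shapley value (see the Shapley reference in Further Reading): each agent receives its marginal contribution averaged over all possible orders in which the agents could have joined. Here is a minimal sketch; the value function `v` and the agent names are invented for illustration, not taken from the post.

```python
from itertools import permutations

def v(coalition):
    """Toy characteristic function: two agents lifting together
    achieve what neither can alone."""
    c = frozenset(coalition)
    if c >= {"a", "b"}:
        return 10
    return 0

def shapley(agents):
    """Average marginal contribution of each agent over all join orders."""
    shares = {agent: 0.0 for agent in agents}
    orders = list(permutations(agents))
    for order in orders:
        so_far = set()
        for agent in order:
            shares[agent] += v(so_far | {agent}) - v(so_far)
            so_far.add(agent)
    return {agent: total / len(orders) for agent, total in shares.items()}

print(shapley(["a", "b"]))  # symmetric agents split the reward equally: 5.0 each
```

Because the two agents are interchangeable here, each gets half of the reward of 10; with asymmetric agents the split would reflect who actually adds value.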
For instance, political parties are coalitions of citizens and an early-stage startup can be seen as the coalition of co-founders.
In game theory, a cooperative game with n agents is defined by a value function v that assigns a real number v(C) to every coalition C ⊆ {1, …, n}, with v(∅) = 0. The marginal contribution of an agent i to a coalition C is then v(C ∪ {i}) − v(C).
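This definition translates directly into code. The sketch below assumes an invented "strength plus cooperation bonus" value function, just to make the marginal-contribution computation concrete.

```python
# A toy cooperative game: the value of a coalition is the total
# "strength" of its members, plus a bonus when at least two agents
# cooperate. (Illustrative numbers, not from the post.)
strength = {"alice": 3, "bob": 2, "carol": 4}

def value(coalition):
    """Characteristic function v: maps a set of agents to a real number."""
    coalition = frozenset(coalition)
    if not coalition:
        return 0  # by convention, v(empty set) = 0
    bonus = 2 if len(coalition) >= 2 else 0
    return sum(strength[a] for a in coalition) + bonus

def marginal_contribution(agent, coalition):
    """How much value `agent` adds by joining `coalition`:
    v(C ∪ {i}) − v(C)."""
    return value(set(coalition) | {agent}) - value(coalition)

print(marginal_contribution("carol", {"alice"}))  # v({a,c}) - v({a}) = 9 - 3 = 6
```

Note that carol's marginal contribution (6) exceeds her standalone strength (4): joining an existing coalition unlocks the cooperation bonus.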
At the grocery store. This is what I tried to explain yesterday [LW · GW]. Imagine you are in a supermarket, trying to fill your cart with food. What you already have inside your cart is a coalition C of items, drawn from the n food items in the supermarket (the maximum number of items you could purchase, in theory). Now, I claim that every time you put an additional item in your cart, you are unconsciously maximizing the marginal contribution of that item to your existing coalition.
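The cart-filling claim above amounts to a greedy algorithm: at each step, add the item with the highest marginal contribution to the current cart. A minimal sketch, with item utilities and a cheese-and-wine synergy invented purely for illustration:

```python
# Invented per-item utilities for the illustration.
base_utility = {"bread": 5, "cheese": 4, "wine": 3, "crackers": 2}

def cart_value(cart):
    """Value of a coalition of items; cheese and wine pair well together."""
    v = sum(base_utility[i] for i in cart)
    if "cheese" in cart and "wine" in cart:
        v += 3  # complementary items create synergy
    return v

def fill_cart(slots):
    """Greedily add the item with the largest marginal contribution."""
    cart = set()
    remaining = set(base_utility)
    for _ in range(slots):
        best = max(remaining,
                   key=lambda i: cart_value(cart | {i}) - cart_value(cart))
        cart.add(best)
        remaining.remove(best)
    return cart

print(fill_cart(3))  # wine beats crackers once cheese is in the cart
```

With three slots the greedy shopper takes bread, then cheese, then wine: once cheese is in the cart, wine's marginal contribution (3 + 3 synergy) overtakes everything else remaining.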
Applying for a job. If you're an altruist, ask yourself whether your marginal contribution to the company is high enough. If you're an egoist, try to maximize your share of the rewards while minimizing your marginal contribution (by being lazy, arriving late, etc.).
Relationships. Does your group of friends really like your childhood friend? A friend of mine had a social anxiety disorder. He could hardly say anything around any of my other friends, so his marginal contribution to any social group was close to zero. Unsurprisingly, people did not like him: they would have to share the rewards of a lovely dinner with someone who did not participate in any way.
In short, multi-agent systems/coalitions are a useful framework to talk about adding one agent to an existing group. In a cooperative game, a value function assigns a numerical value to a coalition. Using this function, it is possible to compute the marginal contribution of a given element, and reason about day-to-day decisions.
L.S. Shapley, A value for n-person games, Contributions to the Theory of Games, Vol. II (H.W. Kuhn and A.W. Tucker, eds.), Annals of Mathematics Studies, vol. 28, Princeton University Press, Princeton, NJ, 1953, pp. 307–317.
G. Weiss (ed.), Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, MIT Press, 1999.
This is the 11th post of a series of daily LessWrong posts I started on April 28th.