The Alignment-Competence Trade-Off, Part 1: Coalition Size and Signaling Costs

post by Gentzel · 2020-01-15T23:10:01.055Z · LW · GW · 4 comments

This is part 1 of a series of posts on principal-agent problems that I initially planned to write last summer as one massive post. As that task quickly became overwhelming, I decided to break it into smaller posts that ensure I cover each of the cases and mechanisms I intended to.

Overall, I think the trade-off between the alignment of agents and the competence of agents can explain a lot of problems to which people often think there are simple answers. The less capable an agent is (whether the agent is a person, a bureaucracy, or an algorithm), the easier it is for a principal to assess the agent and ensure it is working toward the principal’s goals. As agents become more competent, they become more capable both of actually accomplishing the principal’s goals and of merely appearing to accomplish them while pursuing their own. In policy debates, I often encounter one-sided arguments that neglect this trade-off, and in general I think efforts to improve policies or the bureaucratic structures of companies, non-profits, and governments should be informed by it.


Part 1:

Virtue signaling and moralistic anger have both been useful forces for holding people accountable, and powerful mechanisms of cultural evolution: they spread some norms more successfully than others, with the result that many societies hold similar norms.

However, the larger a group becomes, the less its members know on average about any individual member’s behavior or its consequences, making it harder to evaluate complex actions. This in turn gives an advantage to clearer forms of signaling, which are more inefficient and costly than those that would be sustainable in smaller groups.
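As a rough illustration, here is a minimal toy model of that vetting mechanism (my own sketch, not from the post, with made-up numbers): suppose each member knows only k others well. Then the chance that a randomly chosen observer can directly vet a given member’s behavior falls roughly like k/(n−1) as the group size n grows, pushing the group toward signals anyone can read at a glance.

```python
# Toy model (my own sketch, not from the post). Assumption: each member
# knows only k = 50 others well, so the chance that a randomly chosen
# observer can directly vet a given member's behavior falls like
# k / (n - 1) as group size n grows.

def vetting_probability(n: int, k: int = 50) -> float:
    """Chance a random observer personally knows a given member."""
    return min(1.0, k / (n - 1))

for n in (20, 100, 1_000, 100_000):
    print(f"group size {n:>7,}: vetting probability ~{vetting_probability(n):.4f}")
```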

Examples:

In summary, many actions are more directly efficient and selfishly beneficial for those who take them, but because they are not credible signals of good intent (they are the same excuses the corrupt would use), those options are not sustainable in larger societies. Small groups where people know each other well, on the other hand, can sustain weirder norms without corruption because members are better able to vet one another. This may also explain why smaller groups in history often had sustainable norms of exploiting defectors or outsiders, norms which wouldn’t survive in larger societies where you can’t tell whether someone is robbing a thief or an innocent person. Curbing exploitation between small competing groups of insiders is likewise probably a good thing for scaling up societies.

In general, these signaling costs arise in scenarios where people’s interests may not align, and the costs are paid to demonstrate alignment. Without efficient mechanisms to assess and vet each other, groups lose trust as they scale, and costlier signaling becomes required to sustain cooperation.
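To make that concrete, here is a hedged back-of-the-envelope version (again my own sketch, with hypothetical numbers): if a defector who fakes a signal gains a benefit B from being trusted unless direct vetting catches them (probability p), then faking is unprofitable only when the signal cost s satisfies s ≥ (1 − p)·B.

```python
# Back-of-the-envelope model (my own, with made-up numbers):
# a defector who fakes the signal gains benefit B from being trusted,
# unless direct vetting catches them with probability p. Faking is
# unprofitable only when the signal cost s satisfies
#     (1 - p) * B - s <= 0,  i.e.  s >= (1 - p) * B.

def min_credible_signal_cost(benefit: float, vetting_prob: float) -> float:
    """Smallest signal cost that makes faking unprofitable for a defector."""
    return (1.0 - vetting_prob) * benefit

B = 100.0  # hypothetical benefit of being trusted
for p in (0.9, 0.5, 0.1, 0.01):
    print(f"vetting probability {p:.2f}: "
          f"credible signal must cost >= {min_credible_signal_cost(B, p):.1f}")
```

On this model, as vetting probability falls with group size, the minimum credible signal cost climbs toward the full benefit of being trusted: the signaling cost is just the price of lost mutual knowledge.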

4 comments


comment by Wei Dai (Wei_Dai) · 2020-01-16T07:58:52.193Z · LW(p) · GW(p)

Wish I had this post to reference when I wrote "AGI will drastically increase economies of scale" [LW · GW].

Replies from: Gentzel
comment by Gentzel · 2020-01-20T20:33:25.162Z · LW(p) · GW(p)

Yeah, when you can copy the same value function across all the agents in a bureaucracy, you don't have to pay signaling costs to scale up. Alignment problems become more about access to information than about having misaligned goals.

comment by riceissa · 2020-01-18T09:08:44.907Z · LW(p) · GW(p)

I find it interesting to compare this post to Robin Hanson's "Who Likes Simple Rules?". In your post, when people's interests don't align, they have to switch to a simple/clear mechanism to demonstrate alignment. In Robin Hanson's post, people's interests "secretly align", and it is the simple/clear mechanism that isn't aligned, so people switch to subtle/complicated mechanisms to preserve alignment. Overall I feel pretty confused about when I should expect norms/rules to remain complicated or become simpler as groups scale.

I am a little confused about the large group sizes for some of your examples. For example, the vegan one doesn't seem to depend on a large group size: even among one's close friends or family, one might not want to bother explaining all the edge cases for when one will eat meat.

Replies from: Gentzel
comment by Gentzel · 2020-01-20T20:47:41.080Z · LW(p) · GW(p)

I think those two cases are pretty compatible. The simple rules seem to get formed due to the pressures created by large groups, but there are still smaller sub-groups within large groups that can benefit from getting around the inefficiency caused by the rules, so they coordinate to bend the rules.

Hanson also has an interesting post on group size and conformity: http://www.overcomingbias.com/2010/10/towns-norm-best.html

In the vegan case, it is easier to explain things to a small number of people than to a large number, even though it may still not be worth your time with small numbers of people. It's easier to hash out an argument with one family member than to do something your entire family will impulsively think is hypocritical during Thanksgiving.