[SEQ RERUN] Selecting Rationalist Groups

post by MinibearRex · 2013-04-09T05:43:49.871Z · 1 comment


Today's post, Selecting Rationalist Groups, was originally published on 02 April 2009. A summary (taken from the LW wiki):


Trying to breed, e.g., egg-laying chickens by individual selection can produce odd side effects at the farm level, since a more dominant hen can produce more egg mass at the expense of other hens. Group selection is nearly impossible in Nature, but easy to impose in the laboratory, and group-selected hens showed substantial increases in efficiency. Though most of my essays are about individual rationality - and indeed, Traditional Rationality also praises the lone heretic more than evil Authority - the real effectiveness of "rationalists" may end up determined by their performance in groups.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Purchase Fuzzies and Utilons Separately, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

1 comment


comment by Viliam_Bur · 2013-04-09T08:44:58.179Z

Aumann's Agreement Theorem assumes that the participants are rational, honest, and mutually respectful - that is, each believes the others to be rational and honest, and believes that this belief is itself shared. How easy is that to find in real life?
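[For readers who want the formal statement behind these informal glosses, here is a minimal sketch of the theorem as usually stated. The notation for the information partitions is an editorial addition, not something from the comment:]

```latex
% Aumann (1976), "Agreeing to Disagree": two agents share a common
% prior P and have information partitions \mathcal{P}_1, \mathcal{P}_2.
% At state \omega, each forms a posterior for an event A:
\[
  q_i = P(A \mid \mathcal{P}_i)(\omega), \qquad i = 1, 2.
\]
% The theorem: if the pair of posteriors is common knowledge at
% \omega, then they must coincide:
\[
  (q_1, q_2) \text{ common knowledge at } \omega \;\Longrightarrow\; q_1 = q_2.
\]
```

[The "respect" condition in the comment corresponds to the common-knowledge requirement: it is not enough that both posteriors are correct; each agent must know the other's posterior, know that the other knows theirs, and so on.]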

Let's start with rationality. The typical "sanity waterline" complaint would remove most of the population. (I will abstain from estimating honesty, because I don't have high confidence in my opinions about it.) The "sanity waterline" also means that, from the other person's perspective, there is a low probability that you are rational, so respect is rare too. It therefore seems to me that Aumann's Agreement Theorem is irrelevant in real life... until you gain enough rationality and social skills to find and recognize other rational people, and to gain their trust.

In other words, we can't use it at the beginning; and later we already have a habit of not using it. At some point we should switch to using it - to trusting other rational and honest people's judgement. The problem is: when exactly? At which point should I believe myself rational enough, and a good enough judge of people, to expect positive gains on average from applying Aumann's Agreement Theorem in my life?

Also, there is a problem with the openness of a rationalist group to the outside world. By being open, the group has a chance to grow, and to learn about the biases of its existing members. On the other hand, an open group probably cannot trust its members' rationality as much, simply because it has less data about the newcomers. This could be solved by a hierarchy in which senior members are trusted more, simply because they have been tested for longer. Also, how exactly should the group deal with discovering that one of its members is not rational or honest enough? Should there be a ritual by which a person gains or loses "trusted member" status? Or should each group member track their trust in the other members individually? Which of these two options scales better with the size of the group?
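[One concrete way to read the "track trust individually" option is a Beta-Bernoulli model, where each member keeps a per-person tally of judgments that turned out reliable or unreliable. This is a hypothetical sketch, not anything the comment specifies; the class name `TrustLedger` and the scoring scheme are editorial inventions:]

```python
from dataclasses import dataclass, field

@dataclass
class TrustLedger:
    """One member's private trust tracking, via a Beta-Bernoulli model.

    Each observed judgment by another member is scored as reliable (1)
    or unreliable (0); trust is the posterior mean of a Beta(a, b) prior.
    """
    prior_a: float = 1.0  # pseudo-count of reliable observations
    prior_b: float = 1.0  # pseudo-count of unreliable observations
    counts: dict = field(default_factory=dict)  # name -> [reliable, unreliable]

    def observe(self, member: str, reliable: bool) -> None:
        """Record one observed judgment by `member`."""
        rel, unrel = self.counts.get(member, [0, 0])
        self.counts[member] = [rel + reliable, unrel + (not reliable)]

    def trust(self, member: str) -> float:
        """Posterior probability that `member`'s next judgment is reliable."""
        rel, unrel = self.counts.get(member, [0, 0])
        return (self.prior_a + rel) / (self.prior_a + self.prior_b + rel + unrel)

ledger = TrustLedger()
ledger.observe("senior_member", reliable=True)
ledger.observe("senior_member", reliable=True)
ledger.observe("newcomer", reliable=False)
print(ledger.trust("senior_member"))  # 0.75: two good calls on a uniform prior
print(ledger.trust("newcomer"))       # ~0.33: one bad call
```

[On the comment's scaling question: individual ledgers require up to O(n^2) pairwise records across a group of n members, while a shared "trusted member" status needs only O(n) records but forces the group to agree on a single gatekeeping ritual.]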

Maybe in the "rationality dojo" we should not pit individuals against individuals, but small groups against small groups, so that we measure not only the power of an individual but also the power of cooperation. (At this moment I imagine the 1+3 ninja teams from Naruto.)