Why Our Kind Can't Cooperate

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-20T08:37:22.001Z · LW · GW · Legacy · 211 comments

From when I was still forced to attend, I remember our synagogue's annual fundraising appeal.  It was a simple enough format, if I recall correctly.  The rabbi and the treasurer talked about the shul's expenses and how vital this annual fundraise was, and then the synagogue's members called out their pledges from their seats.

Straightforward, yes?

Let me tell you about a different annual fundraising appeal.  One that I ran, in fact; during the early years of a nonprofit organization that may not be named.  One difference was that the appeal was conducted over the Internet.  And another difference was that the audience was largely drawn from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd.  (To point in the rough direction of an empirical cluster in personspace.  If you understood the phrase "empirical cluster in personspace" then you know who I'm talking about.)

I crafted the fundraising appeal with care.  By my nature I'm too proud to ask other people for help; but I've gotten over around 60% of that reluctance over the years.  The nonprofit needed money and was growing too slowly, so I put some force and poetry into that year's annual appeal.  I sent it out to several mailing lists that covered most of our potential support base.

And almost immediately, people started posting to the mailing lists about why they weren't going to donate.  Some of them raised basic questions about the nonprofit's philosophy and mission.  Others talked about their brilliant ideas for all the other sources that the nonprofit could get funding from, instead of them.  (They didn't volunteer to contact any of those sources themselves, they just had ideas for how we could do it.)

Now you might say, "Well, maybe your mission and philosophy did have basic problems—you wouldn't want to censor that discussion, would you?"

Hold on to that thought.

Because people were donating.  We started getting donations right away, via Paypal.  We even got congratulatory notes saying how the appeal had finally gotten them to start moving.  A donation of $111.11 was accompanied by a message saying, "I decided to give **** a little bit more.  One more hundred, one more ten, one more single, one more dime, and one more penny.  All may not be for one, but this one is trying to be for all."

But none of those donors posted their agreement to the mailing list.  Not one.

So far as any of those donors knew, they were alone.  And when they tuned in the next day, they discovered not thanks, but arguments for why they shouldn't have donated.  The criticisms, the justifications for not donating—only those were displayed proudly in the open.

As though the treasurer had finished his annual appeal, and everyone not making a pledge had proudly stood up to call out justifications for refusing; while those making pledges whispered them quietly, so that no one could hear.

I know someone with a rationalist cause who goes around plaintively asking, "How come the Raelian flying-saucer cult can get tens of thousands of members [probably around 40,000] interested in complete nonsense, but we can't even get a thousand people working on this?"

The obvious wrong way to finish this thought is to say, "Let's do what the Raelians do!  Let's add some nonsense to this meme!"  For the benefit of those not immediately stopped by their ethical inhibitions, I will observe that there may be a hundred failed flying-saucer cults for every one that becomes famous.  And the Dark Side may require non-obvious skills, which you, yes you, do not have:  Not everyone can be a Sith Lord.  In particular, if you talk about your planned lies on the public Internet, you fail.  I'm no master criminal, but even I can tell certain people are not cut out to be crooks.

So it's probably not a good idea to cultivate a sense of violated entitlement at the thought that some other group, who you think ought to be inferior to you, has more money and followers.  That path leads to—pardon the expression—the Dark Side.

But it probably does make sense to start asking ourselves some pointed questions, if supposed "rationalists" can't manage to coordinate as well as a flying-saucer cult.

How do things work on the Dark Side?

The respected leader speaks, and there comes a chorus of pure agreement: if there are any who harbor inward doubts, they keep them to themselves.  So all the individual members of the audience see this atmosphere of pure agreement, and they feel more confident in the ideas presented—even if they, personally, harbored inward doubts, why, everyone else seems to agree with it.

("Pluralistic ignorance" is the standard label for this.)

If anyone is still unpersuaded after that, they leave the group (or in some places, are executed)—and the remainder are more in agreement, and reinforce each other with less interference.

(I call that "evaporative cooling of groups".)

The ideas themselves, not just the leader, generate unbounded enthusiasm and praise.  The halo effect is that perceptions of all positive qualities correlate—e.g. telling subjects about the benefits of a food preservative made them judge it as lower-risk, even though the quantities were logically uncorrelated.  This can create a positive feedback effect that makes an idea seem better and better and better, especially if criticism is perceived as traitorous or sinful.

(Which I term the "affective death spiral".)

So these are all examples of strong Dark Side forces that can bind groups together.

And presumably we would not go so far as to dirty our hands with such...

Therefore, as a group, the Light Side will always be divided and weak.  Atheists, libertarians, technophiles, nerds, science-fiction fans, scientists, or even non-fundamentalist religions, will never be capable of acting with the fanatic unity that animates radical Islam.  Technological advantage can only go so far; your tools can be copied or stolen, and used against you.  In the end the Light Side will always lose in any group conflict, and the future inevitably belongs to the Dark.

I think that one's reaction to this prospect says a lot about one's attitude towards "rationality".

Some "Clash of Civilizations" writers seem to accept that the Enlightenment is destined to lose out in the long run to radical Islam, and sigh, and shake their heads sadly.  I suppose they're trying to signal their cynical sophistication or something.

For myself, I always thought—call me loony—that a true rationalist ought to be effective in the real world.

So I have a problem with the idea that the Dark Side, thanks to their pluralistic ignorance and affective death spirals, will always win because they are better coordinated than us.

You would think, perhaps, that real rationalists ought to be more coordinated?  Surely all that unreason must have its disadvantages?  That mode can't be optimal, can it?

And if current "rationalist" groups cannot coordinate—if they can't support group projects so well as a single synagogue draws donations from its members—well, I leave it to you to finish that syllogism.

There's a saying I sometimes use:  "It is dangerous to be half a rationalist."

For example, I can think of ways to sabotage someone's intelligence by selectively teaching them certain methods of rationality.  Suppose you taught someone a long list of logical fallacies and cognitive biases, and trained them to spot those fallacies and biases in other people's arguments.  But you are careful to pick those fallacies and biases that are easiest to accuse others of, the most general ones that can easily be misapplied.  And you do not warn them to scrutinize arguments they agree with just as hard as they scrutinize incongruent arguments for flaws.  So they have acquired a great repertoire of flaws of which to accuse only arguments and arguers they don't like.  This, I suspect, is one of the primary ways that smart people end up stupid.  (And note, by the way, that I have just given you another Fully General Counterargument against smart people whose arguments you don't like.)

Similarly, if you wanted to ensure that a group of "rationalists" never accomplished any task requiring more than one person, you could teach them only techniques of individual rationality, without mentioning anything about techniques of coordinated group rationality.

I'll write more later (tomorrow?) on how I think rationalists might be able to coordinate better.  But today I want to focus on what you might call the culture of disagreement, or even, the culture of objections, which is one of the two major forces preventing the atheist/libertarian/technophile crowd from coordinating.

Imagine that you're at a conference, and the speaker gives a 30-minute talk.  Afterward, people line up at the microphones for questions.  The first questioner objects to the logarithmic scale on the graph in slide 14; he quotes Tufte's The Visual Display of Quantitative Information.  The second questioner disputes a claim made in slide 3.  The third questioner suggests an alternative hypothesis that seems to explain the same data...

Perfectly normal, right?  Now imagine that you're at a conference, and the speaker gives a 30-minute talk.  People line up at the microphone.

The first person says, "I agree with everything you said in your talk, and I think you're brilliant."  Then steps aside.

The second person says, "Slide 14 was beautiful, I learned a lot from it.  You're awesome."  Steps aside.

The third person—

Well, you'll never know what the third person at the microphone had to say, because by this time, you've fled screaming out of the room, propelled by a bone-deep terror as if Cthulhu had erupted from the podium, the fear of the impossibly unnatural phenomenon that has invaded your conference.

Yes, a group which can't tolerate disagreement is not rational.  But if you tolerate only disagreement—if you tolerate disagreement but not agreement—then you also are not rational.  You're only willing to hear some honest thoughts, but not others.  You are a dangerous half-a-rationalist.

We are as uncomfortable together as flying-saucer cult members are uncomfortable apart.  That can't be right either.  Reversed stupidity is not intelligence.

Let's say we have two groups of soldiers.  In group 1, the privates are ignorant of tactics and strategy; only the sergeants know anything about tactics and only the officers know anything about strategy.  In group 2, everyone at all levels knows all about tactics and strategy.

Should we expect group 1 to defeat group 2, because group 1 will follow orders, while everyone in group 2 comes up with better ideas than whatever orders they were given?

In this case I have to question how much group 2 really understands about military theory, because it is an elementary proposition that an uncoordinated mob gets slaughtered.

Doing worse with more knowledge means you are doing something very wrong.  You should always be able to at least implement the same strategy you would use if you were ignorant, and preferably do better.  You definitely should not do worse.  If you find yourself regretting your "rationality" then you should reconsider what is rational.

On the other hand, if you are only half-a-rationalist, you can easily do worse with more knowledge.  I recall a lovely experiment which showed that politically opinionated students with more knowledge of the issues reacted less to incongruent evidence, because they had more ammunition with which to counter-argue only incongruent evidence.

We would seem to be stuck in an awful valley of partial rationality where we end up more poorly coordinated than religious fundamentalists, able to put forth less effort than flying-saucer cultists.  True, what little effort we do manage to put forth may be better-targeted at helping people rather than the reverse—but that is not an acceptable excuse.

If I were setting forth to systematically train rationalists, there would be lessons on how to disagree and lessons on how to agree, lessons intended to make the trainee more comfortable with dissent, and lessons intended to make them more comfortable with conformity.  One day everyone shows up dressed differently, another day they all show up in uniform.  You've got to cover both sides, or you're only half a rationalist.

Can you imagine training prospective rationalists to wear a uniform and march in lockstep, and practice sessions where they agree with each other and applaud everything a speaker on a podium says?  It sounds like unspeakable horror, doesn't it, like the whole thing has admitted outright to being an evil cult?  But why is it not okay to practice that, while it is okay to practice disagreeing with everyone else in the crowd?  Are you never going to have to agree with the majority?

Our culture puts all the emphasis on heroic disagreement and heroic defiance, and none on heroic agreement or heroic group consensus.  We signal our superior intelligence and our membership in the nonconformist community by inventing clever objections to others' arguments.  Perhaps that is why the atheist/libertarian/technophile/sf-fan/Silicon-Valley/programmer/early-adopter crowd stays marginalized, losing battles with less nonconformist factions in larger society.  No, we're not losing because we're so superior, we're losing because our exclusively individualist traditions sabotage our ability to cooperate.

The other major component that I think sabotages group efforts in the atheist/libertarian/technophile/etcetera community, is being ashamed of strong feelings.  We still have the Spock archetype of rationality stuck in our heads, rationality as dispassion.  Or perhaps a related mistake, rationality as cynicism—trying to signal your superior world-weary sophistication by showing that you care less than others.  Being careful to ostentatiously, publicly look down on those so naive as to show they care strongly about anything.

Wouldn't it make you feel uncomfortable if the speaker at the podium said that he cared so strongly about, say, fighting aging, that he would willingly die for the cause?

But it is nowhere written in either probability theory or decision theory that a rationalist should not care.  I've looked over those equations and, really, it's not in there.

The best informal definition I've ever heard of rationality is "That which can be destroyed by the truth should be."  We should aspire to feel the emotions that fit the facts, not aspire to feel no emotion.  If an emotion can be destroyed by truth, we should relinquish it.  But if a cause is worth striving for, then let us by all means feel fully its importance.

Some things are worth dying for.  Yes, really!  And if we can't get comfortable with admitting it and hearing others say it, then we're going to have trouble caring enough—as well as coordinating enough—to put some effort into group projects.  You've got to teach both sides of it, "That which can be destroyed by the truth should be," and "That which the truth nourishes should thrive."

I've heard it argued that the taboo against emotional language in, say, science papers, is an important part of letting the facts fight it out without distraction.  That doesn't mean the taboo should apply everywhere.  I think that there are parts of life where we should learn to applaud strong emotional language, eloquence, and poetry.  When there's something that needs doing, poetic appeals help get it done, and, therefore, are themselves to be applauded.

We need to keep our efforts to expose counterproductive causes and unjustified appeals from stomping on tasks that genuinely need doing.  You need both sides of it—the willingness to turn away from counterproductive causes, and the willingness to praise productive ones; the strength to be unswayed by ungrounded appeals, and the strength to be swayed by grounded ones.

I think the synagogue at their annual appeal had it right, really.  They weren't going down row by row and putting individuals on the spot, staring at them and saying, "How much will you donate, Mr. Schwartz?"  People simply announced their pledges—not with grand drama and pride, just simple announcements—and that encouraged others to do the same.  Those who had nothing to give, stayed silent; those who had objections, chose some later or earlier time to voice them.  That's probably about the way things should be in a sane human community—taking into account that people often have trouble getting as motivated as they wish they were, and can be helped by social encouragement to overcome this weakness of will.

But even if you disagree with that part, then let us say that both supporting and countersupporting opinions should have been publicly voiced.  Supporters being faced by an apparently solid wall of objections and disagreements—even if it resulted from their own uncomfortable self-censorship—is not group rationality.  It is the mere mirror image of what Dark Side groups do to keep their followers.  Reversed stupidity is not intelligence.

211 comments

Comments sorted by top scores.

comment by Paul Crowley (ciphergoth) · 2009-03-20T12:32:24.997Z · LW(p) · GW(p)

In this community, agreeing with a poster such as yourself signals me as sycophantic and weak-minded; disagreement signals my independence and courage. There's also a sense that "there are leaders and followers in this world, and obviously just getting behind the program is no task for so great a mind as mine".

However, that's not the only reason I might hesitate to post my agreement; I might prefer only to post when I have something to add, which would more usually be disagreement. Since I don't only vote up things I agree with, perhaps I should start hacking on the feature that allows you to say "6 members marked their broad agreement with this point (click for list of members)".
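
A minimal sketch of what the bookkeeping for such a feature might look like (purely illustrative Python, not any actual LessWrong code; every name in it is hypothetical):

    # Hypothetical sketch of an "agreement marker" feature: members can mark
    # broad agreement with a comment, and the site shows a one-line summary.
    from collections import defaultdict

    class AgreementRegistry:
        def __init__(self):
            # comment_id -> set of usernames who marked broad agreement
            self._marks = defaultdict(set)

        def mark_agreement(self, comment_id, username):
            """Record that `username` broadly agrees with `comment_id`."""
            self._marks[comment_id].add(username)

        def summary(self, comment_id):
            """Render the summary line ("click for list of members")."""
            members = sorted(self._marks[comment_id])
            return "%d members marked their broad agreement with this point (%s)" % (
                len(members), ", ".join(members))

    registry = AgreementRegistry()
    for user in ["ciphergoth", "Cameron_Taylor", "Court_Merrigan"]:
        registry.mark_agreement("comment-42", user)
    print(registry.summary("comment-42"))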

Replies from: Cameron_Taylor, Gray, Emile
comment by Cameron_Taylor · 2009-03-20T13:15:43.838Z · LW(p) · GW(p)

However, that's not the only reason I might hesitate to post my agreement; I might prefer only to post when I have something to add, which would more usually be disagreement. Since I don't only vote up things I agree with, perhaps I should start hacking on the feature that allows you to say "6 members marked their broad agreement with this point (click for list of members)".

That would be great.

Replies from: Court_Merrigan
comment by Court_Merrigan · 2009-03-21T02:32:19.855Z · LW(p) · GW(p)

That would be a great feature, I think. Ditto on broad disagreements.

comment by Gray · 2011-03-24T04:43:50.047Z · LW(p) · GW(p)

This is a good point, but I think there's a ready solution to that. Agreement and disagreement, by themselves, are rather superficial. Arguments, on the other hand, rationalists have more respect for. When you agree with someone, it seems that you don't have the burden to formulate an argument because, implicitly, you're referring to the first person's argument. But when you disagree with someone, you do have the burden of formulating a counterargument. So I think this is why rationalists tend to have more respect for disagreement than agreement: disagreement requires an argument, whereas agreement doesn't.

But on reflection, this arrangement is fallacious. Why shouldn't agreement also require an argument? I think it may seem to add to the strength of an argument if multiple people agree that it is sound, but I don't think it does in reality. If multiple people develop the same argument independently, then the argument might be somewhat stronger; but clearly this isn't the kind of agreement we're talking about here. If I make an argument, you read my argument, and then you agree that my argument is sound, you haven't developed the same argument independently. Worse, I've just biased you towards my argument.

The better alternative is, when you agree with an argument, there should be the burden of devising a different argument that argues for the same conclusion. Of course, citing evidence also counts as an "argument". In this manner, a community of rationalists can increase the strength of a conclusion through induction; the more arguments there are for a conclusion, the stronger that conclusion is, and the better it can be relied upon.

Replies from: CuSithBell, Idan Arye, CuSithBell
comment by CuSithBell · 2011-03-24T04:52:38.236Z · LW(p) · GW(p)

In that case you're "writing the last line first", so I suspect it might not reduce bias. Personally, I often try to come up with arguments against positions I hold or am considering, which sometimes work and sometimes do not. Of course, this isn't foolproof either, but it might be less problematic.

Replies from: None, Yoav Ravid, Gray
comment by [deleted] · 2011-03-24T05:31:28.156Z · LW(p) · GW(p)

In real life this is common, and the results are not always bad. It's incredibly common in mathematics. For example, Fermat's Last Theorem was a "last line" for a long time, until someone finally filled in the argument. It may also be worth mentioning that the experimental method is also "last line first". That is, at the start you state the hypothesis that you're about to test, and then you test the hypothesis - which test, depending on the result, may amount to an argument from evidence for the hypothesis.

Another case in point, this time from history: Darwin and natural selection. At some point in his research, natural selection occurred to him. It wasn't, at that point, something that he had very strong evidence for, which is why he spent a lot of time gathering evidence and building argument for it. So there's another "last line first" which turned out pretty well in the end.

Replies from: JGWeissman, CuSithBell
comment by JGWeissman · 2011-03-24T06:02:26.898Z · LW(p) · GW(p)

It may also be worth mentioning that the experimental method is also "last line first". That is, at the start you state the hypothesis that you're about to test, and then you test the hypothesis - which test, depending on the result, may amount to an argument from evidence for the hypothesis.

No. When you state the hypothesis, it means that, depending on the evidence you are about to gather, your bottom line will be that the hypothesis is true or that the hypothesis is false (or that you can't tell if the hypothesis is true or false). Writing the Bottom Line First would be deciding in advance to conclude that the hypothesis is true.

Depending on where the hypothesis came from, the experimental method may be Privileging the Hypothesis, which the social process of science compensates for by requiring lots of evidence.

Replies from: None
comment by [deleted] · 2011-03-24T06:21:24.929Z · LW(p) · GW(p)

Deciding in advance to conclude that the hypothesis is true is not a danger if the way you decide to do that is by some means that in reality won't let you do that if the hypothesis is false. Keep in mind: you can decide to do something and still be unable to do it.

Suppose I believe that a hypothesis is true. I believe it so strongly that I believe a well-designed experiment will prove that it is true. So I decide in advance to conclude that the hypothesis is true by doing what I am positive in advance will prove the hypothesis, which is to run a well-designed experiment which will convince the doubters. So I do that, and suppose that the experiment supports my hypothesis. The fact that my intentions were to prove the hypothesis doesn't invalidate the result of the experiment. The experiment is by its own good design protected from my intentions.

A well-designed experiment will yield truth whatever the intentions of the experimenter. What makes an experiment good isn't good intentions on the part of the experimenter. That's the whole point of the experiment: we can't trust the experimenter, and so the experiment by design renders the experimenter powerless. (Of course, we can increase our confidence even further by replicating the experiment.)

Now let's change both the intention and the method. Suppose you don't know whether a hypothesis is true and decide to discover whether it is true by examining the evidence. The method you choose is "preponderance of evidence". It is quite possible for you, completely erroneously and unintentionally, to in effect cherry-pick evidence for the hypothesis you were trying to test. People make procedural mistakes like this all the time without intending to do so. For example, you see one bit of evidence, and make note of the fact that this particular bit of evidence makes the hypothesis appear to be true. But now, uh oh! You're subject to confirmation bias! That means that you will automatically, without meaning to, start to pay attention to confirming and ignore disconfirming evidence. And you didn't mean to!

Depending on where the hypothesis came from, the experimental method may be Privileging the Hypothesis

Absolutely, but privileging the hypothesis is a danger whether or not you have decided in advance to conclude the hypothesis. Look at Eliezer's own description:

Then, one of the detectives says, "Well... we have no idea who did it... no particular evidence singling out any of the million people in this city... but let's consider the hypothesis that this murder was committed by Mortimer Q. Snodgrass, who lives at 128 Ordinary Ln. It could have been him, after all."

This detective has, importantly, not decided in advance to conclude that Snodgrass is the murderer.

comment by CuSithBell · 2011-03-24T05:38:16.749Z · LW(p) · GW(p)

I think the thing which is jumping out as strange to me is doing this after you've been convinced, seemingly to enhance your credence. Still, this is a good point.

Replies from: None
comment by [deleted] · 2011-03-24T05:50:13.693Z · LW(p) · GW(p)

The danger that Eliezer warns against is absolutely real. So what's special about math? In the case of math, I think that there is something special, and that is that it's really, really hard to make a bogus argument in math and pass it by somebody who's paying attention. In the case of experimental science, the experiment is deliberately constructed to take the result out of the hands of the experimenter. At least it should be. The experimenter only controls certain variables.

So why is there ever a danger? The problem seems to arise with the mode of argument that involves "the preponderance of evidence". That kind of argument is totally exposed to cherry-picking, allowing the cherry-picker to create whatever preponderance he wants. It is, unfortunately, maybe the most common argument that you'll find in the world.

comment by Yoav Ravid · 2023-03-14T14:15:58.977Z · LW(p) · GW(p)

The two methods can be combined: when you read something you agree with, try to come up with a counterargument. If you can't refute the counterargument, post it; if you can, post both the counterargument and its refutation.

comment by Gray · 2011-03-24T04:59:06.810Z · LW(p) · GW(p)

Sorry, I'm not exactly sure what "writing the last line first" means. I'm guessing you're referring to the syllogism, and you take my proposal to mean arguing backwards from the conclusion to produce another argument for the same conclusion. Is this correct?

Replies from: CuSithBell
comment by CuSithBell · 2011-03-24T05:33:28.872Z · LW(p) · GW(p)

I'm referring to this notion of knowing what you want to conclude, and then fitting the argument to that specification. My intuition, at least, is that it would be more useful to focus on weaknesses of your newly adopted position - and if it's right, you're bound to end up with new arguments in favor of it anyway.

I agree, though, that agreement should not be taken as license to avoid engaging with a position.

I suppose I should note, given the origin of these comments, that I recommend these things only in a context of collaboration - and if we're talking about a concrete suggestion for action or the like rather than an airy matter of logic, the rules are somewhat different.

comment by Idan Arye · 2020-11-03T09:00:43.540Z · LW(p) · GW(p)

Should arguers be encouraged, then, to not write all the arguments in favor of their claim, in order to leave more room for those who agree with them to add their own supporting arguments?

This requires either refraining from fully exploring the subject (so that you don't think of all the arguments you can) or straight out omitting arguments you thought of. Not exactly Dark Side, but not fully Light Side either...

comment by CuSithBell · 2011-03-24T05:44:50.876Z · LW(p) · GW(p)

Y'know, you may be right. I also suspect this is something that depends to a significant extent on the type of proposition under consideration.

comment by Emile · 2009-03-20T13:31:05.302Z · LW(p) · GW(p)

In this community, agreeing with a poster such as yourself signals me as sycophantic and weak-minded; disagreement signals my independence and courage. There's also a sense that "there are leaders and followers in this world, and obviously just getting behind the program is no task for so great a mind as mine".

Does it really signal that to other readers, or is that just in your mind? If you see someone posting an agreement, do you really judge him as a weak-minded sycophant?

Replies from: Nebu, Annoyance
comment by Nebu · 2009-03-20T17:53:52.643Z · LW(p) · GW(p)

If they post just an "Amazing post, as usual Eliezer" without further informative contribution, then I too get this mild sense of "sucking up" going on.

Actually, this whole blog (as well as Overcoming Bias) does have this subtle aura of "Eliezer is the rationality God that we should all worship". I don't blame EY for this; more probably, people are just naturally (evolutionarily?) inclined to religious behaviour, and if you hang around LW and OB, then you might project towards the person who acts like the alpha-male of the pack. In fact, it might not even need to have any religious undertones to it. It could just be "alpha-male mammalian evolution society" stuff.

Eliezer is a very smart person. Certainly much smarter than me. But so is Robin Hanson (I won't get into which one is "smarter", as they are both at least two levels above me), and I feel he is often -- "under-appreciated" is perhaps the closest word? -- perhaps because he doesn't post as often, but perhaps also because people tend to "me too" Eliezer a lot more often than they "me too" Robin (but again, this might be because EY posts much more frequently than RH).

Replies from: pjeby, Cameron_Taylor
comment by pjeby · 2009-03-20T18:21:29.930Z · LW(p) · GW(p)

It's simpler than that: 1) Eliezer expresses certainty more often than Robin, and 2) he self-discloses to a greater degree. The combination of the two induces a tendency toward identification and aspiration. (The evolutionary reasons for this are left as an exercise for the reader.)

Please note that this isn't a denigration -- I do exactly the same things in my own writing, and I also identify with and admire Eliezer. Just knowing what causes it doesn't make the effect go away.

(To a certain extent, it's just audience-selection -- expressing your opinions and personality clearly will make people who agree/like what they hear become followers, those who disagree/dislike become trolls, and those who don't care one way or the other just go away altogether. NOT expressing these things clearly, on the other hand, produces less emotion either way. I love the information I get from Robin's posts, but they don't cause me to feel the same degree of personal connection to their author.)

comment by Cameron_Taylor · 2009-03-21T01:07:12.491Z · LW(p) · GW(p)

Eliezer is a very smart person. Certainly much smarter than me. But so is Robin Hanson (I won't get into which one is "smarter", as they are both at least two levels above me), and I feel he is often -- "under-appreciated" is perhaps the closest word? -- perhaps because he doesn't post as often, but perhaps also because people tend to "me too" Eliezer a lot more often than they "me too" Robin (but again, this might be because EY posts much more frequently than RH).

I do believe I under-appreciate Robin. However, what it feels like to me is that my personality, at I suspect a genetic level, is more similar to that of Eliezer than of Robin. In particular my impression of Robin is that he is more talented than Eliezer at social kinds of cognition. That does not mean I think Robin is less rational. It means that when I read Eliezer's work I think "yeah, that's bloody obvious!" whereas for some of Robin's significant contributions I actually have to actively account for my own biases and work to consider his expertise and that of those he refers to.

My suspicion is that people who have similar minds to Robin would be less inclined to be involved in rationalist discourse than the more instinctively individualist. This accounts somewhat for the differences in 'me too's but if anything makes Robin more remarkable.

comment by Annoyance · 2009-03-20T14:18:07.784Z · LW(p) · GW(p)

"If you see someone posting an agreement, do you really judge him as a weak-minded sycophant?"

It depends greatly on what they're agreeing with, and what they've said and done before.

comment by dfranke · 2009-03-20T20:33:05.247Z · LW(p) · GW(p)

The nice thing about karma/voting sites like this one is that they provide an efficient and socially acceptable mechanism for signaling agreement: just hit the upmod button. Nobody wants to read or listen to page after page of "me too"; forcing people to tolerate this would be bad enough to negate the advantage of making agreement visible. Voting accomplishes the same visibility without the irritating side-effects.

Replies from: Nebu, diegocaleiro
comment by Nebu · 2009-03-20T20:48:11.419Z · LW(p) · GW(p)

There's a bit of noise, as I sometimes vote up someone I disagree with if they raise an interesting point, and I very, very rarely vote someone down just because I disagree with them.

This "bit of noise" becomes significant on sites with a small number of subscribers, as a +/-2 vote is a "big deal".

Replies from: dfranke
comment by dfranke · 2009-03-20T20:58:57.171Z · LW(p) · GW(p)

I think that's a feature, not a bug. What an upvote expresses is nearer to "you should listen to this guy" than to "I agree with this guy", but I think the former is more useful information.

comment by diegocaleiro · 2010-12-17T17:07:30.919Z · LW(p) · GW(p)

There should be an emotional display of how many upvotes a post got.

Numbers are, well, too numbery for that.

Either a smiley with an ever-growing smile,

or a balloon that grows bigger and bigger (for posts that really get way too upvoted, the balloon could explode into colorful bright carnival paper, or candy, or Brad Pitt, or Russian Redheads...)

Ok, balloon or smiley, who is with me?

Replies from: notriddle
comment by notriddle · 2013-04-30T01:24:37.534Z · LW(p) · GW(p)

I like the idea, but they seem kind of gimmicky. (thinking of LW's comments section, it would be hard to give another icon the kind of prominence we want, without making it too big). How about a green/red bar, like the one on YouTube?

comment by MBlume · 2009-03-21T04:00:36.951Z · LW(p) · GW(p)

I must admit, I think I do find myself going into Vulcan mode when posting on LW. I find myself censoring very simple social cues -- expressions of gratitude, agreement, emotion -- because I imagine them being taken for noise. I think I'm going to make an effort to snap myself out of this.

Replies from: Vladimir_Golovin
comment by Vladimir_Golovin · 2009-03-21T07:40:49.885Z · LW(p) · GW(p)

Same here. It's very natural for me to thank people when they say or do something awesome, to encourage promising newbies, and to express my agreement when I do agree, but I got the impression that such things are generally frowned upon here, so I found myself suppressing them.

Actually, I didn't mind that much -- the power of ideas discussed here way outweighs these social inconveniences, and I can easily live with that. But personally, I would prefer to be able to express my agreement and gratitude without spending too many calories on worrying about my tribal status.

(Of course we'll need to keep the signal/noise ratio in check, but I'll post my ideas on that in a separate comment).

comment by TheOtherDave · 2010-11-21T21:31:24.333Z · LW(p) · GW(p)

Two thoughts.

  1. In any relationship where I have influence, I expect to get more of what I model.

For example, in a community where I have influence, I expect demonstrating explicit support to push community norms towards explicit support, and demonstrating criticism to push norms towards criticism.

This creates the admittedly frustrating situation where, if a community is too critical and insufficiently supportive, it is counterproductive for me to criticize that. That just models criticism, which gets me more criticism; the more compelling and powerful my criticism, the more criticism I'll get in return.

If a community is too critical and insufficiently supportive, I do better to model agreement as visibly and as consistently as I can, and to avoid modeling criticism. For example, to criticize people privately and support them publicly.

  2. In any relationship where I have influence, I expect to get more of what I reward.

If a community is too critical and insufficiently supportive, I do well to be actively on the lookout for others' supportive contributions and to reward them (for example: by praising them, by calling other people's attention to them, and/or by paying attention to them myself). I similarly do well to withhold those rewards from critical contributions.

Replies from: Vaniver
comment by Vaniver · 2010-11-21T21:35:51.405Z · LW(p) · GW(p)

Voted up. (Explicit support and rewards, ahoy!)

comment by Vladimir_Golovin · 2009-03-20T21:02:37.799Z · LW(p) · GW(p)

Heh, it seems like this post has primed me for agreement, and I upvoted a lot more comments than I usually do. And it looks like many others did this as well -- look at the upvote counts! I was reading and voting with Kibitzer on, and was surprised to see the numbers.

(Have I just lowered my status by signaling that I'm susceptible to priming?)

Replies from: pjeby, jschulter
comment by pjeby · 2009-03-20T21:37:37.403Z · LW(p) · GW(p)

Nah, you've raised it, by signaling that you're honest. At least, that's how it would work among true rationalists (as opposed to anti-irrationalists). ;-)

comment by jschulter · 2011-01-19T02:16:56.642Z · LW(p) · GW(p)

They surprised me too. (I actually felt the urge to use an unnecessary exclamation point there, the priming's made me so enthusiastic...)

And I think that the status gained from the fact that you noticed being primed probably outweighed any lost due to us being told it happened. Though now that we're noticing it, we need to decide which frequency of upvoting we should be using so we can avoid the effect.

comment by Benquo · 2023-06-28T11:52:32.163Z · LW(p) · GW(p)

This article seems to model rational discourse as a cybernetic system made of two opposite actions that need to be balanced:

  • Agreement / support of shared actions
  • Disagreement / criticism

Agreement and disagreement are not basic elements of a statement about base reality, they're contextual facts about the relation of your belief to others' beliefs. Is "the sky is blue" agreement or dissent? Depends on what other people are saying. If they're saying it's blue, it's agreement. If they're saying it's green, it's dissent. Someone might disagree with someone by supporting an action, or agree with a criticism of what was previously a shared story. When you have a specific belief about the world, that belief is not made of disagreement or agreement with others, it's made of constrained conditional anticipations about your observations [LW · GW].

This error seems likely related to using a synagogue fundraiser as the central case of a shared commitment of resources, rather than something like an assurance contract! There's a very obvious antirational motive for synagogue fundraisers not to welcome criticism - God is made up, and a community organized around the things its members would genuinely like to do together wouldn’t need to invoke fictitious justifications. Rational coordination should be structurally superior, not just the same old methods but for a better cause.

Insofar as there's something to be rescued from this post, it's that establishing common knowledge of well-known facts is underrated, because it helps with coordination to turn mutual knowledge into common knowledge so everyone can rely on everyone else in the community acting on that info. But that also recommends blurting out, "The emperor's naked!".

There's also the problem that sometimes people say stuff that's off-topic and not helpful enough to be worth it - but compressing the complexity of that problem down to managing the level of agreement vs criticism is substituting an easier but unhelpful task in place of a more difficult but important one.

Replies from: Benquo
comment by Benquo · 2023-06-28T11:58:46.510Z · LW(p) · GW(p)

In hindsight, a norm against criticizing during a fundraiser, when there is always a fundraiser, leads to a community getting scammed by people telling an incoherent story about an all-powerful imaginary guy, just like they did in the synagogue example.

comment by AnnaSalamon · 2009-03-20T09:48:31.144Z · LW(p) · GW(p)

Many points that are both new and good. Like prase, and like a selection of other fine LW-ers with whom I hope to be agreeing soon, I think your post is awesome :)

One root of the agreement/disagreement asymmetry is perhaps that many of us aspiring rationalists are intellectual show-offs, and we want our points to show everyone how smart we are. Status feels zero-sum, as though one gains smart-points from poking holes in others' claims and loses smart-points from affirming others' good ideas. Maybe we should brainstorm some schemas for expressing agreement while adding intellectual content and showing our own smarts, like "I think your point on slide 14 is awesome. And I bet it can be extended to new context __", or "I love the analogy you made on page 5; now that I read it, I see how to take my own research farther..."

Related: maybe we feel self-conscious about speaking if we don't have anything "new" to add to the conversation, and we don't notice "I, too, agree" as something new. One approach here would be to voice, not just agreement, but the analysis that's going into each individual's agreement, e.g. "I agree; that sounds just like my own experience trying to get an atheists club started", or "I'm adopting these beliefs now, because I trust Eliezer's judgment here, but I have little confirming evidence of my own, so don't double-count my agreement as new evidence". Voicing the causal structure of our agreement would:

  • Give us practice seeing how others navigate evidence and Aumann-type issues;
  • Expose us to others' evidence;
  • Guard against information cascades (assuming honesty in those participating);
  • Let us affirm our identities as smart rationalists, while we express agreement. :)
Replies from: MBlume, Davorak
comment by MBlume · 2009-03-20T09:55:32.676Z · LW(p) · GW(p)

Related: maybe we feel self-conscious about speaking if we don't have anything "new" to add to the conversation, and we don't notice "I, too, agree" as something new.

I've often wrestled with this myself, and hesitated to comment for just this reason.

Replies from: MichaelGR, CannibalSmith, Eliezer_Yudkowsky, Roko
comment by MichaelGR · 2009-03-21T00:42:48.882Z · LW(p) · GW(p)

Me too.

comment by CannibalSmith · 2009-03-20T11:58:54.618Z · LW(p) · GW(p)

Me too!

comment by Roko · 2009-03-20T21:39:47.482Z · LW(p) · GW(p)

Me too

comment by Davorak · 2011-02-05T02:03:15.504Z · LW(p) · GW(p)

I would encourage you to make this a front page post if you have the time. I think these thoughts and strategies are positive, rational and necessary group building skills for any long term group that fulfills rationalist goals. Or maybe it should be in the community guidelines (do these exist? I imagine the sequences as extended community guidelines) so most new members read them over.

comment by MoreOn · 2012-01-01T19:51:39.708Z · LW(p) · GW(p)

“If I agree, why should I bother saying it? Doesn’t my silence signal agreement enough?”

That’s been my non-verbal reasoning for years now! Not just here: everywhere. People have been telling me, with various degrees of success, that I never even speak except to argue. To those who have been successful in getting through to me, I would respond with, “Maybe it sounds like I’m arguing, but you’re WRONG. I’m not arguing!”

Until I read this post, I wasn’t even aware that I was doing it. Yikes!

Replies from: Omegaile
comment by Omegaile · 2012-01-22T20:32:40.974Z · LW(p) · GW(p)

“If I agree, why should I bother saying it? Doesn’t my silence signal agreement enough?”

The fact is that there is a strong motive to disagree: either I change my opinion, or you do.

On the other hand, the motives for agreeing are much more subtle: there is an ego boost, and I can influence other people to conform. Unless I am a very influential person, these two reasons are important as a group, but not much individually.

Which leads us to think: there is a similar problem with elections, and why economists don't vote.

Anyway, there is a nice analogy with physics: the electromagnetic force is much stronger than the gravitational force, but at large scales gravity is much more influential. (Which is kinda obvious, and made me wonder why no one pointed this out on this post before.)

comment by RickJS · 2009-09-08T18:18:43.125Z · LW(p) · GW(p)

BRAVO, Eliezer! Huzzah! It's about time!

I don't know if you have succeeded in becoming a full rationalist, but I know I haven't! I keep being surprised / appalled / amused at my own behavior. Intelligence is way overrated! Rationalism is my goal, but I'm built on evolved wet ware that is often in control. Sometimes my conscious, chooses-to-be-rationalist mind is found to be in the kiddy seat with the toy steering wheel.

I haven't been publicly talking about my contributions to the Singularity Institute and others fighting to save us from ourselves. Part of that originates in my father's attitude that it is improper to brag.

I now publicly announce that I have donated at least $11,000 to the Singularity Institute and its projects over the last year. I spend ~25 hours per week on saving humanity from Homo Sapiens.

I say that to invite others to JOIN IN. Give humanity a BIG term in your utility function. Extinction is Forever. Extinction is for ... us?

Thank you, Eliezer! Once again, you've shown me a blind spot, a bias, an area where I can now be less wrong than I was.

With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens™ :-|

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-09-08T18:45:02.827Z · LW(p) · GW(p)

Cool!

Just curious... What do you do for 25 hours a week to save humanity from itself?

Replies from: RickJS
comment by RickJS · 2009-09-09T16:56:38.618Z · LW(p) · GW(p)

Mostly, I study. I also go to a few conferences (I'll be at the Singularity Summit) and listen. I even occasionally speak on key issues (IMO), such as (please try thinking WITH these before attacking them. Try agreeing for at least a while.):

  • "There is no safety in assuring we have a power switch on a super-intelligence. That would be power at a whole new level. That's pretty much Absolute Power and would bring out the innate corruption / corruptibility / self-interest in just about anybody."
  • "We need Somebody to take the dangerous toys (arsenals) away."
  • "Just what is Humanity up to that requires 6 Billion individuals?"

All of that is IN MY OPINION. <-- OK, the comments to this post showed me the error of my ways. I'm leaving this here because comments refer to it.

Edited 07/14/2010 because I've learned since 2009-09 that I said a lot of nonsense.

Replies from: Jack, thomblake
comment by Jack · 2009-09-09T17:54:25.075Z · LW(p) · GW(p)

I can't help but think that those activities aren't going to do much to save humanity. I don't want to send you into an existential crisis or anything, but maybe you should tone down your job description. "Saving Humanity from Homo Sapiens™" is maybe acceptable for Superman. It might be affably egotistical for someone who does preventive counter-terrorism re: experimental bioweapons. "Saving Humanity from Homo Sapiens one academic conference at a time" doesn't really do it for me.

Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.

Replies from: RickJS, RickJS
comment by RickJS · 2009-09-11T18:32:45.957Z · LW(p) · GW(p)

Jack wrote on 09 September 2009 05:54:25PM:

Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.

I don't wish for it. That part was inside parentheses with a question mark. I merely suspect it MAY be needed.

Please explain to me how the destruction follows from the rule of a god-like totalitarian.

Thank you for your time and attention.

With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)

Replies from: Jack
comment by Jack · 2009-09-12T18:47:46.875Z · LW(p) · GW(p)

Maybe some Homo Sapiens would survive, humanity wouldn't. Are the human animals in 1984 "people"? After Winston Smith dies is there any humanity left?

I can envision a time when less freedom and more authority is necessary for our survival. But a god-like totalitarian pretty much comes out where extinction does in my utility function.

Replies from: pdf23ds, RickJS
comment by pdf23ds · 2009-09-23T10:02:24.615Z · LW(p) · GW(p)

IIRC, Winston Smith doesn't die; by the end, his spirit is completely broken and he's practically a living ghost, but alive.

comment by RickJS · 2009-09-19T23:41:34.403Z · LW(p) · GW(p)

Oh. My mistake. When you wrote, "Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.", I read:

  • [Totalitarian rule... ] ... [is] ... the best way to destroy humanity, (as in cause and effect.)
  • OR maybe you meant: wishing ... [is] ... the best way to destroy humanity

It just never occurred to me you meant, "a god-like totalitarian pretty much comes out where extinction does in my utility function".

Are you willing to consider that totalitarian rule by a machine might be a whole new thing, and quite unlike totalitarian rule by people?

comment by RickJS · 2009-09-11T18:06:05.522Z · LW(p) · GW(p)

Jack wrote on 09 September 2009 05:54:25PM :

I can't help but think that those activities aren't going to do much to save humanity.

I hear that. I wasn't clear. I apologise.

I DON'T KNOW what I can do to turn humanity's course. And, I decline to be one more person who uses that as an excuse to go back to the television set. Those activities are part of my search for a place where I can make a difference.

"Saving Humanity from Homo Sapiens™" is maybe acceptable for Superman.

... but not acceptable from a mere man who cares, eh?

(Oh, all right, I admit, the ™ was tongue-in-cheek!)

Skip down to END BOILERPLATE if and only if you've read version v44m

First, please read this caveat: Please do not accept anything I say as True.

Ever.

I do write a lot of propositions, without saying, "In My Opinion" before each one. It can sound preachy, like I think I've got the Absolute Truth, Without Error. I don't completely trust anything I have to say, and I suggest you don't, either.

Second, I invite you to listen (read) in an unusual way. "Consider it": think WITH this idea for a while. There will be plenty of time to refute it later. I find that, if I START with, "That's so wrong!", I really weaken my ability to "pan for the gold".

If you have a reaction (e.g. "That's WRONG!"), please gently save it aside for later. For just a while, please try on the concept, test drive it, use the idea in your life. Perhaps you'll see something even beyond what I offered.

There will be plenty of time to criticize, attack, and destroy it AFTER you've "panned for the gold". You won't be missing an opportunity.

Third, I want you to "get" what I offered. When you "get it", you have it. You can pick it up and use it, and you can put it down. You don't need to believe it or understand it to do that. Anything you BELIEVE is "glued to your hand"; you can't put it down.

-=-= END BOILERPLATE version 44m

I think we may have different connotations. I'm going to reluctantly use an analogy, but it's just a temporary crutch. Please drop it as soon as you get how I'm using the word 'saving'.

If I said, "I'm playing football," I wouldn't be implying that I'm a one-man team, or that I'm the star, or that the team always loses when I'm not there. Rigorously, it only means that I'm playing football.

However, it is possible to play football for the camaraderie, or the exercise, or to look good, or to avoid losing. A person can play football to win. Regardless of the position played. It's about attitude, commitment, and responsibility SEIZED rather than reluctantly accepted.

I DECLARE that I am saving humanity from Homo Sapiens. That's a declaration, a promise, not a description subject to True / probability / False. I'm playing to win.

Maybe I'll never be allowed to get on the field. I remember the movie Rudy, about Dan Ruettiger. THAT is what it is to be playing football in the face of being a little guy. That points toward what it is to be Saving Humanity from Homo Sapiens in the face of no evidence and no agreement.

You could give me a low probability of ever making a difference. But before you do, ask yourself, "What will this cause?"

It occurs to me that this little sub-thread beginning with "Mostly, I study" illustrates what Eliezer was pointing out in "Why Our Kind Can't Cooperate."

  • "Some things are worth dying for. Yes, really! And if we can't get comfortable with admitting it and hearing others say it, then we're going to have trouble caring enough - as well as coordinating enough - to put some effort into group projects. You've got to teach both sides of it, "That which can be destroyed by the truth should be," and "That which the truth nourishes should thrive." "

You, too, can be Saving Humanity from Homo Sapiens. You start by saying so.

The clock is ticking.

With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)

comment by thomblake · 2009-09-09T17:33:51.445Z · LW(p) · GW(p)

IN MY OPINION

I'm not sure what this was supposed to add, especially with emphasis. Whose opinion would we think it is?

Replies from: RickJS
comment by RickJS · 2009-09-11T00:01:38.393Z · LW(p) · GW(p)

I've been told that my writing sounds preachy or even religious-fanatical. I do write a lot of propositions without saying "In my opinion" in front of each one. I do have a standard boilerplate that I am to put at the beginning of each missive:

First, please read this caveat: Please do not accept anything I say as True.

Ever.

I do write a lot of propositions, without saying, "In My Opinion" before each one. It can sound preachy, like I think I've got the Absolute Truth, Without Error. I don't completely trust anything I have to say, and I suggest you don't, either.

Second, I invite you to listen (read) in an unusual way. "Consider it": think WITH this idea for a while. There will be plenty of time to refute it later. I find that, if I START with, "That's so wrong!", I really weaken my ability to "pan for the gold".

If you have a reaction (e.g. "That's WRONG!"), please gently save it aside for later. For just a while, please try on the concept, test drive it, use the idea in your life. Perhaps you'll see something even beyond what I offered.

There will be plenty of time to criticize, attack, and destroy it AFTER you've "panned for the gold". You won't be missing an opportunity.

Third, I want you to "get" what I offered. When you "get it", you have it. You can pick it up and use it, and you can put it down. You don't need to believe it or understand it to do that. Anything you BELIEVE is "glued to your hand"; you can't put it down.

-=-= END Boilerplate

In that post, I got lazy and just threw in the tag line at the end. My mistake. I apologize. I won't do that again.

With respect and high regard,
Rick Schwall
Saving Humanity from Homo Sapiens (playing the game to win, but not claiming I am the star of the team)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-09-11T08:34:32.423Z · LW(p) · GW(p)

This only makes it worse, because you can't excuse a signal. (See rationalization, signals are shallow).

Also: just because you believe you are not fanatical, doesn't mean you are not. People can be caught in affective death spirals even around correct beliefs.

Replies from: RickJS
comment by RickJS · 2009-09-11T19:11:08.977Z · LW(p) · GW(p)

Vladimir_Nesov wrote on 11 September 2009 08:34:32AM:

This only makes it worse, because you can't excuse a signal.

This only makes what worse? Does it make me sound more fanatical?

Please say more about "you can't excuse a signal". Did you mean I can't reverse the first impression the signal inspired in somebody's mind? Or something else?

Also: just because you believe you are not fanatical, doesn't mean you are not. People can be caught in affective death spirals even around correct beliefs.

OK, I'll start with a prior of 10% that I am fanatical and/or caught in an affective death spiral.

What do you recommend I do about my preachy style?

I appreciate your writings on LessWrong. I'm learning a lot.

Thank you for your time and attention.

With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-09-11T19:25:08.689Z · LW(p) · GW(p)

What do you recommend I do about my preachy style?

I suggest trying to determine your true confidence on each statement you write, and use the appropriate language to convey the amount of uncertainty you have about its truth.

If you receive feedback that indicates that your confidence (or apparent confidence) is calibrated too high or too low, then adjust your calibration. Don't just issue a blanket disclaimer like "All of that is IN MY OPINION."
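
A rough sketch of what tallying that feedback might look like, with an invented list of past claims (the confidence levels and outcomes are purely illustrative):

```python
# A sketch of turning feedback into a calibration check: group past claims
# by the confidence that was stated for them, and compare with how often
# they turned out true.  The list below is invented for illustration.

from collections import defaultdict

past_claims = [          # (stated confidence, did it turn out true?)
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.6, True), (0.6, False), (0.6, False),
]

by_confidence = defaultdict(list)
for stated, was_true in past_claims:
    by_confidence[stated].append(was_true)

for stated in sorted(by_confidence):
    outcomes = by_confidence[stated]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%}: right {hit_rate:.0%} of {len(outcomes)} claims")

# If the hit rate runs below the stated confidence, hedge more ("I suspect",
# "probably"); if it runs above, plainer statements are warranted.
```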

Replies from: RickJS
comment by RickJS · 2009-09-19T23:01:55.152Z · LW(p) · GW(p)

OK.

Actually, I'm going to restrain myself to just clarifying questions while I try to learn the assumed, shared, no-need-to-mention-it body of knowledge you fellows share.

Thanks.

comment by PhilGoetz · 2009-03-20T19:54:18.499Z · LW(p) · GW(p)

Two observations:

  • In American culture, when you give money to a charity, you aren't supposed to tell people. Christian doctrine frowns heavily on that, and we are all partly indoctrinated with that doctrine. That's why no one sent their "yes" response to the list.

  • You just wrote a post with 22 web links, and 19 of them were to your own writings. I think that says more about why we can't cooperate than anything else in the post.

Replies from: Technologos, Cameron_Taylor
comment by Technologos · 2009-03-21T04:02:38.786Z · LW(p) · GW(p)

Far from being a negative aspect of the post, the self-linking is a key element of Eliezer's effort to build a common vocabulary for rationalists. I've personally found them extremely helpful for reminding myself of the context of the words, when I've forgotten. They're basically footnotes.

How can we cooperate if we don't even speak the same language?

Replies from: PhilGoetz
comment by PhilGoetz · 2009-03-21T20:36:02.427Z · LW(p) · GW(p)

It's better to have those links than not to have them. It's a bit as if Eliezer were writing a large, hypertext book that we are writing footnotes in. But the lack of links to the writings of other people shows a lack of engagement and a self-preoccupation that smart people tend to have. Too often, when we ask others for co-operation, we really mean "get behind my ideas and my agenda".

Cooperation involves compromise. It involves participating in the critique of those ideas. It requires, as a prerequisite, believing that others are smart enough to look at the same evidence and see things that you missed. In a forum like this, actual interest in cooperation is evidenced by writing relatively short posts, and then responding at length to many of the comments, rather than by writing extremely long posts, and then making a few short responses to comments.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-21T21:06:10.091Z · LW(p) · GW(p)

I link to myself because I know what I have written.

Replies from: Nominull
comment by Nominull · 2009-03-21T21:16:11.605Z · LW(p) · GW(p)

Maybe you should read something written by somebody else sometime.

Replies from: Davorak
comment by Davorak · 2011-02-05T02:20:56.360Z · LW(p) · GW(p)

This is an unhelpful comment that did not contribute to the conversation, and I interpret it as an attack. Instead of attacking, why not engage EY on why he thinks it is so important to link to what he has written rather than what other people have written?

Any time I get the urge to use a "witty" one-liner, I instead ask for the person's reasoning, perspective, and logic that led them to their conclusion.

Replies from: Nominull, Perplexed, wedrifid
comment by Nominull · 2011-02-05T04:54:33.563Z · LW(p) · GW(p)

First let me say that I do not think that attacks are by their very nature impermissible, and if you do, how dare you put "witty" in scare quotes? That's just flat-out unkind.

Anyway, it's a little hard for me to defend my comments of two years ago against attack, because I no longer remember what prompted me to make them. I will do my best to reconstruct my mental state leading up to the comment I made.

I don't think I was necessarily on PhilGoetz's side when I read his comment. I think I agreed, and still agree, with Technologos. But when I read the Wise Master's response to it, it didn't sit right with me. It read like an attempt to fight back against attack with anything that came to hand, rather than an attempt to seek truth. Surely, I must have felt, if the Wise Master were thinking clearly, he would see that unfamiliarity with the works of others is not an excuse, but in fact the entire problem. I feel that I wanted to communicate this insight. I chose the form that I did probably because it was the first one that came to mind. I hang out on some pretty rough and tumble internet forums, described by one disgruntled former poster as "geek bevis[sic] and buthead[sic] humour[sic]". Sharp, witty-without-the-scare-quotes one-liners are built into my muscle memory at this point, and I view a well-executed burn as having aesthetic value in and of itself. I dunno, there is something to be said for short, elegant responses to provoke thought, rather than long plodding walls of text.

Anyway, that's my reasoning, perspective, and logic. I hope you found this enlightening.

Replies from: Davorak
comment by Davorak · 2011-02-05T06:36:35.169Z · LW(p) · GW(p)

"witty" was describing my remark, as in the remarks I hold back on may not actually be witty, I was not trying to reference your remark though in retrospect it does seem easy to infer that so I apologize for communicating sloppily.

Attacks that do not forward the conversation are not useful. If the attacker does not expose the logic and data behind their attack, then the person being attacked has no logic or data to pick apart and respond to, and has no reason to believe that the attacker is earnest in seeking the truth.

Replies from: wedrifid
comment by wedrifid · 2011-02-05T07:27:30.197Z · LW(p) · GW(p)

"witty" was describing my remark, as in the remarks I hold back on may not actually be witty, I was not trying to reference your remark though in retrospect it does seem easy to infer that so I apologize for communicating sloppily.

Your attack against Nominull was, in fact, stronger and less ambiguous than Nominull's.

Attacks that do not forward the conversation are not useful. If the attacker does not expose the logic and data

The logic behind the point was actually quite obvious, which is not to say I would have presented it in this context. As Perplexed points out, sometimes there are benefits to making the effort to ensure that you do know what other people have written. (Incidentally, I upvoted both Eliezer and Phil, and left Nominull alone.)

Nominull's comment, discourteous or not, furthered the actual conversation while yours did not (and nor did mine). So that isn't the deciding factor here in why your kind of attack is different from Nominull's kind. I think the difference in perception is that you are responding to provocation, which many people perceive as a whole different category - but that can depend on which side you empathise with.

Replies from: Davorak
comment by Davorak · 2011-02-05T08:12:36.999Z · LW(p) · GW(p)

Your attack against Nominull was, in fact, stronger and less ambiguous than Nominull's.

You use the terms "stronger" and "less ambiguous" when I did not make the claim of weaker or more ambiguous. Are you implying that I am untruthful in your first quote of me? If so, it is a misinterpretation on your part.

The logic behind the point was actually quite obvious, which is not to say I would have presented it in this context.

The logic behind why Nominull values EY linking to and quoting philosophical works is not obvious to me. Nor is it obvious to me what Nominull's mental model is of why EY has not been linking to and quoting philosophical works (as of the 2009 comment). Without making that mental model clear and pointing out supporting evidence, I do not see how it is useful.

As Perplexed points out, sometimes there are benefits to making the effort to ensure that you do know what other people have written.

I do not see anyone in this conversation denying that there are benefits to this. I cannot tell if you have a deeper point.

I think the difference in perception is that you are responding to provocation

That does not fit how I view my response. It seems to me that the conversation could have taken a much different and more productive route right after EY's comment, and Nominull's comment discouraged it. I gave the alternative of engaging EY on "why he thinks it is so important to link to what he has written rather than what other people have written", which I thought would lead to a more productive conversation. I want to encourage productive conversation if I am going to be a community member of LessWrong.

comment by Perplexed · 2011-02-05T02:50:54.116Z · LW(p) · GW(p)

This is an unhelpful comment and did not contribute to the conversation.

I disagree. It is a very appropriate response to Eliezer's flip dismissal of Goetz's quite sincere (and to my mind, good) suggestions.

Eliezer is, of course, very well-read for a man of his age, but he is actually a bit parochial given the breadth of his ambitions and the authoritative, didactic writing style. His credibility, his communication ability, his fundraising, and even his ideas could probably benefit if he made a conscious effort to make his writing a bit more scholarly.

I understand that Eliezer is both very busy and very prolific, but I thought that his excuse (that he cited himself so much only for reasons of convenience (or laziness)) was much too dismissive of Phil's arguments - in large part because I think his excuse is quite likely the truth.

Replies from: Davorak
comment by Davorak · 2011-02-05T03:15:44.833Z · LW(p) · GW(p)

With only a sentence, and without back-and-forth conversation, do you have the ability to pull out flippant intent from:

I link to myself because I know what I have written.

I do not know EY, so I cannot assign myself a high probability of doing so. In truth, I subconsciously assigned a high probability that Nominull was in the same boat as me; in other words, I jumped to conclusions. Do you assign yourself a high probability of determining EY's intent from the above? If so, please share if you can.

I can imagine EY's statement made with helpful intent (I could have made that statement with helpful intent); responding to it as if it was made with unhelpful intent, without evidence, does not seem rational/helpful to me.

Replies from: Perplexed
comment by Perplexed · 2011-02-05T05:19:39.770Z · LW(p) · GW(p)

I think you are attaching too much importance to inferring the intent (flippant vs helpful) of Eliezer's one-line response to several dozen lines of discussion, and attaching too little importance to assessing the tone. In any case, the dictionary definition of flippant:

frivolously disrespectful, shallow, or lacking in seriousness; characterized by levity

seems to be about tone, rather than intent. Eliezer's comment qualifies as flippant. Nominull's response was also flippant by this definition. This matching tone strikes me as appropriate - which is exactly what I said.

At the point where Eliezer made his comment, he was being mildly criticized. His flippant comment, which I think was exactly truthful, carried the subtext that he was not particularly interested in discussing those criticisms at that time. He is totally within his rights sending that message. The criticism was mild, and formulating a serious and thoughtful response to the criticism is not something he was required to do. He could have just ignored it. He chose not to.

Sometimes clever, conversation-stopping responses don't stop conversations. Particularly when they are a little bit rude. Eliezer got a clever and rude response back. And for almost two years, everyone was satisfied with that ending.

Replies from: Davorak, Davorak
comment by Davorak · 2011-02-05T07:03:05.910Z · LW(p) · GW(p)

Eliezer got a clever and rude response back. And for almost two years, everyone was satisfied with that ending.

I think there is a high probability that lack of further comments is just due to the propensity not to post in old conversations.

I figured if the sequences and in-post links are to be taken seriously, then the comments should be too. Old comments should not be treated as if they were preserved in carbonite, but as living arguments.

comment by Davorak · 2011-02-05T06:55:15.273Z · LW(p) · GW(p)

You can replace intent with tone and I would stand by that point. I could make the same remark without being disrespectful, shallow, or lacking in seriousness, and without levity.

Sometimes clever, conversation-stopping responses don't stop conversations. Particularly when they are a little bit rude. Eliezer got a clever and rude response back.

By your description, Eliezer makes a true but rude remark and receives a rude response back, and this is "appropriate." I do not see how a rude response to what is believed to be a rude comment is productive; it does not bring any logic or new data to the table.

Replies from: wedrifid
comment by wedrifid · 2011-02-05T10:07:41.614Z · LW(p) · GW(p)

it does not bring any logic or new data to the table.

This example did.

Replies from: Davorak
comment by Davorak · 2011-02-05T16:08:05.061Z · LW(p) · GW(p)

Are you replying to this?

comment by wedrifid · 2011-02-05T07:05:28.282Z · LW(p) · GW(p)

21 March 2009 09:16:11PM

It is long past time for chastisement, if it was ever required.

Replies from: Davorak
comment by Davorak · 2011-02-05T07:19:47.593Z · LW(p) · GW(p)

I respond to a similar comment here.

It is not about chastisement; it is about the people, like me, who come and read it later.

Replies from: wedrifid
comment by wedrifid · 2011-02-05T07:39:57.715Z · LW(p) · GW(p)

You seem to be remarkably willing to assert how your comments should be interpreted with respect to intent, meaning and social implications. Yet you do not seem to have paid Nominull that same courtesy.

Replies from: Davorak
comment by Davorak · 2011-02-05T08:28:45.343Z · LW(p) · GW(p)

Well, I know what my intent is, and I know what I want my social implications to be. It makes sense that I try to communicate them. I accept that Nominull hangs "out on some pretty rough and tumble internet forums" and did not have unproductive intentions. I have not claimed that Nominull had unproductive intentions.

An example of impoliteness is needed if you want to continue this conversation.

comment by Cameron_Taylor · 2009-03-21T00:57:48.273Z · LW(p) · GW(p)

The observation about American culture (which applies to Australian culture too) is a good one.

You just wrote a post with 22 web links, and 19 of them were to your own writings. I think that says more about why we can't cooperate than anything else in the post.

I don't agree that the 19 links paint such a negative picture. In fact, three external links in a single post is remarkable.

comment by jimrandomh · 2009-03-20T18:30:13.612Z · LW(p) · GW(p)

In hindsight, the problem with your fundraiser was obvious. There were two communications channels: one private channel for people who contributed, and one channel for everyone else. Very few people will post a second message after they've already posted one, so the existence of the private channel prevented contributors from posting on the mailing list. Removing all the contributors from the public channel left only nay-sayers and an environment that favored further nay-saying. The fix would be to merge the two channels: publish the messages received from contributors, unless they request otherwise.

comment by Cameron_Taylor · 2009-03-20T11:33:26.619Z · LW(p) · GW(p)

I agree with everything you said in your talk, and I think you're brilliant.

I've noticed that I am often hesitant to publicly agree with comments and posts here on LessWrong because often agreement will be seen as spam. While upvotes do count as something, it is far easier to post a disagreement than to invent an excuse to post something that mostly agrees. This can be habit forming.

Comparing, say, Less Wrong with a Mensa online discussion group, I've noticed that my probability of disagreement is far lower with the self-identified rationalists than with the self- and test-identified generic smart people. The levels of Dark Side Argument are almost incomparable. I have begun disengaging from Dark debates wherever convenient, purely to form better habits of agreement.

Replies from: prase
comment by prase · 2009-03-20T14:26:50.465Z · LW(p) · GW(p)

In fact, agreement is a sort of spam - it consumes space and usually doesn't bring new thoughts. When I imagine a typical conference where the participants are constantly running out of time, visualising the 5-minute question interval consumed by praise to the speaker helps me a lot in rationalising why the disagreement culture is necessary. Not that it would be the real reason why I would flee screaming out of the room; I would probably do so even if time wasn't a problem.

When I read the debates at e.g. daylightatheism.org I am often disgusted by how much agreement there is (and it is definitely not a Dark Side blog). So I think I am strongly immersed in the disagreement culture. But, all cultural prejudices aside, I will probably always find a discussion consisting of "you are brilliant" type statements extraordinarily boring.

Replies from: pjeby, Davorak, MichaelGR, Nominull, Court_Merrigan
comment by pjeby · 2009-03-20T16:07:31.486Z · LW(p) · GW(p)

It doesn't have to bring new thoughts to serve a purpose. A chorus of agreement is an emotional amplifier.

Replies from: AndrewH
comment by AndrewH · 2009-03-20T21:02:25.202Z · LW(p) · GW(p)

Not only that, it becomes a glue that binds people together, the more agreement the stronger the binding (and the more that get bound). At least that is the analogy that I use when I look at this; we (rationalists) have no glue, they (religions) have too much.

comment by Davorak · 2011-02-05T01:54:59.778Z · LW(p) · GW(p)

Agreement does not need to be contentless and therefore spam. It can fill in holes in the argument, take a different perspective (helping a different segment of the reading population), add specific details to the argument that were glossed over, and much more.

I will probably always find a discussion consisting of "you are brilliant" type statements extraordinarily boring.

It sounds like you have a problem with lack of content more than you do with agreement. I am sure you would find contentless disagreement just as boring.

Replies from: prase
comment by prase · 2011-02-06T15:27:19.802Z · LW(p) · GW(p)

Agreements are a lot more often contentless, as a rule. When disagreeing, people feel motivated to include some reasons, and even if they don't, the one who was disagreed with feels motivated to ask for the reasons. But in principle you are right that my objections don't primarily aim at agreement.

comment by MichaelGR · 2009-03-21T00:42:12.484Z · LW(p) · GW(p)

I think you are focusing too much on discussions.

There are other activities where success can depend heavily on not acting alone, and it is in those types of activities (such as fundraising, seizing political power, reforming institutions, etc) that rationalist-types are disadvantaged by their lack of coordination.

comment by Nominull · 2009-03-20T15:36:40.356Z · LW(p) · GW(p)

I agree!

comment by Court_Merrigan · 2009-03-21T02:35:20.317Z · LW(p) · GW(p)

You didn't read Eliezer's post very carefully, did you? You need more practice in agreement and conformity. There are a limited number of "right" answers out there. It's alright to agree on them, when they are found.

comment by SoullessAutomaton · 2009-03-20T15:11:31.266Z · LW(p) · GW(p)

I'm going to agree with the people saying that agreement often has little to no useful information content (the irony is acknowledged). Note, for instance, that content-free "Me too!" posts have been socially contraindicated on the internet since time immemorial, and content-free disagreement is also generally verboten. This also explains the conference example, I expect. Significantly, if this is actually the root of the issue, we don't want to fight it. Informational content is a good thing. However, we may need to find ways to counteract the negative effects.

Personally, having been somewhat aware of this phenomenon, when I've agreed with what someone said I sometimes try to contribute something positive; a possible elaboration on one of their points, a clarification of an off-hand example if it's something I know well, an attempt to extend their argument to other territory, &c.

In cases like the fundraising one, where the problem is more individual misperception of group trends, we probably want something like an anonymous poll--i.e., "Eliezer needs your help to fund his new organization to encourage artistic expression from rationalists. Would you donate money to this cause?", with a poll and a link to a donation page. I would expect you'd actually get a slightly higher percentage voting "yes" than actually donating, though I don't know if that would be a problem. You'd still get the same 90% negative responses, but people would also see that maybe 60% said they would donate.

Replies from: JulianMorrison, Demosthenes
comment by JulianMorrison · 2009-03-20T15:46:22.013Z · LW(p) · GW(p)

"A slightly higher percentage"? More like: no correlation.

I recall that McDonalds were badly burned by "would you X". Would people buy salads? oh god yes, they'd love an opportunity to eat out and stick to their diets. Did they buy salads, once McDonalds had added them? Nope.

Similarly I recall that last US election the Ron Paul Blimp campaign was able to get a lot more chartable pledges than real-world money, and pretty quickly died from underfunding.

Replies from: Nebu, Annoyance, SoullessAutomaton
comment by Nebu · 2009-03-20T18:37:10.563Z · LW(p) · GW(p)

I recall that McDonalds were badly burned by "would you X". Would people buy salads? oh god yes, they'd love an opportunity to eat out and stick to their diets. Did they buy salads, once McDonalds had added them? Nope.

Someone[1] must be buying those salads, as McDonalds is keeping them on the market, and given that food spoils, it doesn't make financial sense for them to keep offering a product which doesn't sell.

1: I've actually tried the McDonalds salad 3 times. The first time, it was very (and surprisingly) good. The other two times it was mediocre.

Replies from: CarlShulman
comment by CarlShulman · 2009-03-21T06:21:52.353Z · LW(p) · GW(p)

You can keep small stocks of an item, and it can have positive effects beyond direct revenues, e.g. if families with one dieting or vegetarian member don't avoid McDonald's because that person can eat a salad.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-03-21T08:06:21.890Z · LW(p) · GW(p)

I think the positive effect is that they can say that they sell salads, people can convince themselves they intend to buy the salad, and so on.

Replies from: homunq, Matt_Simpson
comment by homunq · 2009-05-08T21:59:25.577Z · LW(p) · GW(p)

I saw a study recently that said that the mere presence of a salad on the menu increases people's consumption. I deeply doubt that fast food chains were surprised by that result.

From the nature of the study, it's not even about convincing themselves they intend to buy a salad. By merely seriously having considered the option, they give themselves virtue points which offset the vice of more consumption.

comment by Matt_Simpson · 2009-03-21T08:09:56.302Z · LW(p) · GW(p)

I think the positive effect...

Or rather, another positive effect. These explanations aren't mutually exclusive.

That being said, nice insight.

comment by Annoyance · 2009-03-20T15:53:32.682Z · LW(p) · GW(p)

Yes, excellent point that should be underlined for the readers here.

People's metaknowledge is very poor. Their knowledge about themselves, especially so.

comment by SoullessAutomaton · 2009-03-20T20:35:16.842Z · LW(p) · GW(p)

"A slightly higher percentage"? More like: no correlation.

You make an excellent point, I was not really thinking clearly there.

However, I will note that my intent was not that it should produce an accurate prediction of donations, but to better gauge public opinion on the idea to counteract the tendency to agree silently but disagree loudly.

comment by Demosthenes · 2009-03-20T16:22:36.410Z · LW(p) · GW(p)

I've worked for a number of nonprofits, and in analysis of our direct mailings, we would get a better response from a mailing that included one of two things:

  1. A single testimonial mentioning the amount that some person gave
  2. Some sort of comment about the group average (listeners are making pledges of $150 this season)

This is one of the reasons that some types of nonprofits choose to create levels of giving; my guess is that it is gaming these common level-of-giving ideas by creating artificial norms of participation. Note: you can base your levels on actual evidence and not just round numbers! (plus inflation, right?)

We also generally found that people respond well to the idea of a matching donation (which is rational since your gift is now worth more).

I do believe that anonymous fund raising removes information about community participation that is very valuable to potential donors. Part of making a donation is responding to the signal that you are not the only one sending a check to a hopeless office somewhere.

Anonymous polls might be a good idea, but especially among rational types, you might want the individual testimony: you get to see some of the reasoning!

I think the synagogue in the story picked up on these ideas and used them effectively. But the nice thing about raising money through direct mailing and the internet is that you can run experiments!

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-03-20T20:48:07.082Z · LW(p) · GW(p)

I do believe that anonymous fund raising removes information about community participation that is very valuable to potential donors. Part of making a donation is responding to the signal that you are not the only one sending a check to a hopeless office somewhere.

The reason I specified anonymity was to reduce the likelihood of a social stigma attached to not donating. The idea of pressuring people into an otherwise voluntary gesture of support makes me very uncomfortable.

However, I may be overcautious on that aspect, and I defer to your greater experience with fundraising. Do you have any other empirical observations about response to fundraising efforts? You could consider submitting an article on the subject, either as it relates to instrumental rationality, or for the benefit of anyone else who might want to organize a rationality-related non-profit.

Replies from: Demosthenes
comment by Demosthenes · 2009-03-21T04:17:23.881Z · LW(p) · GW(p)

I think your caution is warranted; the fact that you can see the other people in the synagogue who don't stand up could be very hurtful to the nonparticipants. Highlighting individual donors or small groups is a good way to show public support without giving away too much information about your membership's participation as a whole.

If you are interested in more rigorous studies (we did ours in excel), you might want to try Dean Karlan's "Does Price Matter in Charitable Giving? Evidence from a Large-Scale Natural Field Experiment " http://karlan.yale.edu/p/MatchingGrant.pdf

I will try to dig up some other papers online

Replies from: TheOtherDrJones
comment by TheOtherDrJones · 2010-12-02T03:39:30.546Z · LW(p) · GW(p)

Amongst a group of people who know and interact with each other regularly, such as a synagogue, who has the means to donate money and who does not would be an extremely obvious piece of information to the members of that group.

There are actually two actions taken here by members: they either do not donate, or they donate a certain amount. To the members of the group, the amount donated is as much of an information channel as the choice to donate or not to donate. Those who donate a lot and are rich may cause offence by donating less than expected; those who donate a little when there is no expectation may gain esteem.

You are proposing a situation in which an individual donates less than expected by such a magnitude that it seriously affects people's esteem for them. This is possible, although given social pressures unlikely. It can occur at all because the magnitude of the donation combined with the wealth of the individual and the support for the cause are all easily calculable. Magnitude of donation is known, wealth is implied by clothes, status symbols or frank discussions about income, and support for the place of worship is expected to be high.

In a group of rational people donating to support a cause, they have the option of donating, not donating, and voicing support or criticism. You have established reasonable grounds for why people do not arbitrarily voice support, and for why people voice criticism. But let's look at the amount donated and imagine it were being done publicly: is there a state where people can be hurt by donation or non-donation?

Even if the amount donated and a reasonable guess at the wealth of the individual are available, the amount donated can still vary by the level of support the person feels for the cause. There is no level of donation that is 'incorrect', just as there is no arbitrary 'correct' level of support. Therefore the situation is most unlikely to cause social harm to the individual donating, or to those who do not donate, as there is a rational reason for any level of donation.

comment by JulianMorrison · 2009-03-20T10:52:16.850Z · LW(p) · GW(p)

To be honest, I suspect a lot of those folks, and I include myself here, were anti-collectivists first.

In my own mind, the emotive rule "I might follow, but I must never obey" is built over a long childhood war and an eventual hard-fought and somewhat Pyrrhic victory. I know it's reversed stupidity, but it's hard to let go.

What good rationalist techniques are there for changing such things?

Replies from: pjeby, Emile, Annoyance, Cameron_Taylor, Davorak
comment by pjeby · 2009-03-20T16:18:49.433Z · LW(p) · GW(p)

Ask "what's bad about obeying?" Imagine a specific concrete instance of obeying, and then carefully observe your automatic, unconscious response. What bad thing do you expect is going to happen?

Most likely, you will get a response that says something about who you are as a person: your social image, like, "then I'll be weak". You can then ask how you learned that obeying makes someone weak... which may be an experience like your peers teasing you (or someone else) for obeying. You can then rationally examine that experience and determine whether you still think you have valid evidence for reaching that conclusion about obedience.

Please note, however, that you cannot kill an emotional decision like this without actually examining your own evidence for the proposition, as well as against it. The mere knowledge that your rule is irrational is not sufficient to modify it. You need to access (and re-assess) the actual memor(ies) the rule is based on.

comment by Emile · 2009-03-20T11:53:00.650Z · LW(p) · GW(p)

Recognizing that "I might follow, but I must never obey" is an emotional rule is already a good first step, much better than trying to rationalize it.

I've recognized that same pattern in myself - a bad feeling in response to the idea of following / obeying even when it's an objectively good idea to do so. I imagined an "asshole with a time machine" who would follow me around, observe what I did (buy a ham sandwich for lunch, enter a book store...), go back in time a few seconds before my decision and order me to do it.

Once I realized I was much angrier at this hypothetical asshole than it was reasonable to be, I tried getting rid of that anger. I guess I succeeded (the idea doesn't bug me as much), but I don't know if it means I won't have any more psychological resistance to obeying. I am probably still pretty biased towards individualism / giving more value to my opinion just because it's my own, but I'd like to find ways to get rid of that.

comment by Annoyance · 2009-03-20T14:22:12.146Z · LW(p) · GW(p)

"What good rationalist techniques are there for changing such things?"

Carefully examining the potential reasons for going along with someone else. Emile's point below is a very good one.

'Obedience' implies that we must go along with what someone says we should do. It's much better to think (hopefully accurately) that we're choosing to do something which coincidentally is also what someone has suggested. We don't need to choose to obey to go along.

Carefully examining the justifications for actions is also important. If there are compelling reasons to do X, the fact that we've been "ordered" to do X is irrelevant, just as being ordered NOT to do X is.

Replies from: bruno-mailly
comment by Bruno Mailly (bruno-mailly) · 2018-07-27T07:08:06.145Z · LW(p) · GW(p)

Carefully examining the justifications for actions is also important. If there are compelling reasons to do X, the fact that we've been "ordered" to do X is irrelevant, just as being ordered NOT to do X is.

Unfortunately, "doing what they say" tend to make people believe they are the top dog.

And a bit too many people are quick to get this idea, reluctant to abandon it, and abuse it to no end.

So, pragmatically, sometimes it's better to find another way to get the desired result, or at least delay action to diminish that bad association.

comment by Cameron_Taylor · 2009-03-20T11:34:42.780Z · LW(p) · GW(p)

In my own mind, the emotive rule "I might follow, but I must never obey" is built over a long childhood war and an eventual hard-fought and somewhat Pyrrhic victory. I know it's reversed stupidity, but it's hard to let go.

Really? I've always thought my similar rule was embedded in my DNA.

comment by Davorak · 2011-02-05T02:14:44.670Z · LW(p) · GW(p)

Stating that you are not obeying, and that you are taking a particular course of action because it is a good idea, seems to work/help some people.

Realize that the anti-collectivist pull is an exploitable weakness: it leaves you vulnerable to people who are perceptive and want to harm you. Some would say that you should just avoid getting people to want to harm you; however, a consequence is that you would have to avoid standing up to people who harm the world, people you care for, and sometimes yourself.

comment by Scott Alexander (Yvain) · 2009-03-20T19:36:19.918Z · LW(p) · GW(p)

Wait a second, now we're using Jews trying to run a synagogue as an example of a group who cooperate and don't always disagree with each other for the sake of disagreeing? Your synagogue must have been very different from mine. You never heard the old "Ten Jews, ten opinions - or twenty if they're Reform" joke? Or the desert island joke?

I also agree with everyone. In particular, I agree with Cameron and Prase that it's tough to just say "I agree". I agree with ciphergoth that I worry that I'm sucking up to you too much. I agree with Anna Salamon that we tend to be intellectual show-offs. I agree with Julian that many of us probably started off with a contrarian streak and then became rationalists. I agree with Jacob Lyles that there's a strong game theory element here - I lose big if rationalists don't cooperate, I win a little if we all cooperate under Eliezer's benevolent leadership, but to a certain way of thinking I win even more if we all cooperate under my benevolent leadership and there's no universally convincing proof that cooperating under someone else is always the highest utility option. And I agree with practically everything in the main post.

One thing I don't agree with: being ashamed of strong feelings isn't a specifically rationalist problem. It's a broader problem with upper/middle class society. Possibly more on this later.

Replies from: Eliezer_Yudkowsky, KevinC, Davorak
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-20T19:43:38.943Z · LW(p) · GW(p)

I've never been dragged to any other religious institution, so I wouldn't have any other example to use. I expect these forces are much stronger at Jesus Camp or the Raelians. But yes, even Jewish institutions still coordinate better than atheist ones.

comment by KevinC · 2010-11-22T10:35:53.511Z · LW(p) · GW(p)

Granting that the jokes you refer to are generally accurate, wouldn't that make the synagogue a better example for a rationalist Cat Herd than some other religious organization where people "think" in lockstep with the Dear Leader? The synagogue would represent an example of a group of people who manage to cooperate effectively even with a high level of dissensus (neologism for the opposite of consensus). Which, as I understand it, is the goal Eliezer is aiming for in this post.

comment by Davorak · 2011-02-05T02:39:49.801Z · LW(p) · GW(p)

And you win the most when the group is so rational that almost anyone could serve as the benevolent leader.

Replies from: wedrifid
comment by wedrifid · 2011-02-05T10:22:04.205Z · LW(p) · GW(p)

And you win the most when the group is so rational that almost anyone could serve as the benevolent leader.

The group trait required is not rationality - it is other traits that also share positive affect.

Replies from: Davorak
comment by Davorak · 2011-02-06T01:15:01.681Z · LW(p) · GW(p)

I was not asserting that rationality is all that you need to make the most efficient group, if that was what you are getting at.

I think we agree that, starting with groups A and B both with x skills, if group A is more rational it will also be the more effective group.

My argument was that as the ability of the group to act rationally increases, the utility difference between being a member and being the leader decreases, as the group becomes better at judging the leader's value.

comment by AndrewKemendo · 2009-03-21T03:12:08.065Z · LW(p) · GW(p)

I personally see public disagreements as a way to refine the intent of the person under the spotlight rather than a social display of individualism. When I disagree with someone it is not for the sake of disagreeing but rather to refine what I may think is a good idea that has a few weak points. I do this to those I respect and agree with because I hope that others will do this to me.

I think the broader question here is not whether we should encourage widespread agreement in order to create cohesion - but rather whether we can ensure that the tenets we collectively agree on are correct conclusions. That is in my mind the main difference between rationalists and what I would call tribalists - in general the majority agree on tenets which have serious rational flaws, or they simply do not contest said tenets. Otherwise, if we do follow the leader, then if there are true flaws in that particular modus, we will never discover them.

I agree that it is hard to start a movement based on this - however, I see this as a positive attribute. Just as the (flawed) idea of representative democracy was supposed to slow government to a crawl, the rationalist mindset slows groupthink and confirmation bias to a near halt. It is a strong movement nonetheless, however slow.

comment by JamesAndrix · 2009-03-20T16:01:47.971Z · LW(p) · GW(p)

On 'What Do We Mean By "Rationality"?' when you said "If that seems like a perfectly good definition, you can stop reading here; otherwise continue." - I took your word for it and stopped reading. But apparently comments aren't enabled there.

You have significantly altered my views on morality (Views which I put a GREAT deal of mental and emotional effort into.) I suspect I am not alone in this.

I think there's a fine line between tolerating the appearance of a fanboy culture, and becoming a fanboy culture. The next rationalist pop star might not be up to the challenge.

And for that matter, how many times would you want to risk being subjected to agreement without succumbing? It's not wireheading, but people do get addicted.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-03-20T19:21:36.900Z · LW(p) · GW(p)

I think there's a fine line between tolerating the appearance of a fanboy culture, and becoming a fanboy culture. The next rationalist pop star might not be up to the challenge.

Agreement and disagreement look more like skills that we can develop (and can improve at both) than ends of a continuum (where moving toward one means moving away from the other).

I mean, we can reduce the apparent and actual extent to which we're an Eliezer fan-club or echo chamber, and improve our armor against the emotional and social pressures that "we all think the Great Leader is perfect" tends to form. And we can also, simultaneously, improve our ability to endorse good ideas even when someone else already said that idea, and to actually coordinate to get stuff done in groups.

Replies from: Roko
comment by Roko · 2009-03-20T21:44:00.652Z · LW(p) · GW(p)

we can reduce the apparent and actual extent to which we're an Eliezer fan-club or echo chamber, and improve our armor against the emotional and social pressures that "we all think the Great Leader is perfect"

I think Eli has succeeded in attracting enough very clever people to the community that this is not a massive danger. If Robin, you, Carl S, Yvain, Nick T, Nick Hay, Vladimir N, etc all disagreed with him for the same reason, and he didn't retract, he would look silly.

comment by novalis · 2011-09-26T16:45:38.578Z · LW(p) · GW(p)

"[A] survey of 186 societies found, belief in a moralising God is indeed correlated with measures of group cohesion and size." - God as Cosmic CCTV, Dan Jones

comment by Alicorn · 2009-03-20T16:50:42.310Z · LW(p) · GW(p)

I'm not sure if this was at work in your fundraiser, but I know I tend to see exhortations from others that I give to charitable causes/nonprofits as attempts at guilt tripping. (I react the same way when I'm instructed to vote, or brush my teeth twice a day, or anything else that sounds less like new information and more like a self-righteous command.) For this reason, I try to keep quiet when I'm tempted to encourage others to give to my pet charity/donate blood/whatever, for fear that I'll inspire the opposite reaction and hurt my goal. I don't always succeed, but that's an explanation other than a culture of disagreement for why some people might not have contributed to the discussion from a pro-giving position.

comment by mark_spottswood · 2009-03-20T14:01:11.710Z · LW(p) · GW(p)

Good points.

This may be why very smart folks often find themselves unable to commit to an actual view on disputed topics, despite being better informed than most of those who do take sides. When attending to informed debates, we hear a chorus of disagreement, but very little overt agreement. And we are wired to conduct a head count of proponents and opponents before deciding whether an idea is credible. Someone who can see the flaws in the popular arguments, and who sees lots of unpopular expert ideas but few ideas that informed people agree on, may give up looking for the right answer.

The problem is that smart people don't give much credit to informed expressions of agreement when parceling out status. The heroic falsifier, or the proposer of the great new idea, gets all the glory.

comment by jacoblyles · 2009-03-20T09:54:09.543Z · LW(p) · GW(p)

There is no guarantee of a benevolent world, Eliezer. There is no guarantee that what is true is also beneficial. There is no guarantee that what is beneficial for an individual is also beneficial for a group.

You conflate many things here. You conflate what is true with what is right and what is beneficial. You assume that these sets are identical, or at least largely overlapping. However, unless a galactic overlord designed the universe to please homo sapien rationalists, I don't see any compelling rational reason to believe this to be the case.

Irrational belief systems often thrive because they overcome the prisoner's dilemmas that individual rational action creates on a group level. Rational people cannot mimic this. The prisoner's dilemma and the tragedy of the commons are not new ideas. Telling people to act in the group interest because God said so is effective. It is easy to see how informing people of the costs of action, because truth is noble and people ought not be lied to, can be counter-effective.

Perhaps we should stop striving for the maximum rational society, and start pursuing the maximum rational society which is stable in the long term. That is, maybe we ought to set our goal to minimizing irrationality, recognizing that we will never eliminate it.

If we cannot purposely introduce a small bit of beneficial irrationality into our group, then fine: memetic evolution will weed us out and there is nothing we can do about it. People will march by the millions to the will of saints and emperors while rational causes wither on the vine. Not much will change.

Robin made an excellent post along similar lines, which captures half of what I want to say:

http://lesswrong.com/lw/j/the_costs_of_rationality/

I'll be writing up the rest of my thoughts soon.

Sorry, I can't find the motivation to jump on the non-critical bandwagon today. I had the idea about a week ago that there is no guarantee that truth= justice = prudence, and that is going to be the hobby-horse I ride until I get a good statement of my position out, or read one by someone else.

Replies from: Eliezer_Yudkowsky, conchis
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-20T19:07:14.355Z · LW(p) · GW(p)

I one-box on Newcomb's Problem, cooperate in the Prisoner's Dilemma against a similar decision system, and even if neither of these were the case: life is iterated and it is not hard to think of enforcement mechanisms, and human utility functions have terms in them for other humans. You conflate rationality with selfishness, assume rationalists cannot build group coordination mechanisms, and toss in a bit of group selection to boot. These and the referenced links complete my disagreement.
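
A minimal sketch of the "cooperate against a similar decision system" idea, with invented payoff numbers and a toy source-comparison check (this is only a flavor of the point, not the decision theory itself):

```python
# Toy one-shot Prisoner's Dilemma between agents that can inspect each
# other's decision procedure.  Payoffs and the bytecode-equality check
# are invented for illustration.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def clique_bot(my_source, their_source):
    """Cooperate iff the opponent runs the same decision procedure."""
    return "C" if my_source == their_source else "D"

def defect_bot(my_source, their_source):
    """Always defect, regardless of the opponent."""
    return "D"

def play(agent_a, agent_b):
    src_a, src_b = agent_a.__code__.co_code, agent_b.__code__.co_code
    move_a = agent_a(src_a, src_b)
    move_b = agent_b(src_b, src_a)
    return PAYOFFS[(move_a, move_b)], PAYOFFS[(move_b, move_a)]

print(play(clique_bot, clique_bot))   # (3, 3): mutual cooperation
print(play(clique_bot, defect_bot))   # (1, 1): no exploitation either
```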

Replies from: jacoblyles
comment by jacoblyles · 2009-03-20T19:11:19.784Z · LW(p) · GW(p)

Thanks for the links; your corpus of writing can be hard to keep up with. I don't mean this as a criticism; I just mean to say that you are prolific, which makes it hard on a reader, because you must strike a balance between reiterating old points and exploring new ideas. I appreciate the attention.

Also, did you ever reply to the Robin post I linked to above? Robin is a more capable defender of an idea than I am, so I would be intrigued to follow the dialog.

Replies from: Davorak
comment by Davorak · 2011-02-05T02:28:17.574Z · LW(p) · GW(p)

If you are rational enough, perceptive enough, and EY's writing is consistent enough, at some point you will not have to read everything EY writes to have a pretty good idea of what his views on a matter will be. I would bet a good sum of money that EY would prefer to have his readers gain this ability than read all of his writings.

comment by conchis · 2009-03-20T13:32:23.639Z · LW(p) · GW(p)

"However, unless a galactic overlord designed the universe to please homo sapien rationalists, I don't see any compelling rational reason to believe this to be the case."

Except that we are free to adopt any version of rationality that wins. Rationality should be responsive to a given universe design, not the other way around.

"Irrational belief systems often thrive because they overcome the prisoner dilemmas that individual rational action creates on a group level. Rational people cannot mimic this."

Really? Most of the "individual rationality -> suboptimal outcomes" results assume that actors have no influence over the structure of the games they are playing. This doesn't reflect reality particularly well. We may not have infinite flexibility here, but changing the structure of the game is often quite feasible, and quite effective.
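
A toy sketch of that last point about changing the game's structure, with invented payoff numbers: once a large enough enforced fine is attached to defection, defection stops being the best response.

```python
# Toy illustration of "changing the structure of the game".  The payoff
# numbers and the size of the fine are invented for the example.

BASE = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def with_penalty(payoffs, fine):
    """Return a new game in which any defecting player pays an enforced fine."""
    return {(me, them): pay - (fine if me == "D" else 0)
            for (me, them), pay in payoffs.items()}

def best_response(payoffs, their_move):
    """My payoff-maximising move given what the other player does."""
    return max(("C", "D"), key=lambda my: payoffs[(my, their_move)])

for label, game in [("no enforcement", BASE),
                    ("fine of 3 for defecting", with_penalty(BASE, 3))]:
    print(label, {them: best_response(game, them) for them in ("C", "D")})
# no enforcement {'C': 'D', 'D': 'D'}           -> defecting dominates
# fine of 3 for defecting {'C': 'C', 'D': 'C'}  -> cooperating dominates
```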

Replies from: pjeby, Nebu, jacoblyles, Annoyance
comment by pjeby · 2009-03-20T16:24:49.401Z · LW(p) · GW(p)

For example, we could establish a social norm that compulsive public disagreement is a shameful personal habit, and that you can't be even remotely considered "formidable" if you haven't gotten rid of the urge to seek status by pulling down others.

Replies from: Nebu
comment by Nebu · 2009-03-20T18:33:19.086Z · LW(p) · GW(p)

I disagree.

comment by Nebu · 2009-03-20T18:32:39.972Z · LW(p) · GW(p)

However, unless a galactic overlord designed the universe to please homo sapien rationalists, I don't see any compelling rational reason to believe this to be the case.

Except that we are free to adopt any version of rationality that wins. Rationality should be responsive to a given universe design, not the other way around.

I don't think your argument applies to jacoblyles' argument. Jacoblyles claims that there is no reason for "rational" to equal "(morally/ethically) right", unless an intelligent designer designed the universe in line with our values.

So it's not about winning versus losing. It's that unless the rules of the game are set up just in a certain way, then winning may entail causing suffering to others (e.g. to our rivals).

Replies from: jacoblyles
comment by jacoblyles · 2009-03-20T18:54:56.736Z · LW(p) · GW(p)

My writing in these comments has not been perfectly clear, but Nebu you have nailed one point that I was trying to make: "there is no guarantee that morally good actions are beneficial".

The Christian morality is interesting, here. Christians admit up front that following their religion may lead to persecution and suffering. Their God was tortured and killed, after all. They don't claim that what is good will be pleasant, as the rationalists do. To that degree, the Christians seem more honest and open-minded. Perhaps this is just a function of Christianity being an old religion and having the time to work out the philosophical kinks.

Of course, they make up for it by offering infinite bliss in the next life, which is cheating. But Christians do have a more honest view of this world in some ways.

Maybe we conflate true, good, and prudent because our "religion" is a hard sell otherwise. If we admitted that true and morally right things may be harmful, our pitch would become "Believe the truth, do what is good, and you may become miserable. There is no guarantee that our philosophy will help you in this life, and there is no next life". That's a hard sell. So we rationalists cheat by not examining this possibility.

There is some truth to the Christian criticism that Atheists are closed-minded and biased, too.

comment by jacoblyles · 2009-03-20T17:53:49.471Z · LW(p) · GW(p)

"Except that we are free to adopt any version of rationality that wins. "

In that case, believing in truth is often non-rational.

Many people on this site have bemoaned the confusing dual meanings of "rational" (the economic utility maximizing definition and the epistemological believing in truth definition). Allow me to add my name to that list.

I believe I consistently used the "believing in truth" definition of rational in the parent post.

Replies from: conchis
comment by conchis · 2009-03-20T18:48:31.466Z · LW(p) · GW(p)

I agree that the multiple definitions are confusing, but I'm not sure that you consistently employ the "believing in truth" version in your post above.* It's not "believing in truth" that gets people into prisoners' dilemmas; it's trying to win.

*And if you did, I suspect you'd be responding to a point that Eliezer wasn't making, given that he's been pretty clear on his favored definition being the "winning" one. But I could easily be the one confused on that. ;)

"In that case, believing in truth is often non-rational."

Fair enough. Though I wonder whether, in most of the instances where that seems to be true, it's true for second-best reasons. (That is, if we were "better" in other (potentially modifiable) ways, the truth wouldn't be so harmful.)

comment by Annoyance · 2009-03-20T14:23:15.957Z · LW(p) · GW(p)

"Except that we are free to adopt any version of rationality that wins."

There's only one kind of rationality.

Replies from: Nick_Novitski, Pierre-Andre
comment by Nick_Novitski · 2009-03-20T16:21:38.126Z · LW(p) · GW(p)

I agree, but that one kind is able to determine an optimal response in any universe, except one where no observable event can ever be reliably statistically linked to any other, which seems like it could be a small subset, and not one we're likely to encounter except

Certainly, there are any number of world-states or day-to-day situations where a full rigorous/sceptical/rational and therefore lengthy investigation would be a sub-optimal response. Instinct works quickly, and if it works well enough, then it's the best response. But obviously, instinct cannot self-analyze and determine whether and in what cases it works "well enough," and therefore what factors contribute to it so working, etc. etc.

Passing the problem of a gun jamming to the Rationality-Function might return the response, "If the gun doesn't fire, 90% of the time, pulling the lever action will solve the problem. The other 10% of the time, the gun will blow up in your hand, leading to death. However, determining to reasonable certainty which type of problem you're experiencing, in the middle of a firefight, will lead to death 90% of the time. Therefore, train your Instinct-Function to pull the lever action 100% of the time, and rely on it rather than me when seconds count."
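
A quick worked version of that arithmetic, using the same invented 90%/10% figures:

```python
# Compare the survival odds of "always work the lever on instinct" against
# "stop and diagnose the jam mid-firefight", using the invented figures above.

p_simple_jam = 0.9            # lever-action clears this kind of jam
p_blows_up = 0.1              # firing again is fatal in this kind
p_die_while_diagnosing = 0.9  # cost of stopping to think under fire

# Policy 1: trained instinct -- always work the lever immediately.
p_survive_instinct = 1 - p_blows_up            # die only in the 10% case

# Policy 2: deliberate -- diagnose first; the delay itself is usually fatal,
# so this is an upper bound before the jam is even handled.
p_survive_diagnose = 1 - p_die_while_diagnosing

print(f"instinct:  P(survive) = {p_survive_instinct:.0%}")
print(f"diagnose:  P(survive) <= {p_survive_diagnose:.0%}")
```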

Does this sound like what you mean by a "beneficial irrationality"?

Also: I propose that what seems truly beneficial, seems both true and beneficial, and what seems beneficial to the highest degree, seems right. To me, these assertions appear uncontroversial, but you seem to disagree. What about them bothers you, and when will we get to see your article?

Replies from: jacoblyles
comment by jacoblyles · 2009-03-20T18:24:11.295Z · LW(p) · GW(p)

"Does this sound like what you mean by a "beneficial irrationality"?"

No. That's not really what I meant at all. Take nationalism or religion, for example. I think both are based on some false beliefs. However, a belief in one or the other may make a person more willing to sacrifice his well-being for the good of his tribe. This may improve the average chances of survival and reproduction of an individual in the tribe. So members of irrational groups out-compete the rational ones.

In the post above Eliezer is basically lamenting that when people behave rationally, they refuse to act against their self-interest, and damn it, it's hurting the rational tribe. That's informative, and sort of my point.

There is some evidence that we have brain structures specialized for religious experience. One would think that these structures could only have evolved if they offered some reproductive benefit to animals becoming self-aware in the land of tooth and claw.

In the harsh world that prevailed up until just the last few centuries, religion provided people comfort. Happy people are less susceptible to disease, more ambitious, and generally more successful. Atheism has always been as true as it is today. However, I wouldn't recommend it to a 13th century peasant.

"I propose that what seems truly beneficial, seems both true and beneficial, and what seems beneficial to the highest degree, seems right."

This is not true a priori. That is my point. My challenge to you, Eliezer, and the other denizens of this site is simply: "prove it".

And I offer this challenge especially to Eliezer. Eliezer, I am calling you out. Justify your optimism in the prudence of truth.

Disprove the parable of Eve and the fruit of the tree of knowledge.

Replies from: pjeby, Eliezer_Yudkowsky, conchis
comment by pjeby · 2009-03-20T19:08:55.720Z · LW(p) · GW(p)

Disprove the parable of Eve and the fruit of the tree of knowledge.

I don't know 'bout no Eve and fruits, but I do know something about the "god-shaped hole". It doesn't actually require religion to fill, although it is commonly associated with religion and religious irrationalities. Essentially, religion is just one way to activate something known as a "core state" in NLP.

Core states are emotional states of peace, oneness, love (in the universal-compassion sense), "being", or just the sense that "everything is okay". You could think of them as pure "reward" or "satisfaction" states.

The absence of these states is a compulsive motivator. If someone displays a compulsive social behavior (like needing to correct others' mistakes, always blurting out unpleasant truths, being a compulsive nonconformist, etc.) it is (in my experience) almost always a direct result of being deprived of one of the core states as a child, and forming a coping response that seems to get them more of the core state, or something related to it.

Showing them how to access the core state directly, however, removes the compulsion altogether. Effectively, it's like wireheading: accessing the core state directly drops the reward/compulsion link to the specific behavior, restoring choice in that area.

Most likely, this is because it's the unconditional presence of core states that's the evolutionary advantage you refer to. My guess would be that non-human animals experience these core states as a natural way of being, and that both our increased ability to anticipate negative futures, and our more-complex social requirements and conditions for interpersonal acceptance actually reduce the natural incidence of reaching core states.

Or, to put it more briefly: core states are supposed to be wireheaded, but in humans, a variety of mechanisms conspire to break the wireheading.... and religion is a crutch that reinstates it externally, by exploiting the compulsion mechanism.

Appropriately trained rationalists, on the other hand, can simply reinstate the wireheading internally, and get the benefits without "believing in" anything. (In fact, application of the process tends to surface and extinguish left-over religious ideas from childhood!)

Explaining the actual technique would require considerably more space than I have here, however; the briefest training I've done on the subject was over an hour in length, although the technique itself is simple enough to be done in a few minutes. A little googling will find you plenty on the subject, although it's extremely difficult to learn from the short checklist versions of the technique you're likely to find on the 'net.

The original book on the subject, Core Transformation, is somewhat better, but it also mixes in a lot of irrelevant stuff based on the outdated "parts" metaphor in NLP -- "parts" are just a way of keeping people detached from their responses, and that's really orthogonal to the primary purpose of the technique, which is really sort of a "stack trace" of active unconscious/emotional goals to uncover the system's root goal (and thereby access the core state of "pure utility" underneath).

In the harsh world that prevailed up until just the last few centuries, religion provided people comfort. Happy people are less susceptible to disease, more ambitious, and generally more successful. Atheism has always been as true as it is today. However, I wouldn't recommend it to a 13th century peasant.

Anyone who knows how to access their core states has the ability to call up mystical states of peace, bliss, and what-not, at any moment they actually need or want them. An external idea isn't necessary to provide comfort -- the necessary state already exists inside of you, or religion couldn't possibly activate it.

comment by conchis · 2009-03-20T19:07:40.704Z · LW(p) · GW(p)

"Eliezer is basically lamenting that when people behave rationally, they refuse to act against their self-interest, and damn it, it's hurting the rational tribe. That's informative, and sort of my point."

So if that's Eliezer's point, and it's also your point, what is it that you actually disagree about?

I take Eliezer to be saying that sometimes rational individuals fail to co-operate, but that things needn't be so. In response, you seem to be asking him to prove that rational individuals must co-operate - when he already appears to have accepted that this isn't true.

Isn't the relevant issue whether it is possible for rational individuals to co-operate? Provided we don't make silly mistakes like equating rationality with self-interest, I don't see why not - but maybe this whole thread is evidence to the contrary. ;)

Replies from: jacoblyles
comment by jacoblyles · 2009-03-20T19:14:52.261Z · LW(p) · GW(p)

My point isn't exactly clear for a few reasons. First, I was using this post opportunistically to explore a topic that has been on my mind for a while. Secondly, Eliezer makes statements that sometimes seem to support the "truth = moral good = prudent" assumption, and sometimes not.

He's provided me with links to some of his past writing. I've talked enough; it is time to read and reflect (after I finish a paper for finals).

comment by Pierre-Andre · 2009-03-20T16:26:06.570Z · LW(p) · GW(p)

True, but that "one kind of rationality" might not be what you think it is. Conchis's point holds if you use "rationality" = "everything should always be taken into account, if possible" or something like that.

A "rational" solution to a problem should always take into account those "but in the real word it doesn't work like that...". Those are part of the problem, too.

For example, a political leader acting "rationally" will take into account the opinion of the population (even if they are "wrong" and/or give too much importance to X) if it can affect his results in the next election. The importance of this depends on his "goal" (a position of power? the well-being of the population?) and on the alternative if he is not elected (will my opponent's decisions do more harm?).

comment by Annoyance · 2009-03-20T14:17:15.133Z · LW(p) · GW(p)

As the old joke says: What do you mean 'we', white man?

The real reason ostensibly smart people can't seem to cooperate is that most of them have no experience with reaching actual conclusions. We train people to make whatever position they espouse look good, not to choose positions well.

Replies from: Nick_Novitski
comment by Nick_Novitski · 2009-03-20T15:20:29.765Z · LW(p) · GW(p)

What makes a position well-chosen or more likely to assist in reaching actual conclusions?

Replies from: Annoyance
comment by Annoyance · 2009-03-20T15:28:57.818Z · LW(p) · GW(p)

The logical structure of the best argument supporting it, the quality of the evidence in that argument, and the extensiveness of that evidence.

Instead of those things, most of us pay attention to rhetoric and status.

Take a look at high school speech and debate organizations, and the things they stress. What development of skills and techniques do their debates encourage?

Replies from: Technologos, diegocaleiro
comment by Technologos · 2009-03-22T22:01:14.811Z · LW(p) · GW(p)

A good point, and a serious problem. When I was in high school debate (Lincoln-Douglas), I hated the degree to which the competition was really about jargon and citation of overwhelming but irrelevant "evidence." I think the tipping point was when somebody claimed that teaching religion in public schools would lead to an environmental catastrophe (and even more, it was purely an argument from authority).

At one point, I ran a case that relied on no empirical evidence whatsoever (however abhorrent that may sound here): it was a quasi-Aristotelian argument that if you accept the value in the first premise--I believe it was "knowledge"--then the remainder followed. The whole case was perhaps three minutes long, half the allowed time, and formatted to make the series of premises and conclusions very obvious.

Best I could tell, there was only one weak link in the argument that was easily debatable. I correctly guessed that the people I was debating were more used to listing "evidence" than arguing logic, and most people had absolutely no idea how to handle even clearly stated premises and conclusions.

I was arguing against the position I actually hold, which is why there was still a flaw in the argument, but it won the majority of the debates nonetheless. Sad, more than anything.

comment by diegocaleiro · 2010-12-17T16:54:37.219Z · LW(p) · GW(p)

This "best argument" idea disconsiders the danger of one argument against an army http://lesswrong.com/lw/ik/one_argument_against_an_army/

comment by [deleted] · 2012-12-16T18:02:51.353Z · LW(p) · GW(p)

I completely agree with this post. It's heartwarmingly and mindnumbingly agreeable; I would like to praise it and applaud it forever and ever. On a more serious note, it personally feels like you're not contributing anything to the conversation if you're just agreeing. For example, if I read a hundred posts here, I don't feel compelled to add a comment saying just "I agree" to each of them, because it doesn't add to the substance of the issue. So I'm totally doing what the post predicts.

I really have read a hundred or so posts, and I think the majority of them are brilliant; to be honest, I don't think any of the posts by Eliezer in particular that I have read were really bad. I think they're great. I'm not even stretching it very far when I say that they've changed my outlook on life.

Personally, I truly hope that whoever comes up with the first functional AIs has concern for the future of humanity, takes the time and trouble to ponder moral issues, and is responsible about it in general. In fact, I believe the world would be a little better place if more of our leaders and political decision makers demonstrated similar interests - for example, if they could sit down every now and then and contemplate the meaning of altruism or caring for one another, or stop by and read a post on this website.

So this seems like the perfect post to just agree with and add the following suggestion to the conversation: If it feels like you don't want to just agree to something, even if you do really agree, try and find a way to do that while also making a contribution, additional detail or insight. :)

Awesome posts!

comment by TraderJoe · 2012-11-21T19:58:28.891Z · LW(p) · GW(p)

On the other hand, if you are only half-a-rationalist, you can easily do worse with more knowledge. I recall a lovely experiment which showed that politically opinionated students with more knowledge of the issues reacted less to incongruent evidence, because they had more ammunition with which to counter-argue only incongruent evidence.

What exactly is the problem with this? The more knowledge I have, the smaller a weighting I place on any new piece of data.

Replies from: empleat
comment by empleat · 2021-07-24T05:32:17.402Z · LW(p) · GW(p)

Seems so: https://aeon.co/essays/why-humans-find-it-so-hard-to-let-go-of-false-beliefs

I am probably so rational because I have ASD; people with ASD don't include emotions in their reasoning: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4532317/#:~:text=In%20typical%20individuals%2C%20alexithymia%20was,fear%2C%20disgust%2C%20and%20anger. And I am great at logic! I also have aphantasia, meaning no mental imagery. Even if I understand some logic perfectly, I couldn't make you an exercise for it, and I can hardly give any examples, as I have next to no imagination. That's also maybe why I am so rational: https://iai.tv/articles/why-humans-are-the-most-irrational-animals-auid-1239&utm_source=reddit&_auid=2020 But I am somehow very creative - because of ADHD? And I have overexcitabilities and am a very emotional and sensitive person; emotions are tied to creativity. I get excited easily, but also bored easily!

Also, I have a bad memory, so I forget things and have to constantly reinvent and revise them from zero. But I am very logical, critical, and rational. Interestingly, I once saw a guy who has it the same way and is almost exactly like me, except for the ASD and ADHD.

I also didn't know anything until I was 21 and just played games; then I read something like a million articles in a year about "Free Will". I always try to take everything from zero, and I have ADHD, so I see it from all perspectives. I also revise my views; even if I am wrong, I will learn so much from that. Being wrong is as important for learning as being right: you see that something doesn't work and get information from that. Someone who only tries to be right never really learns why something is bad and why something is good.

We don't know anything for sure except "I think, therefore I am" - and maybe not even that. What if I am dead and alive at the same time, as in QM (though isn't that only an analogy? I have no idea whether it can be extended to human death). Unless I saw all permutations of everything, how could I know? Even the most brilliant people who ever lived succumbed to logical fallacies or said total nonsense - for example, what Musk said about Covid. Even the most brilliant people have so much to learn. Unless you read every book that exists and know everything from every perspective, you may still know nothing. Even the most brilliant people know 0.00000000...1%. Stephen Hawking said he couldn't even keep up with new studies in his own area.

I like this statement: "that those of us too dazed by the job of living to exert an extra mental effort". Since bias is defined as a cognitive shortcut, you can really never think too much. I experience the most profound existential boredom, and I live a quiet life doing nothing all day but analyzing and reflecting on everything. The problem is that you need to slow down and enjoy something; otherwise, I've found, it is harmful to intelligence. And life today moves too quickly - though I have no words for people who get their news only from Facebook.

TBH, even I (probably one of the most rational people in the world; it would take long to explain) have noticed that I sometimes let bias creep into tangential judgements (especially if I am low on energy; I wonder whether it could affect me in the future if an idea gets anchored). But I don't base my reasoning on them. Each is like an idea that needs to be investigated, and honestly I have OCD about every permutation. I even have OCD about being criticized by other people in hypotheticals, not even kidding.

Here is my system for how I judge knowledge:

  • logical conclusions based on axioms
  • axioms
  • logical conclusions based on empirical evidence
  • empirically verified observations/facts (mostly inductive!) with large consensus in the scientific community, like evolution
  • theory (1. tested by an experiment but not fully accepted in scientific circles - it depends; 2. number of citations and journal impact factor; 3. not yet tested by an experiment)
  • hypothesis
  • assumption
  • intuition
  • idea
  • I don't know shit, so I don't have an opinion yet (random thoughts) - I don't understand why some people have strong opinions when they don't know anything about something. Simply don't talk if you don't know; ask, or study it.
  • the assumption of materialism, etc.

I usually start from 0 and I leave things open and revise my opinions all the time...

So one has to be aware of the structure on which one bases one's arguments, and of the precision with which things can be known (which is admittedly very difficult). Even scientists can be naturally uncritical, because their work is based on assumptions and is verified inductively most of the time! https://medium.com/starts-with-a-bang/is-the-inflationary-universe-a-scientific-theory-not-anymore-905615723b0f

Or consider scientists' inability to understand philosophy - although I suspect they simply don't know anything about it, given their confident opinions. People who are very intelligent (who know a lot) may think they also know about things outside their area. https://qz.com/627989/why-are-so-many-smart-people-such-idiots-about-philosophy/

Smart people constantly question everything, because they realize how much they don't know. So you should always question yourself; also, heavy criticism is excellent. You learn so much from being criticized, since self-reflection and self-analysis are difficult even for experts. Unfortunately, that is one thing which is punished: criticize anything and bam, you are banned, can't even open a bank account or hold a job.

In academia, for example, a student claims that there are differences between men and women and a commission investigates it - what the hell? Even though she wasn't expelled, the mere fact that they were investigating is a blow to academic freedom of speech. Or conservative professors have to have a security detail to lecture, and this is not even new: https://www.wizbangblog.com/2009/04/18/conservative-speakers-need-body-guards-when-speaking-on-college-campuses/

Lefties are arguably even more aggressive and more likely to commit a crime. It is because the rich want to destroy the difference between genders, because they have invested a lot of money in transgender clinics; a transformation can cost as much as $150,000 for one person. https://readingjunkie.com/2021/07/09/big-pharma-deploys-brown-shirts-to-protect-the-trans-industrial-complex/

I also recommend https://fabiusmaximus.com/, an excellent site where economists and fairly well-known people write - certainly no no-names - and cite many scientific studies for their claims.

We already live under totalitarianism and corporatocracy, and when China becomes a superpower it will be even more grim. Did you check the latest news? Their economy grew about 8%. Also, corporations want to create their own municipal government; it was only postponed because of Covid, but in December 2021 they will consider it in congress.

Even the governor said it has to be studied first, so it is not realistic for a couple of years yet - thank god! But corporations will want to push this through, which would be an unadulterated post-apocalyptic world in its truest form. https://www.marketwatch.com/story/in-nevada-desert-blockchains-llc-aims-to-be-its-own-municipal-government-01613252864

Big corporations like Coca-Cola have a say in drafting legislation, etc.

And there is also elitism in scientific circles, which doesn't help. https://bigthink.com/culture-religion/inequality-mathematics

Do you know what is funny? I stated on scientificforums.net that science is largely a social endeavor and that what is true is determined mostly by what is accepted at the top of scientific circles. But I got instantly flamed by elitists there! Science is largely a social endeavor, because scientists are people like anyone else.

De Ropp describes the Science Game as the pursuit of "knowledge," and then outlines many of the ways that this game as well is often corrupted, muddied and tainted (by players with whom De Ropp sounds intimately familiar). Says De Ropp, "Much of it is mere jugglery, a tiresome ringing of changes on a few basic themes by investigators who are little more than technicians with higher degrees . . . Anything truly original tends to be excluded by that formidable array of committees that stands between the scientist and the money he needs for research. He must either tailor his research plans to fit the preconceived ideas of the committee or find himself without funds. Moreover, in the Science Game as in the Art Game there is much insincerity and a frenzied quest for status that sparks endless puerile arguments over priority of publication. The game is played not so much for knowledge as to bolster the scientist's ego."

E.g., if you don't have a reputation, you won't get published in prominent journals no matter what your arguments are, because again people ultimately decide based on emotions: https://bigthink.com/experts-corner/decisions-are-emotional-not-logical-the-neuroscience-behind-decision-making

That is not to say that all science is about that, or that this is what science ought to be. But in reality it largely is! And without funding you can't do anything, and science is largely funded by the private sector, because only something like 1.41% came from taxes a couple of years back.

Sorry, I probably talk too much, because I see everything from everything ad infinitum. I used to get something like 20 ideas from every other thing, and from those another 20, before the depression and chronic pain.

comment by patrissimo · 2009-03-21T17:55:42.886Z · LW(p) · GW(p)

You're awesome, Eli. I love the mix of rationality and emotion here. Emotion is a powerful tool for motivating people. We of the Light Side are rightfully uncomfortable with its power to manipulate, but that doesn't mean we have to abandon it completely.

I recently suggested a rationality "cult" where the group affirmation and belonging exercise is to circle up and have each person in turn say something they disagree with about the tenets of the group. Then everyone cheers and applauds, giving positive feedback. But now I see that this is going too far towards disagreement - better would be for each person to state one area of agreement and one of disagreement with the cult's principles, or today's sermon or exercises, and then be applauded.

comment by Daniel_Burfoot · 2009-03-20T15:08:26.748Z · LW(p) · GW(p)

I think there's an interesting moral of the anecdote, but I'm not sure it's the one you expressed.

My conclusion is: rationalists who desire to discard the burdensome yoke of their cultural traditions, linked inextricably as they are to religion, will have to learn an entirely new set of cultural traditions from scratch. For example, they will need to learn a new mechanism design that allows them to cooperate in donating money to a cause that is accepted as being worthwhile (I think the "ask for money and then wait for people to call out contributions" scheme is damned brilliant).

Replies from: pjeby
comment by pjeby · 2009-03-20T16:12:19.399Z · LW(p) · GW(p)

Here's an even better one, under the right circumstances:

"Would everyone please stand up for a moment? Thank you. Now, please remain standing if you believe that our organization is doing important things for the good of the world. Terrific, terrific. Okay, please continue to stand if you're going to make a pledge of at least $X. Fantastic! Now, please continue to stand if you're going to make a pledge of at least $X*2..."

Of course, it won't work very well on a room full of non-conformists... you might have trouble getting them to stand in the first place, especially if they know what's coming.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-20T19:10:21.597Z · LW(p) · GW(p)

That only works once, if that much. People don't like feeling forced and manipulated.

Replies from: pjeby
comment by pjeby · 2009-03-20T19:31:04.880Z · LW(p) · GW(p)

"Right circumstances" includes support for your cause and rapport with your audience, such that most of them don't feel manipulated. The one time I saw that method used, the speaker already had the audience in the palm of his hand, such that they felt they'd already gotten their money's worth just from having listened to him. The stand-up/opt-out trick was just to push an already-high expected conversion rate higher.

(An example of how good a rapport he had: early in the presentation, he asked that people please promise to not even attempt to give him any money that day... and several people laughed and shouted "No!")

Of course, I suppose if you're that good, the trick is moot. On the other hand, the public approach your synagogue used is equally manipulative... it just builds the conformity pressure more slowly, instead of all at once.

comment by Emiya (andrea-mulazzani) · 2020-12-12T18:31:50.900Z · LW(p) · GW(p)

Perhaps a way to have comments of agreement that can also work as signalling your own smarts would be to say that you agree, and that the best part/most persuasive part/most useful part is X while providing reasons why. 

comment by Skylar626 · 2009-03-20T18:10:29.888Z · LW(p) · GW(p)

Isn't the secret power of Rationality that it can stand up to review? Religious cults are able to demand extreme loyalty because the people are not presented with alternatives and are not able to question the view they are handed. One of our strengths seems to be discernment and argumentation, which naturally leads to fractious in-fighting. What would we call "withholding criticism for the Greater Good"?

Replies from: pjeby
comment by pjeby · 2009-03-20T19:20:04.002Z · LW(p) · GW(p)

The difference is simply in the critic's motivation: are they trying to improve the situation, or just trying to avoid the expected outcome of agreement? E.g., are you criticizing charities because you want them to do better, or because you don't want to shell out the money AND don't want to admit it? (I'm unashamedly in the "I don't want to send money to Africa and I don't care if I have a logical reason for it" camp, and so have no need to make up a bunch of reasons it's bad.)

If the critic were really interested in improvement, they'd be suggesting improvements or better yet, DOING something about improvement.

comment by Roko · 2009-03-20T15:07:22.608Z · LW(p) · GW(p)

"But if you tolerate only disagreement - if you tolerate disagreement but not agreement - then you also are not rational. You're only willing to hear some honest thoughts, but not others. You are a dangerous half-a-rationalist."

  • Excellent point. I agree completely, and have had similar thoughts about the problem with the "skeptic" community myself. upvote
comment by MBlume · 2009-03-20T09:22:57.880Z · LW(p) · GW(p)

To point in the rough direction of an empirical cluster in personspace. If you understood the phrase "empirical cluster in personspace" then you know who I'm talking about.

If someone understands the phrase "empirical cluster in personspace," they probably are who you're talking about. =)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-20T19:12:26.332Z · LW(p) · GW(p)

That was what the first draft said, but I considered it for a few moments and realized that as eloquent statements go, it suffered the unfortunate flaw of not actually being true.

comment by [deleted] · 2012-01-23T20:08:46.665Z · LW(p) · GW(p)

This is very interesting; I have usually refrained from replying because I could not think of anything to say that wasn't trivial. Will take care to voice agreement in the future where applicable.

comment by thomblake · 2009-03-24T14:42:14.501Z · LW(p) · GW(p)

But none of those donors posted their agreement to the mailing list. Not one.

Couldn't you just ask contributors for the right to make their donations public?

Replies from: whynot
comment by whynot · 2009-09-18T16:39:52.788Z · LW(p) · GW(p)

The Christian and other ethics often demand that the left hand not know what the right hand is doing. However, you can certainly indicate the sum of donations so far without violating anyone's privacy.

The commitment of those who do donate may be more inspiring than the excuses of those who do not.

Replies from: Davorak
comment by Davorak · 2011-02-05T02:37:42.969Z · LW(p) · GW(p)

An automated reply system could make a post with the donated amount and a unique anonymous user name. That way, people reading the counterarguments would see donations appearing between those posts.
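(A minimal sketch of what such a system might look like, in Python. The pseudonym scheme, the `on_donation` hook, and the mailing-list poster are all hypothetical illustrations, not any real payment processor's or list server's API.)

```python
import hashlib

def anonymous_handle(donor_email: str, salt: str = "fundraiser-2009") -> str:
    """Derive a stable pseudonym from the donor's email so repeat gifts
    aggregate under one name without revealing who gave them."""
    digest = hashlib.sha256((salt + donor_email).encode()).hexdigest()
    return f"donor-{digest[:8]}"

def format_announcement(donor_email: str, amount: float) -> str:
    """Build the short message the bot would post to the mailing list."""
    return f"[auto] {anonymous_handle(donor_email)} just donated ${amount:.2f}."

def on_donation(donor_email: str, amount: float, post_to_list) -> None:
    """Hypothetical hook a payment processor would call on each gift."""
    post_to_list(format_announcement(donor_email, amount))

if __name__ == "__main__":
    # Stand-in for an actual mailing-list client; here we just print.
    on_donation("alice@example.com", 111.11, post_to_list=print)
```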

comment by JoshuaFox · 2009-03-21T19:25:19.193Z · LW(p) · GW(p)

Then clearly your fund-raising drive would have benefited from a mechanism for publicizing and externalizing support.

Charitable organizations commonly use a variety of such methods; the example you gave is just one. If correctly designed, the mechanisms do not cause support to be swamped by criticism, and they can operate without suppressing any free thought or speech.

E.g. publishing (with their agreement) the names of donors, the amounts, and endorsements; using that information to solicit from other donors; getting endorsements from respected people; appointing wealthy donors to use their own donations as an example when leading solicitation drives among other wealthy donors etc.

The situation does not seem as dire as you suggest.

And you'd better bet that synagogue fund-raising drives get all the gripes that you received, and more!

comment by roland · 2009-03-21T02:59:08.323Z · LW(p) · GW(p)

Way to go Eliezer, you have my full support! And another great posting, btw!

comment by Johnicholas · 2009-03-20T15:41:41.733Z · LW(p) · GW(p)

To some extent, this was discussed in "The Starfish and the Spider", which is about "leaderless groups". The book praises the power of decentralized, individualistic cultures (that you describe as "Light Side"). However, it admits that they're slower and less-well coordinated than hierarchical organizations (like the military, or some corporations).

You've outlined some of the benefits (recruitment, coordinated action) of encouraging public agreement and identifying with the group. You've also outlined some of the dangers (pluralistic ignorance, etc.).

Possibly the appropriate answer is to create multiple groups, so that each can be a check against the others turning into cults. Possibly even a fractal of groups and subgroups.

comment by Cassandra · 2009-03-20T15:24:10.245Z · LW(p) · GW(p)

I have been thinking about this subject for a while because I saw the same type of culture of disagreement prevent a group I was a member of from doing anything worthwhile. The problem is very interesting to me because I come from the opposite side of the spectrum being heavily collectivist. I take pleasure in conforming to a group opinion and being a follower but I also have nurtured a growing rationalist position for the last few years. So despite my love of being a follower I often find myself aspiring to a leadership position in order to weld my favored groups into a cohesive whole rather than an un-unified mob. The only solution I have been able to come up with so far is forming a core of beliefs and values which the group can accept without criticism, even if some of the members disagree with some of the parts. This is of course very hard to do.

comment by whpearson · 2009-03-20T10:33:23.210Z · LW(p) · GW(p)

"Those who had nothing to give, stayed silent; those who had objections, chose some later or earlier time to voice them. That's probably about the way things should be in a sane human community"

Personally I think that you were speaking to the wrong crowd when trying to fundraise - or perhaps I should say too wide a crowd. It's like trying to fundraise for tokamak fusion on a mailing list where people are interested in fusion in general. People who don't believe that tokamaks will ever be stable/usable are duty-bound to try to convince the other people of that, so they won't waste their money (it also means less money in the pot for their own projects).

Geek cooperative projects can work, but generally only if there is a mathematical or empirical way to get everyone on the right page, or you have to filter the group you are trying to work with by philosophical position.

With regards to signaling agreement, I think part of the problem is that agreements tend to give little information. If everyone on a certain mailing list said I agree and here is how much money I am donating, I would consider it spam, too much bandwidth for not enough new information... Polls would probably be better, or the organiser of the fund raiser could give running updates (which I believe you did, IIRC).

comment by prase · 2009-03-20T09:34:12.667Z · LW(p) · GW(p)

I have to agree completely.

Replies from: diegocaleiro, CannibalSmith
comment by diegocaleiro · 2010-12-17T16:37:12.453Z · LW(p) · GW(p)

I don't have to agree completely. But I choose to.

I also choose to link the donation's page for the SIAI here.

http://singinst.org/donate

Yes, this felt great... my emotions seem to be in tune with my high-level goals.

comment by CannibalSmith · 2009-03-20T12:01:38.028Z · LW(p) · GW(p)

Me too!

comment by MadHatter · 2023-11-17T23:08:30.824Z · LW(p) · GW(p)

There's an easy and obvious coordination mechanism for rationalists, which is just to say they're building X from science fiction book Y, and then people will back them to the hilt, as long as their reputation and track record for building things without hurting people is solid. Celebrated Book Y is trusted to explain the upsides and downsides of thing X, and people are trusted to have read the book and have the Right Opinions about all the tradeoffs and choices that come with thing X. 

So really, it all comes down to the thing that actually powers the synagogue's annual appeal - the Torah. The Torah has been doing its job for as long as there have been Jews to read it (or recite it from memory before it was written down), so everyone in the community can agree that the Torah and the Talmud and whatever are reasonable, so coordination becomes trivial. The rabbi standing up in front of the congregation has read all the Torah and Talmud there is, everyone knows and agrees on this fact, so the rabbi is trusted to have the best interests of the community at heart. Since the rabbi has the best interests of the community at heart, the expenses that the community has incurred are obviously real and obviously pressing. Since the rabbi hasn't been going around doing horrible things (he hasn't, right?), everyone knows that the money will actually be spent on the thing the rabbi says it will be spent on, not, I don't know, building nuclear weapons to bomb the competing synagogue across town.

I think rationalists have a deeply good sense of responsibility for their actions and opinions. That's the best thing about them. But I think they also don't have enough respect for the actions and opinions of other people (particularly Other People With Different Opinions Who Are Not As Smart As Me). That's the worst thing about them. As worst things go, it's a pretty minor character flaw; nobody's eating babies alive, they're just kind of smug and condescending in a way that is counterproductive.

I think the thing going on with your pledge drive was that people are afraid to be publicly wrong about something that ends up mattering a great deal, and they don't trust anyone else's opinion about whether they're right or wrong. To break through that force field, we need to start trusting our own wisdom literature, which is science fiction and fantasy, to help us solve the unsolvable challenges that Whatever-Is-Out-There keeps putting in front of us. Sure, mistakes will be made, bad books will be written, people will disagree about what the books mean. All that happens with the Torah and the Talmud too. It just doesn't seem realistic that any one person could act in accordance with most or all the ethical rules laid out in all the science fiction and fantasy books that exist and still end up building something truly evil by coordinating effectively with other people who are also steeped in the science fiction and fantasy traditions. What would that even look like?

comment by SilverFlame · 2023-04-30T21:43:32.403Z · LW(p) · GW(p)

I have a modest amount of pair programming/swarming experience, and there are some lessons I have learned from studying those techniques that seem relevant here:

  • General cooperation models typically opt for vagueness instead of specificity to broaden the audiences that can make use of them
  • Complicated/technical problems such as engineering, programming, and rationality tend to require a higher level of quality and efficiency in cooperation than more common problems
  • Complicated/technical problems also exaggerate the overhead costs of trying to harmonize thought and communication patterns amongst the team(s) due to reduced tolerance of failures

With these in mind, I would posit that a factor worth considering is that the traditional models of collaboration simply don't meet the quality and cost requirements in their unmodified form. It is quite easy to picture a rationalist determining that the cost of forging new collaboration models isn't worth the opportunity costs, especially if they aren't actively on the front lines of some issue they consider Worth It.

comment by frontier64 · 2021-08-10T01:34:45.802Z · LW(p) · GW(p)

I agree. I don't often say I agree for efficiency. You've made the point more eloquently than I could and my few sentences in support of you would probably strengthen your point socially, but it wouldn't improve the argument in some logical sense.

I love signaling agreement when I can do it and be just as eloquent as the writing I'm agreeing with. Famous authors put a lot of work into the blurbs they write recommending their friends' books. And that work shows. "X is a great summertime romp, full of adventure!" sure is a glowing recommendation, but it's not that eloquent, and I can tell the author didn't put much time into writing it. Guess they didn't think X was worth the time to write a real nice blurb. But when a good author writes an interesting blurb for a book, it gives me very high expectations.

I think this applies to ideas as well.

comment by Emiya (andrea-mulazzani) · 2020-12-12T18:15:26.449Z · LW(p) · GW(p)

Our culture puts all the emphasis on heroic disagreement and heroic defiance, and none on heroic agreement or heroic group consensus.

There's a lot more of this in anime, I feel. A lot of characters end up trusting someone from the bottom of their hearts, agreeing to follow their vision to the end, and you see whole groups of good guys wholeheartedly committed and united behind the same idea. Even main characters often show this trait toward others.

comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2019-08-19T16:21:04.390Z · LW(p) · GW(p)

"Yes, a group which can't tolerate disagreement is not rational.  But if you tolerate only disagreement—if you tolerate disagreement but not agreement—then you also are not rational". Well, agreement may just be perceived default. If I sit at a talk and find nothing to say about (and, mind you, that happens R. A. R. E. L. Y) it means either that I totally agree or that it is so wrong I don't know where to begin.

Also, your attitude of "we are not to win arguments, we are to win," and your explicit rejection of rhetoric (up to the seemingly-ignorant question "Why do people think that mentioning the death of some poor fella buying snake oil is an argument for regulation?" - because bringing it up like that is a rhetorical argument for that side even if it is not a rational one), may be another weakness more or less common among rationalists. There are ways to sway people to your side, not necessarily including direct lies - and still rationalists tend to refuse to use them.

comment by amitpamin · 2012-06-18T22:31:18.871Z · LW(p) · GW(p)

Wow. I don't identify as a cynic or spock, but of the many articles I have read on Less Wrong since I discovered it yesterday, this one is perhaps the most perspective changing.

comment by BenLowell · 2011-06-28T23:18:27.501Z · LW(p) · GW(p)

It makes me happy that those traits rationalists are usually thought to have -- disagreeable, unemotional, cynical, loners -- are unfamiliar to me. The rationalists I have grown up with over the past few years of reading this site are both optimistic and caring, along with many other qualities.

comment by coolcortex · 2009-09-18T10:37:53.185Z · LW(p) · GW(p)

Eliezer, I applaud your post. Bravo. I agree.

I'm new to this site and I was compelled to sign up immediately.

There's not much to add here, but that I hope people appreciate the significance of not shutting off all emotions, much like you argue in this post.

comment by rhollerith · 2009-03-20T21:44:00.956Z · LW(p) · GW(p)

Those who suspect me of advocating my unconventional moral position to signal my edgy innovativeness or my nonconformity should consider that I have held the position since 1992, but only since 2007 have I posted about it or discussed it with anyone but a handful of friends.

Replies from: AnnaSalamon, Cyan, Court_Merrigan
comment by AnnaSalamon · 2009-03-21T02:41:07.793Z · LW(p) · GW(p)

I believe rhollerith. I met him the other week and talked in some detail; he strikes me as someone who's actually trying. Also, he shared the intellectual roots of his moral position, and the roots make sense as part of a life-story that involves being strongly influenced by John David Garcia's apparently similar moral system some time ago.

Hollerith doesn't mean he was applying his moral position to AI design since '92, he means that since '92, he's been following out a possible theory of value that doesn't assign intrinsic value to human life, to human happiness, or to similar subjective states. I'm not sure why people are stating their disbelief.

Replies from: rhollerith
comment by rhollerith · 2009-03-21T05:30:46.473Z · LW(p) · GW(p)

Good point, Anna: John David Garcia did not work in AI or apply his system of values to the AI problem, but his system of values yields fairly unambiguous recommendations when applied to the AI problem -- much more unambiguous than human-centered ways of valuing things.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-21T05:33:49.211Z · LW(p) · GW(p)

Off-topic until May, all.

comment by Cyan · 2009-03-20T21:58:14.053Z · LW(p) · GW(p)

Unfortunately, they can't consider that you have held the position since 1992 -- all they can consider is that you claim to have done so. You could get your handful of friends to testify, I suppose...

Replies from: rhollerith
comment by rhollerith · 2009-03-21T05:41:33.190Z · LW(p) · GW(p)

Cyan points out, correctly, that all the reader can consider is that I claim to have held a certain position since 1992. But that is useful information for evaluating my claim that I am not just signaling, because a person is less likely to have deceived himself about having held a position than about his motivations for a sequence of speech acts! And I can add a second piece of useful information in the form of the following archived email. Of course I could be lying when I say that I found the following message on my hard drive, but participants in this conversation willing to lie outright are (much) less frequent than participants who have somehow managed to deceive themselves about whether they really have held a certain position since 1992, who in turn are less frequent than participants who have somehow managed to deceive themselves about their real motivation for advocating a certain position.

1995 Jul 4 16:20

Subject: Re: July 15th

Russell Brand writes:

Will you be able to join us at my house to hear John David Garcia talk about the mechanisms for thought, creativity and quantum mechanics?

I certainly would like to join you. Garcian ethics has become an important part of my philosophy, and I want to meet people who assign a similar importance to the ethical principles outlined in Creative Transformation.

Replies from: Cyan
comment by Cyan · 2009-03-22T07:06:43.988Z · LW(p) · GW(p)

I don't disagree with the above post -- I just wanted to make a pedantic distinction between claims and facts in evidence. (Also, my choice of the pronoun "they" rather than "we" was deliberate.)

comment by Court_Merrigan · 2009-03-21T02:33:32.391Z · LW(p) · GW(p)

I don't believe you.

Replies from: rhollerith
comment by rhollerith · 2009-03-21T05:31:09.666Z · LW(p) · GW(p)

Don't believe my advocacy of the moral position is not really just signaling or don't believe I've held the moral position since 1992?

Replies from: Court_Merrigan
comment by Court_Merrigan · 2009-03-21T05:50:23.020Z · LW(p) · GW(p)

I don't know how long you've held the position, or much care - I don't think it's relevant. But it is signaling, I think, for 2 reasons:

  • Your public concern with saying it's not signaling is just a way of signaling;
  • Claiming a certain timespan of belief is just an old locker room way of saying "I got here first." Which surely is signaling.

This is the sort of thing that causes unnecessary splintering in groups. I have a very visceral reaction to this sort of signaling (which I would label preening, actually). Perhaps I should examine that.

Replies from: Cameron_Taylor, rhollerith
comment by Cameron_Taylor · 2009-03-21T06:04:46.703Z · LW(p) · GW(p)

It is likely the case that rhollerith's moral position contains at least some element of signalling. His expression thereof probably does too. In fact, there are few aspects of social behavior that could be credibly claimed to be devoid of signalling. That said, these points do not impress me in the slightest.

Your public concern with saying it's not signaling is just a way of signaling;

Yes, public concern surely involves signalling. That doesn't mean that which is concerned about isn't also true. Revealing truth is usually an effective form of signalling.

Claiming a certain timespan of belief is just an old locker room way of saying "I got here first." Which surely is signaling.

It is completely unreasonable to dismiss claims because they are similar to something that was signalling in the locker room. Even the "I got here first" signalling in said locker room quite often accompanies the signaller having, in fact, got there first.

comment by rhollerith · 2009-03-21T07:35:22.147Z · LW(p) · GW(p)

I suspect that you have not become acquainted with my moral position! If you knew my moral position, you would be more likely to say I am ruining the party by crapping in the punchbowl than to say I am preening. (Preen. verb. Congratulate oneself for an accomplishment).

comment by Annoyance · 2009-03-20T15:40:46.442Z · LW(p) · GW(p)

People are also unwilling to express agreement because they know, and fear, group consensus and the pressure to fit in. Those usually lead to groupspeak and groupthink.

One of the primary messages of the local Powers That Be is that other people's evaluations should be a factor in your own - that other people's conclusions should be considered as evidence when you try to conclude - and that's incompatible with effective rationality, as well as with the techniques needed to prevent self-reinforcing mob consensus.

comment by Hordoo · 2014-10-05T17:31:51.060Z · LW(p) · GW(p)

It's not only the culture of disagreement at work. When I see "+1", I wonder what mental process produced it: does the commenter need some attention but have nothing to say? And so when I want to post "+1", I don't, so that no one thinks the same about me. Usually I try to add some complement to the original post, or a little correction to it with clear approval of the rest - something not important but, at the same time, not just "+1".

There is a way to solve this problem, but it is dangerous. A rationalist can watch a discussion closely - not only the clever thoughts, but also the overall effect the discussion has on other watchers - and act every time the discussion has the wrong effect. But by doing this the rationalist turns a rational discussion into a political one.

The only way is to remember the purpose for which the communication takes place. Not every communication is a discussion. And this is the most rational way: a rationalist should make every move knowing the purpose of that move. When we speak about cooperating rationalists, we should also remember that there are common goals and individual goals, and a rationalist should weigh both and each time pick whichever is most important at that moment.

And in the context of donations: what is the reason for a rationalist to publish his reasons for not donating? Guilt and an attempt to justify himself? Or maybe an attempt to draw attention: "now look, guys, how clever my thoughts are"? All the reasons I can imagine are individual goals that this "rationalist" considers more important than the common goals of the community. So either this "rationalist" is an enemy of the community, or he is just stupid (the same thing, generally).

comment by dspeyer · 2013-01-10T08:33:15.466Z · LW(p) · GW(p)

I wonder if one person can have a big effect on this sort of thing.

For example, I've known charity organizers to publish the number of donors and the total money donated every few days. Even without identifying donors, that does a lot to make people feel less alone.

comment by Epiphany · 2012-09-03T19:36:19.852Z · LW(p) · GW(p)

An alternate explanation: I've noticed a trend where rationalists seem more likely to criticize ideas in general. Perhaps a key experience that needs to happen before some people choose to undergo the rigors of becoming a rationalist is a "waking up" after some trauma that makes them err on the side of being paranoid. I have observed that most people without a "wake up" trauma prefer to simply retain optimism bias and tend to conserve thinking resources for other uses. Someone who thinks as much as you do probably does not feel a need to conserve thinking resources, and probably finds this concept ridiculous, but for most people, stamina for how much thinking they can do in a day is a factor - sad, but true. So, a trauma might be needed to make skepticism appeal to people. It may be that rational thought is often implemented as a defense mechanism and this leads them to create strong habits of doing rational thought in ways that tear ideas down without doing a comparable amount of practice in confirming ideas.

In my opinion, the solution would be to assist them in reaching a point of satiation when it comes to being great at tearing ideas down. If it's a self-defense mechanism, no amount of brilliant rational appeals will make them give it up. Even if one starts by explaining the risks of tearing ideas down too much, that's only confusing to the self-defense system: people won't know what to do with the cognitive dissonance it causes, so they're likely to reject it. If they feel secure because of a high level of ability at tearing ideas down, they'll probably be more open to seeing the limitations of that and doing more practice with methods of confirming ideas.

comment by timtyler · 2011-01-02T17:15:24.731Z · LW(p) · GW(p)

organizing atheists has been compared to herding cats, because they tend to think independently and will not conform to authority - The God Delusion

Maybe - but they seem to work together well enough - if you pay them.

Replies from: shokwave
comment by shokwave · 2011-01-02T17:25:16.478Z · LW(p) · GW(p)

Whereas theists will pay tithes to be ordered around.

Replies from: timtyler
comment by timtyler · 2011-01-02T17:48:46.950Z · LW(p) · GW(p)

They war with other theists as well. Cooperation benefits from a shared mission.

comment by Loren · 2009-03-21T23:07:23.919Z · LW(p) · GW(p)

Rather than making the drastic cultural changes that Eli talks about ourselves, perhaps it would be more efficient to piggyback onto another movement which is further down that path of culture change, so long as that movement isn't irrational. See this URL:

http://www.thankgodforevolution.com/node/1711

Check out the rest of the web site if you have time, or better yet, buy and read the book the web site is promoting. As you can see from the URL above, cooperation is an important value in the group.

I have been observing the spiritual practices promoted by this web site for just a few weeks, and already it's been giving me tremendous personal benefit. My relationship with my wife and kids is better, I have more enthusiasm for life when I get up in the morning, I no longer find doing chores so onerous, it's much easier for me to refrain from my vices, and I just generally feel more satisfied with the way things are. That's quite a bit for just a few weeks, and I sense the benefits are going to continue to grow with time so long as I adhere to the spiritual practices.

Even though I support Eli's non-profit (that can't be named), I have a very strong urge to give 10-fold as much money to the group that makes such an immediate and real difference in my life.

The really cool thing, though, is that the group is completely compatible with what Eli is trying to do, and should be able to help the cause rather than hinder it, unless we dismiss the group out of hand because their culture is more like a religion than a group of rationalists.

If you think the material on the web site URL I posted above is in any way irrational, please let me know about it. I'd like to hear what you're thinking.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-21T23:46:47.282Z · LW(p) · GW(p)

This isn't a comment, this is an attempted post in which you say in more detail what's going on over there and which "practices" you're talking about. It then gets voted up or voted down. In any case, don't try to do this sort of thing in one comment.

...though I see you don't have enough karma yet to post; but that's exactly what we've got the system for, eh?

comment by Psy-Kosh · 2009-03-20T20:54:31.736Z · LW(p) · GW(p)

Hrm, overall this makes sense. But HOW do you suggest actually doing that sort of thing here, on an online forum, in the general case, without it translating into a whole bunch of people going, effectively, "me too"?

I do remember when for a certain unnamed organization you started the "donate today and tomorrow" drive (or whatever you called it, something to that effect), I did post to a certain mailing list my thoughts that both led me to donate and what I was thinking in response to that sort of appeal, etc etc.

comment by byrnema · 2009-03-20T18:53:56.241Z · LW(p) · GW(p)

In the pursuit of truth it is rational to argue and, at first glance, irrational to agree. The culling of truth proceeds by "leaving be" the material that is correct and modifying (arguing with) the part that is not. (While slightly tangential, it is good to recall that the scientific method can only argue with a hypothesis; never confirm it.)

At a conference where there is a dialogue it is a waste of time to agree, as a lack of argument is already implicit agreement. After the conference, however, the culling of truth further progresses by assimilating and disseminating the correct material. So while it may not be rational to go to the mike and say, "I agree, you are brilliant", it is a form of true agreement to tell other people that they were brilliant.

Nevertheless, we're human beings, and by that I mean we're not entirely rational in the sense of a deterministic computational machine. We care about our interaction with our community, and in this sense it is rational to give encouragement.

comment by byrnema · 2009-03-24T05:02:11.277Z · LW(p) · GW(p)

I'm a beginner that thinks meta-discussions are fun..

Eliezer is asking about whether we should tolerate tolerance. Let's suppose -- for the sake of argument -- that we do not tolerate tolerance. If X is intolerable, then the tolerance of X is intolerable.

So if Y tolerates X, then Y is intolerable. And so on.

Thus, if we accept that we cannot tolerate toleration, then also we cannot tolerate toleration of tolerance, and also we cannot tolerate toleration of toleration of tolerance.

I would think of tolerance as a relationship between X and Y in which Y acquires the intolerability of X.

comment by [deleted] · 2009-03-21T09:56:24.538Z · LW(p) · GW(p)

I think that there are parts of life where we should learn to applaud strong emotional language, eloquence, and poetry. When there's something that needs doing, poetic appeals help get it done, and, therefore, are themselves to be applauded.

That may be, but I generally find YOUR poetic appeals to make me throw up in my mouth. I read my mother your bit about how amazing it was that love was born out of the cruelty of natural selection, and even she thought it was sappy.

Replies from: MBlume
comment by MBlume · 2009-03-21T10:06:42.929Z · LW(p) · GW(p)

I read my mother your bit about how amazing it was that love was born out of the cruelty of natural selection, and even she thought it was sappy.

I, on the other hand, nearly started sobbing, so I guess it takes all kinds.

Replies from: Corey_Newsome
comment by nazgulnarsil · 2009-03-20T10:51:53.733Z · LW(p) · GW(p)

I don't see how individualism can beat out collectivism as long as groups = more power. For individualism to work, each person would have to wield power equal to any group's.

Replies from: Nick_Novitski
comment by Nick_Novitski · 2009-03-20T16:40:30.099Z · LW(p) · GW(p)

One view doesn't need to "beat out" the other; for each societal state, there's a corresponding equilibrium between individualistic and group-think (or rather, group-think for varying sizes of groups) as each person weighs the costs and benefits of adherence for them. In a world of individuals, an organized and specialized group of any size "= more power." Witness sedentary farmers displacing hunter-gatherers. On the other hand, in a world of groups, a rogue individualistic prisoner's-dilemma-defector is king. Witness sociopaths in corporate structures, or the plots of far too many Star Trek episodes.

The balance of power can shift as Individualism becomes a better choice, due to its risks lessening and rewards increasing, whether due to culture, technology, or extensive debates on websites.