Intellectual Hipsters and Meta-Contrarianism

post by Scott Alexander (Yvain) · 2010-09-13T21:36:33.236Z · LW · GW · Legacy · 367 comments

Related to: Why Real Men Wear Pink, That Other Kind of Status, Pretending to be Wise, The "Outside The Box" Box

WARNING: Beware of things that are fun to argue -- Eliezer Yudkowsky

Science has inexplicably failed to come up with a precise definition of "hipster", but from my limited understanding a hipster is a person who deliberately uses unpopular, obsolete, or obscure styles and preferences in an attempt to be "cooler" than the mainstream. But why would being deliberately uncool be cooler than being cool?

As previously discussed, in certain situations refusing to signal can be a sign of high status. Thorstein Veblen invented the term "conspicuous consumption" to refer to the showy spending habits of the nouveau riche, who unlike the established money of his day took great pains to signal their wealth by buying fast cars, expensive clothes, and shiny jewelry. Why was such flashiness common among new money but not old? Because the old money was so secure in their position that it never even occurred to them that they might be confused with poor people, whereas new money, with their lack of aristocratic breeding, worried they might be mistaken for the poor if they didn't make it blatantly obvious that they had expensive things.

The old money might have started off not buying flashy things for pragmatic reasons - they didn't need to, so why waste the money? But if F. Scott Fitzgerald is to be believed, the old money actively cultivated an air of superiority to the nouveau riche and their conspicuous consumption; not buying flashy objects became a matter of principle. This makes sense: the nouveau riche need to differentiate themselves from the poor, but the old money need to differentiate themselves from the nouveau riche.

This process is called countersignaling, and one can find its telltale patterns in many walks of life. Those who study human romantic attraction warn men not to "come on too strong", and this has similarities to the nouveau riche example. A total loser might come up to a woman without a hint of romance, promise her nothing, and demand sex. A more sophisticated man might buy roses for a woman, write her love poetry, hang on her every wish, et cetera; this signifies that he is not a total loser. But the most desirable men may deliberately avoid doing nice things for women in an attempt to signal they are so high status that they don't need to. The average man tries to differentiate himself from the total loser by being nice; the extremely attractive man tries to differentiate himself from the average man by not being especially nice.

In all three examples, people at the top of the pyramid end up displaying characteristics similar to those at the bottom. Hipsters deliberately wear the same clothes uncool people wear. Families with old money don't wear much more jewelry than the middle class. And very attractive men approach women with the same lack of subtlety a total loser would use.^1

If politics, philosophy, and religion are really about signaling, we should expect to find countersignaling there as well.


Pretending To Be Wise

Let's go back to Less Wrong's long-running discussion on death. Ask any five year old child, and ey can tell you that death is bad. Death is bad because it kills you. There is nothing subtle about it, and there does not need to be. Death universally seems bad to pretty much everyone on first analysis, and what it seems, it is.

But as has been pointed out, along with the gigantic cost, death does have a few small benefits. It lowers overpopulation, it allows the new generation to develop free from interference by their elders, it provides motivation to get things done quickly. Precisely because these benefits are so much smaller than the cost, they are hard to notice. It takes a particularly subtle and clever mind to think them up. Any idiot can tell you why death is bad, but it takes a very particular sort of idiot to believe that death might be good.

So pointing out this contrarian position, that death has some benefits, is potentially a signal of high intelligence. It is not a very reliable signal, because once the first person brings it up everyone can just copy it, but it is a cheap signal. And to the sort of person who might not be clever enough to come up with the benefits of death themselves, and only notices that wise people seem to mention death can have benefits, it might seem super extra wise to say death has lots and lots of great benefits, and is really quite a good thing, and if other people should protest that death is bad, well, that's an opinion a five year old child could come up with, and so clearly that person is no smarter than a five year old child. Thus Eliezer's title for this mentality, "Pretending To Be Wise".

If dwelling on the benefits of a great evil is not your thing, you can also pretend to be wise by dwelling on the costs of a great good. All things considered, modern industrial civilization - with its advanced technology, its high standard of living, and its lack of typhoid fever - is pretty neat. But modern industrial civilization also has many costs: alienation from nature, strains on the traditional family, the anonymity of big city life, pollution and overcrowding. These are real costs, and they are certainly worth taking seriously; nevertheless, the crowds of emigrants trying to get from the Third World to the First, and the lack of any crowd in the opposite direction, suggest the benefits outweigh the costs. But in my estimation - and speak up if you disagree - people spend a lot more time dwelling on the negatives than on the positives, and most people I meet coming back from a Third World country have to talk about how much more authentic their way of life is and how much we could learn from them. This sort of talk sounds Wise, whereas talk about how nice it is to have buses that don't break down every half mile sounds trivial and selfish.

So my hypothesis is that if a certain side of an issue has very obvious points in support of it, and the other side of an issue relies on much more subtle points that the average person might not be expected to grasp, then adopting the second side of the issue will become a signal for intelligence, even if that side of the argument is wrong.

This only works for issues so muddled to begin with that there is no fact of the matter, or where the fact of the matter is difficult to tease out: no one tries to signal intelligence by saying that 1+1 equals 3 (although it would not surprise me to find a philosopher who says truth is relative and this equation is a legitimate form of discourse).

Meta-Contrarians Are Intellectual Hipsters

A person who is somewhat upper-class will conspicuously signal eir wealth by buying difficult-to-obtain goods. A person who is very upper-class will conspicuously signal that ey feels no need to conspicuously signal eir wealth, by deliberately not buying difficult-to-obtain goods.

A person who is somewhat intelligent will conspicuously signal eir intelligence by holding difficult-to-understand opinions. A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.

According to the survey, the average IQ on this site is around 145^2. People on this site differ from the mainstream in that they are more willing to say death is bad, more willing to say that science, capitalism, and the like are good, and less willing to say that there's some deep philosophical sense in which 1+1 = 3. That suggests people around that level of intelligence have reached the point where they no longer feel it necessary to differentiate themselves from the sort of people who aren't smart enough to understand that there might be side benefits to death. Instead, they are at the level where they want to differentiate themselves from the somewhat smarter people who think the side benefits to death are great. They are, basically, meta-contrarians, who counter-signal by holding opinions contrary to those of the contrarians. And in the case of death, this cannot but be a good thing.

But just as contrarians risk becoming too contrary, moving from "actually, death has a few side benefits" to "DEATH IS GREAT!", meta-contrarians are at risk of becoming too meta-contrary.

All the possible examples here are controversial, so I will just take the least controversial one I can think of and beg forgiveness. A naive person might think that industrial production is an absolute good thing. Someone smarter than that naive person might realize that global warming is a strong negative to industrial production and desperately needs to be stopped. Someone even smarter than that, to differentiate emself from the second person, might decide global warming wasn't such a big deal after all, or doesn't exist, or isn't man-made.

In this case, the contrarian position happened to be right (well, maybe), and the third person's meta-contrariness took em further from the truth. I do feel like there are more global warming skeptics among what Eliezer called "the atheist/libertarian/technophile/sf-fan/early-adopter/programmer empirical cluster in personspace" than among, say, college professors.

In fact, very often, the uneducated position of the five year old child may be deeply flawed and the contrarian position a necessary correction to those flaws. This makes meta-contrarianism a very dangerous business.

Remember, most everyone hates hipsters.

Without meaning to imply anything about whether any of these positions are correct^3, the following triads come to mind as connected to an uneducated/contrarian/meta-contrarian divide:

- KKK-style racist / politically correct liberal / "but there are scientifically proven genetic differences"
- misogyny / women's rights movement / men's rights movement
- conservative / liberal / libertarian^4
- herbal-spiritual-alternative medicine / conventional medicine / Robin Hanson
- don't care about Africa / give aid to Africa / don't give aid to Africa
- Obama is Muslim / Obama is obviously not Muslim, you idiot / Patri Friedman^5

What is interesting about these triads is not that people hold the positions (which could be expected by chance) but that people get deep personal satisfaction from arguing the positions even when their arguments are unlikely to change policy^6 - and that people identify with these positions to the point where arguments about them can become personal.

If meta-contrarianism is a real tendency in over-intelligent people, it doesn't mean they should immediately abandon their beliefs; that would just be meta-meta-contrarianism. It means that they need to recognize the meta-contrarian tendency within themselves and so be extra suspicious and careful about a desire to believe something contrary to the prevailing contrarian wisdom, especially if they really enjoy doing so.


Footnotes

1) But what's really interesting here is that people at each level of the pyramid don't just follow the customs of their level. They enjoy following the customs, it makes them feel good to talk about how they follow the customs, and they devote quite a bit of energy to insulting the people on the other levels. For example, old money call the nouveau riche "crass", and men who don't need to pursue women call those who do "chumps". Whenever holding a position makes you feel superior and is fun to talk about, that's a good sign that the position is not just practical, but signaling-related.

2) There is no need to point out just how unlikely it is that such a number is correct, nor how unscientific the survey was.

3) One more time: the fact that those beliefs are in an order does not mean some of them are good and others are bad. For example, "5 year old child / pro-death / transhumanist" is a triad, and "warming denier / warming believer / warming skeptic" is a triad, but I personally support 1+3 in the first triad and 2 in the second. You can't evaluate the truth of a statement by its position in a signaling game; otherwise you could use human psychology to figure out if global warming is real!

4) This is my solution to the eternal question of why libertarians are always more hostile toward liberals than toward conservatives, even though they have just about as many points of real disagreement with the conservatives.

5) To be fair to Patri, he admitted that those two posts were "trolling", but I think the fact that he derived so much enjoyment from trolling in that particular way is significant.

6) Worth a footnote: I think in a lot of issues, the original uneducated position has disappeared, or been relegated to a few rednecks in some remote corner of the world, and so meta-contrarians simply look like contrarians. I think it's important to keep the terminology, because most contrarians retain a psychology of feeling like they are being contrarian, even after they are the new norm. But my only evidence for this is introspection, so it might be false.

367 comments

Comments sorted by top scores.

comment by Scott Alexander (Yvain) · 2010-09-15T14:49:19.234Z · LW(p) · GW(p)

I also recently noticed this triad:

Seek sex + money / pursue only pure truth and virtue / seek sex + money

Replies from: MichaelVassar, AlexanderRM
comment by MichaelVassar · 2012-08-12T17:14:26.489Z · LW(p) · GW(p)

To be fair, I think that this triad is largely a function of the sort of society one lives in. It could be summarized as "submit to virtuous social orders, seek to dominate non-virtuous ones if you have the ability to discern between them".

Replies from: EditedToAdd
comment by EditedToAdd · 2017-03-26T18:10:27.421Z · LW(p) · GW(p)

I think it's more along the lines of: people in the third stage have acquired and digested all the low-hanging and medium-hanging fruit that those in the second stage are struggling to acquire, such that advancing further is now really hard. So they now seek sex and money/power, partly because acquiring those will (in the long run) help them advance further in the areas that they have currently put on hold, and partly because, of course, it's also nice to have them.

comment by AlexanderRM · 2015-03-25T04:51:51.943Z · LW(p) · GW(p)

Could anyone elaborate on this? All the ones listed in the article seem fairly obvious or well-explained, but nothing jumps out to me on this one. I think the problem is that I don't see what positions these are occupying or signaling: The clothing stuff is about wealth, while all the political ones are about intelligence (apparent intelligence, specifically). My assumption is that the first is someone who has very little money and the last is someone who has a lot, but then I'm not sure where the middle one would be.

That and perhaps that Yvain didn't list any distinguishing features between the first and last ones. I'm noticing now that all the counter-signaling ones tend to be slightly different - I'm sure the Old Rich didn't wear the exact same things as the poor, but rather nicer but less showy clothes. All the political examples have the third-stage ones usually acknowledging the existence of and problems with the lowest stage, often with significant differences. Likewise Hipsters have a lot of distinctly hipster traits that don't make them look like any particular non-mainstream group, although my knowledge of Hipsters comes almost entirely from jokes about Hipsters rather than having seen the phenomenon much.

comment by simplicio · 2010-09-14T02:07:19.446Z · LW(p) · GW(p)

Implementing your suggestion is easy. Just keep going "meta" until your opinions become stupid, then set meta = meta - 1.

There's an art to knowing when;

Never try to guess.

Toast until it smokes & then

20 seconds less.

Replies from: CronoDAS
comment by CronoDAS · 2010-09-14T02:42:41.570Z · LW(p) · GW(p)

I'm reminded of some "advice" I read about making money in the stock market:

Buy a stock, wait until it goes up, and then sell it. If it doesn't go up, then don't have bought it.

Replies from: Clippy
comment by Clippy · 2010-09-15T17:07:43.264Z · LW(p) · GW(p)

That strategy requires an impossible action in the case that the stock does not go up.

Replies from: homunq
comment by homunq · 2012-05-20T13:57:54.725Z · LW(p) · GW(p)

That comment made me smile. I didn't upvote it, but I just hid a paperclip, making the moment when I'll have to buy another box that much closer.

edit: actually, I wrote the above before I actually did it. But when I looked in the place I expected to find paperclips, I didn't find any, making the probability that I'll buy paperclips in the near future somewhat higher. So it's all good.

Replies from: Clippy
comment by Clippy · 2012-06-08T04:23:40.817Z · LW(p) · GW(p)

I am soooooo wasted right now...

comment by Emile · 2010-09-14T08:04:58.919Z · LW(p) · GW(p)

One more time: the fact that those beliefs are in an order does not mean some of them are good and others are bad. For example, "5 year old child / pro-death / transhumanist" is a triad, and "warming denier / warming believer / warming skeptic" is a triad, but I personally support 1+3 in the first triad and 2 in the second. You can't evaluate the truth of a statement by its position in a signaling game; otherwise you could use human psychology to figure out if global warming is real!

Well worth stressing.

It's possible to go meta on nearly any issue, and there are a lot of meta-level arguments - group affiliation, signaling, rationalization, ulterior motives, whether a position is contrarian or supported by the majority, who the experts are and how much we should trust them, which group is persecuted the most, straw man positions and whether anybody really holds them, slippery slopes, different ways to interpret statements, who is working under which cognitive bias ...

Which is why I prefer discussions to stick to the object level rather than go meta. It's just too easy to rationalize a position in meta, and to find convincing-sounding arguments as to why the other side mistakenly disagrees with you. And meta-level disagreements are more likely to persist in the long run, because they are hard to verify.

Sure, meta-level arguments are very valuable in many cases, and we shouldn't drop them altogether. But we should be very cautious when using them.

Replies from: minusdash, Will_Newsome
comment by minusdash · 2015-01-03T01:42:31.230Z · LW(p) · GW(p)

That's a triad too: naive instinctive signaling / signaling-aware people disliking signaling / signaling is actually a useful and necessary thing.

comment by Will_Newsome · 2010-09-18T08:26:25.028Z · LW(p) · GW(p)

Going meta often introduces burdensome details. This will only lead you closer to truth when your epistemic rationality is strong enough to shoulder the weight.

comment by Daniel_Burfoot · 2010-09-14T19:10:27.938Z · LW(p) · GW(p)

One element of meta-contrarian reasoning is as follows. Consider a proposition P that is hard for a layperson to assess. Because of this difficulty, an individual must rely on others for information. Now, a reasonable layperson might look around, listen with an open mind to all the arguments, and assign a probability to P based on whichever arguments seem most plausible.

The problem is that certain propositions have large corps of people whose professions depend on the proposition being true, but no counterforce of professional critics. So there is a large group of people (priests) who are professionally committed to the proposition "God exists". The existence of this group causes an obvious bias in the layperson's decision algorithm. Other groups, like doctors, economists, soldiers, and public school teachers, have similar commitments. Consider the proposition "public education improves national academic achievement." It could be true, it could be false - it's an empirical question. But all public school teachers are committed to this proposition, and there are very few people committed to the opposite.

So meta-contrarians explicitly correct for this kind of bias. I don't necessarily think that the public school proposition is false, but it should be thoroughly examined. I don't necessarily think that the nation would be safer if we abolished the Army and Marine Corps, but it might be.

Replies from: komponisto
comment by komponisto · 2010-09-14T21:23:19.168Z · LW(p) · GW(p)

The problem is that certain propositions have large corps of people whose professions depend on the proposition being true, but no counterforce of professional critics.

This really is a very good point.

comment by HughRistik · 2010-09-14T00:59:40.180Z · LW(p) · GW(p)

I think this post speaks of an interesting signaling element in societal dialectics. Let's call your hypothesis the "contrarian signaling" hypothesis. But to me, your post also hints at a couple other hypotheses behind this behavior.

The first hypothesis is the mundane one: that people end up in these groups with positions contrary to other positions because those are just the positions that are more plausible to them, and they choose their subcultures based on their actual tastes. The reason that people divide themselves into groups with contrary views is that people have different phenotypes. I'm sure you've already thought of it, but I want to say a little more about it.

Under this hypothesis, hipsters are hipsters primarily because they like retro clothes (and other aspects of the culture). They would have worn these same clothes back when they were in fashion, whereas true contrarians wouldn't have. This might be easier to imagine with another overlapping subculture: hippies. Hippies don't idolize the 60's to be contrarian: they idolize the 60's because they like the ideals of the 60's and feel nostalgic for them.

Now, you may say that the 60's were a contrarian time (which could make idolizing them still evidence of contrarianism), so let's look at some other examples: steampunks and cyberpunks, who idolize technology (of the past/alternative past, or of the future, respectively). Are these folks romanticizing technology to be contrarian? Maybe, but I think there is a good chance that they just have some personality trait that leads them to think technology is cool. Similarly, I bet a lot of goths would enjoy the macabre and wear black clothing regardless of whether those tastes were contrary to the mainstream culture.

A good test of this hypothesis would be to ask: if the culture became more like your group, would you act the same way, or would you feel motivated to join some sort of new group that's contrarian to the way your culture now works? If everyone started dressing retro, would the hipsters start wearing suits? If everyone dressed in black, would the goths start wearing white? If most intelligent, educated people started being skeptical of global warming, would current global warming skeptics quietly switch sides?

My second hypothesis is a social affiliation hypothesis: people divide themselves into subcultures/groups with contrary views not (just) because they want to signal greater intelligence and taste than the members of an inferior group, but because they like the people in that group. Under this hypothesis, being contrarian "isn't about being contrarian": it's about affiliating with other people who you like for other reasons.

Under this hypothesis, hipsters are hipsters not primarily because they want to show themselves as contrarian, or because they even like hipster aesthetics, but because they like the other people who are hipsters and want to affiliate with them. Goths wear black clothes not because they want to be different or because they like black, but because they like the sorts of people who also wear black clothes. Global warming skeptics/believers profess their beliefs because they want to affiliate and interact with other people who are global warming skeptics/believers.

(For instance, you might have some other position that you want to convince these people of, and then you use your shared position on global warming as a bridge to get them to believe your other position... this I think is part of the mechanism by which we see correlations between different political and policy beliefs. Your friend comes over and says "hey yo, you know we agree on X which shows that I'm epistemically trustworthy and the type of person who you agree with... check out this other cool belief Y!").

A good test of this hypothesis would be if you're in a group, and your friends migrated to a different group, would you follow them, or would you stay? If you ran into another group of people who were more like you (but their group had different aesthetics, or wasn't contrarian), would you join their group, or stay with your current one?

None of these hypotheses are mutually exclusive: typically, people join a group because they like the practices and ideas of the group, because they like the personality traits of the people, and because they want to be contrarians who define themselves as different and superior to another group. These factors probably have different weights for different people in different groups.

Replies from: Relsqui, FrogSaga, Orfell
comment by Relsqui · 2010-09-14T09:59:03.057Z · LW(p) · GW(p)

But to me, your post also hints at a couple other hypotheses behind this behavior.

My reading of the post was not so much that it proposed contrarianism as an explanation for other cultural divisions, but that people's inclination towards a given level of contrarianism is itself a cultural division. We don't need to hypothesize about why people are metacontrarians; we're defining them by the habit of being metacontrary.

However, your hypotheses are still interesting in their own right. I predict that, were we to run your experiments, the first one would tend to describe the early adopters of a given subculture--the first hipster actually liked those dumb glasses, etc.--and later members would increasingly be described by the latter.

This is roughly what Gladwell's Tipping Point is about, actually.

check out this other cool belief Y!

I think that this is how all debates (and evangelism) should sound.

comment by FrogSaga · 2013-03-05T16:53:47.112Z · LW(p) · GW(p)

The account of the nouveau riche's ostentatious behavior and appearance compared to the relatively subtle expressions exhibited by the old-money generation has causes and explanations far beyond "counter-signaling". I do not mean to say that counter-signaling doesn't play a part; however it's a small facet and not nearly as important as other factors.

(I realize that this may come off as overly nit-picky or outright derailing. However, as the bit I am critiquing is one of the foundational points of your article, I feel there is value in calling attention to it.)

You did not account for the nouveau riche generation's updated social conditioning, such as the increase in the volume and effectiveness of mass marketing. It's important to know what sorts of films, books, advertising trends, etc. were prevalent and popular during the nouveau riche's formative years. What sorts of values became most important in society? So much changed in people psychologically with the rise of consumer culture that it is impossible to track human behavior unless we take that rather sudden cultural evolution into account.

A person does not need to be counter-signaling when she or he identifies with a particular demographic. A very simple example: the child with enormous wealth watches the same cartoons as the middle-class child and learns a similar set of social standards and values, and both children remain in a similar marketing demographic as they age. When the wealthy child becomes an adolescent, she or he will still attribute value to certain types of behaviors and appearances.

comment by Orfell · 2010-09-14T07:36:40.166Z · LW(p) · GW(p)

I have a strong urge to signal my difference to the Lesswrong crowd. Should I be worried?

comment by steven0461 · 2010-09-13T23:16:48.981Z · LW(p) · GW(p)

Here's a different hypothesis that also accounts for opinions reverting in the direction of the original uneducated position. Suppose "uneducated" and "contrarian" opinion are two independent random (e.g. normal) variables with the same mean representing the truth (but maybe higher variance for "uneducated"); and suppose what you call "meta-contrarian" opinion is just the truth. Then if you start from "contrarian" it's more likely that "meta-contrarian" opinion will be in the direction of "uneducated" than in the opposite direction, simply because "uneducated" contains nonzero information about where the truth is. I think you can also see this as a kind of regression to the mean.
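
This regression-to-the-mean claim is easy to check numerically. A minimal simulation sketch, assuming normally distributed opinions centered on the truth with a larger variance for the "uneducated" opinion (the specific variances, and the use of numpy, are illustrative choices, not anything from the comment):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
truth = 0.0                               # "meta-contrarian" opinion = the truth
uneducated = rng.normal(truth, 2.0, n)    # higher-variance "uneducated" opinion
contrarian = rng.normal(truth, 1.0, n)    # lower-variance "contrarian" opinion

# Moving from the contrarian opinion toward the truth counts as moving "in the
# direction of" the uneducated opinion when the truth and the uneducated
# opinion lie on the same side of the contrarian opinion.
same_direction = np.sign(truth - contrarian) == np.sign(uneducated - contrarian)
print(same_direction.mean())              # ~0.65, reliably above 0.5
```

With these parameters the frequency comes out around 0.65, and it stays above one half for any variance ratio, since the uneducated opinion always carries nonzero information about where the truth is.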

Replies from: VAuroch, ciphergoth, Will_Newsome
comment by VAuroch · 2014-01-06T02:47:31.369Z · LW(p) · GW(p)

I don't see why we should expect the random variables to be based around "truth". I'd believe in a common centerpoint, but I think it would be more usefully labeled "human-intuitive position" than "truth".

Replies from: AlexanderRM
comment by AlexanderRM · 2015-03-25T05:07:13.381Z · LW(p) · GW(p)

It seems to me that the uneducated-person opinion would be the "human-intuitive position", and the educated-person opinion would be... changed off from that, with a tendency to be somewhere in the direction of truth. The uneducated opinion won't always be constant across different times and cultures, though (death-is-bad is probably a universal for 5-year-olds; racism might be pretty universal outside of isolated groups with populations too small to have any racial diversity), so I don't think it will usually be an inherent position.

I think steven0461's statement makes some sense though if you talk about the average position among many different issues, and also if you only look at issues where things like evidence and human reasoning can tell you about the truth. I expect that the uneducated opinion will appear distributed randomly around the truth (although not absurdly far from it, when you consider the entirety of possibility-space), and the educated opinion will diverge from it in a way that will usually be towards what the evidence supports, but often overshooting, undershooting, or going far to the side. Likewise the 3rd-stage opinion should diverge from the educated opinion in a similar manner, except... by definition it will be in the rough direction of the original position, or we'd just call it a more radical version of the educated position.

However there seems to be a MAJOR potential pitfall in reasoning about where they're located, since all the examples listed tend to align politically (roughly conservative/liberal/LessWrong-type). So trying to reason by looking at those examples and seeing which one is true, and then trying to derive a theory on the tendencies involved based on that, will tend to give you a theory which supports your position being right.

comment by Paul Crowley (ciphergoth) · 2011-01-07T15:35:54.312Z · LW(p) · GW(p)

I think that this only works if positions are in one dimension. If they are in many dimensions then I suspect that the truth and the uneducated opinion are on the same side of the contrarian opinion as often as they are on opposite sides.

EDIT: I no longer think the above makes any sense. I'm tired, sorry!

comment by Will_Newsome · 2011-01-07T14:42:17.637Z · LW(p) · GW(p)

I'm a little sad that I've integrated this pretty thoroughly into my epistemology because it's a very good point and yet most people probably missed this comment.

Replies from: shokwave
comment by shokwave · 2011-01-07T15:14:15.847Z · LW(p) · GW(p)

Thank you for commenting and bringing this to my attention. This also makes for a fantastic "shut down the contrarian" response when your meta-contrarianism is questioned.

comment by Will_Newsome · 2010-09-13T22:11:48.995Z · LW(p) · GW(p)

Yet another thought-provoking post from Yvain.

I've implicitly noticed the meta-contrarian trend on Less Wrong and to a lesser extent in SIAI before, and I think it's led me to take my meta-meta-contrarianism a little far sometimes. I get a little too much enjoyment out of trolling cryonicists and libertarians: indeed, I get a feeling of self-righteousness because it seems that I'm doing a public service by pointing out what appears to be a systematic bias and flaw of group epistemology in the Less Wrong belief cluster. This feeling is completely disproportionate to the extent that I'm actually helping: in general, the best way to emphasize the weaker points of an appealing argument isn't to directly troll the person who holds it. Steve Rayhawk is significantly better than me in this regard. So thanks, Yvain, for pointing out these different levels of meta and how the sense of superiority they give can lead to bad epistemic practice. I'll definitely check for signs of this next time I'm feeling epistemically self-righteous.

Replies from: Relsqui
comment by Relsqui · 2010-09-14T09:09:02.230Z · LW(p) · GW(p)

I'll definitely check for signs of this next time I'm feeling epistemically self-righteous.

A friend of mine likes to say that, if you find that your personal opinion happens to align perfectly with what popular culture tells you to think, you should examine that opinion really closely to make sure it's really yours. It's a similar heuristic to the self-righteousness one, applied specifically to the first-level or "uninformed" position (since "uninformed" is really a lot closer to "only informed subconsciously, by local culture and media").

comment by RobinZ · 2011-02-09T15:51:58.331Z · LW(p) · GW(p)

Belatedly, a quotation to hang at the top of the post:

There is a great difference between still believing something and believing it again. Still to believe that the moon affects the plants reveals stupidity and superstition, but to believe it again is a sign of philosophy and reflection.

Georg Christoph Lichtenberg, 1775

comment by loqi · 2010-09-14T17:19:41.320Z · LW(p) · GW(p)

Whenever holding a position makes you feel superior and is fun to talk about, that's a good sign that the position is not just practical, but signaling-related.

Readers be warned: Internalizing this insight may result in catastrophic loss of interest in politics.

Replies from: Vladimir_M, AlexanderRM
comment by Vladimir_M · 2010-09-14T18:44:38.057Z · LW(p) · GW(p)

Perhaps for some people -- but on the other hand, it creates an even higher intellectual challenge to achieve accurate understanding. Understanding hard and complicated things in math and science is extremely challenging, but ultimately, you still have fully reliable trusted authorities to turn to when you're lost, and you know they won't lie and bullshit you. In politics and heavily politicized fields in general, there is no such safety net; you are completely on your own.

comment by AlexanderRM · 2015-03-25T05:09:44.563Z · LW(p) · GW(p)

I've known politics is largely about status signaling (which hasn't caused any reduction of interest in issues which our society politicizes, however, just in elections and the like) since I started reading LW, but I just realized that reading LessWrong makes me feel superior (although I've noticed this before, it seems hard to avoid) and it's fun to talk about. That's horrifying.

comment by mattnewport · 2010-09-14T16:29:37.875Z · LW(p) · GW(p)

Here's my alternative explanation for your triads which, while obviously a caricature, is no more so than yours, and I think is more accurate: un-educated / academic / educated non-academic.

Essentially your 'contrarian' positions are the mainstream positions you are more or less required to hold to build a successful academic (or media) career. Some academics can get away with deviation in some areas (at some cost to their career prospects) but relatively few are willing to risk it. Intelligent, educated individuals who have not been subject to excessive exposure to academic groupthink are more likely to take your meta-contrarian positions.

See also Moldbug's thoughts on the University.

Replies from: nick012000
comment by nick012000 · 2010-09-29T11:22:30.384Z · LW(p) · GW(p)

Seems to me like a (hopefully Friendly) seed AI is more likely to provide the "Schelling point" for an alternative to the modern US government than any sort of reactionary "antiversity".

EDIT: Come to think of it, a libertarian space society could probably do it, too, much the same way as the Soviet Union always had "surrender to the US" as an eject button.

comment by Apprentice · 2010-09-14T00:04:03.450Z · LW(p) · GW(p)

A while back the "Steveosphere" had a list of items for which "the masses display more common sense than the smarties do". These suggest that they think they have located Yvain-clusters of the following type:

  1. Troglodyte position.
  2. Liberal position.
  3. Troglodyte position held for sophisticated reasons.

Replies from: AlexanderRM
comment by AlexanderRM · 2015-03-25T05:47:30.473Z · LW(p) · GW(p)

...many of those questions are rather odd. I went in expecting things like "are the tides controlled by the oceans", questions phrased in ways that sound stupid but are actually correct (or the "is it scientific to say" sort), which would have shown that deliberately avoiding stupid-sounding statements can lead smart people to make incorrect statements.

And some, like "Genes play a major role in determining personality." and "Things for blacks in the US have improved over time.", fall into that category. I particularly liked "Whites are hurt by affirmative action policies that favor blacks". However, many of the questions outright contain "should" statements where you actually cannot say that the answers the "smarties" gave were factually incorrect, because they require getting into our goals and morality, or at least talking about very complicated things from that angle.

comment by steven0461 · 2010-09-13T23:37:25.415Z · LW(p) · GW(p)

Global warming was the least controversial example you could think of? Seriously?

Replies from: Yvain, ciphergoth, Relsqui
comment by Scott Alexander (Yvain) · 2010-09-14T19:10:24.514Z · LW(p) · GW(p)

Well, the example was to show that there are certain meta-contrarian views held by a big part of this community which are trivially wrong and serve as proof that they have gone too far. Given that restriction, what less controversial example would you have preferred?

I really would have liked to use the racism example, because it's most elegant. The in-group bias means people will naturally like their own race more than others. Some very intelligent and moral people come up with the opposing position that all races are equal; overcoming one's biases enough to believe this becomes (rightly) correlated with high intelligence and morality. This contrarian idea spreads until practically everyone believes it and signals it so much as to become annoying and inane. This creates a niche for certain people to signal their difference to the majority by becoming pro-racial differences. But taken too far, this meta-contrarian position could easily lead to racism.

But any post that includes a whole paragraph on racism automatically ends up with the comments entirely devoted to discussing racism, and the rest of the post completely ignored. Feminism would also have worked, but I would have to be dumb as a rock to voluntarily bring up gender issues on this blog. Global warming seemed like something that Less Wrong is generally willing to admit is correct and doesn't care that much about, while still having enough of an anti-global-warming faction to work as an example.

comment by Paul Crowley (ciphergoth) · 2010-09-14T08:02:28.157Z · LW(p) · GW(p)

What less controversial example should have been used instead?

comment by Relsqui · 2010-09-14T04:46:28.315Z · LW(p) · GW(p)

I've been lurking and reading for a few days--interested in a few things, thinking about a few things, but not quite ready to jump in and start participating yet. This comment cracked me up enough to make an account and upvote it.

comment by knb · 2010-09-14T06:12:59.799Z · LW(p) · GW(p)

conservative / liberal / libertarian

No way, I don't buy this one at all. I find that most little kids are essentially naive liberals. We should give poor sick people free medicine! We should stop bad polluters from hurting birds and trees! Conservatism/libertarianism is the contrarian position. Everything has a cost! There are no free lunches! Managerial-technocratic liberals are the meta-contrarians. So what about the costs? We've got 800 of the smartest guys from Yarvard and Oxbridge to do cost-benefit analyses for us!

Of course there are meta-meta-contrarians as well: reactionaries, meta-libertarians (Patri Friedman is a good example of a metalibertarian IMO), anarchists, etc.

It's contrarians all the way down.

Replies from: Yvain, Relsqui, kodos96, Mercy
comment by Scott Alexander (Yvain) · 2010-09-14T19:34:15.153Z · LW(p) · GW(p)

I was thinking more in terms of conservative values like "My country is the best" and "Our enemies are bad people who hate our freedom", but your way makes a lot of sense too.

Although it's worth noting that all of the things you say are obvious even to little kids are things no one had even thought of a hundred years ago. Rachel Carson and Silent Spring are remembered as iconic because they kick-started an environmentalist movement that just didn't really exist before the second half of the 20th century (although Thoreau and people like that get honorable mention). The idea of rich people paying to give poor sick people free medicine would have gotten you laughed out of most socially stratified civilizations on the wrong side of about 1850.

But I don't want to get too bogged down in which side is more contrarian, because it sounds too close to arguing whether liberalism or conservativism is better, which of course would be a terribly low status thing to do on a site like this :)

I think it was probably a mistake to include such large-scale politics on there at all. Whether a political position seems natural or contrarian depends on what social context someone's in, what age they are, and what the particular issue involved is.

What about this: moderately smart teenagers become extreme liberals to be contrary to the conservative ideals of their elders; excessively smart teenagers become extreme libertarians to be contrary to moderately smart teenagers and their elders, and older people become conservative (or moderate liberals) to signal they're not teenagers :)

comment by Relsqui · 2010-09-14T09:15:19.426Z · LW(p) · GW(p)

I think you're right about the chronological sequence of kids as "naive liberals" to adults as conservative (more so than the kids, anyway), but not about the rationale. Positioning oneself on the contrarian hierarchy is about showing off that your intellect is greater than the people below you on it. It's the rare adult who feels a need to explicitly demonstrate their intellectual superiority to children--but the common adult who has a job and pays taxes and actually ever thinks about the cost of things, as opposed to the kids, who don't need to.

In short, adults don't oppose free medicine etc. to be contrary to the position of naive children; they oppose it because they're the ones who'd have to pay for it.

comment by kodos96 · 2010-09-14T18:49:43.709Z · LW(p) · GW(p)

I think the takeaway from this is just that classification of phenomena into these triads is a very subjective business. That's not necessarily a bad thing, since the point of this (if I'm reading Yvain correctly) is not to determine the correctness of a position by its position in a triad, but simply to encourage people to notice when their own thinking is motivated by a desire to climb the triad, rather than pursue truth, and to be skeptical of yourself when you detect yourself trying to triad-climb.

comment by Mercy · 2010-09-14T11:40:40.580Z · LW(p) · GW(p)

Ah, thanks, that position makes more sense to me now: you mean what most people call social democracy, not liberalism as it is understood outside the US? Because at least in Britain, libertarians align with liberals/conservatives against socialists and social democrats.

But to be honest, they are a good example of a flaw in the setup, which is that people tend to define themselves against imaginary enemies that believe everything they do only backwards, rather than naively dispute everything their enemy says. So libertarians are more likely to complain about "statists" than to come out in favour of taxes or wars just because socialists are against them.

comment by Spurlock · 2010-09-13T22:19:46.822Z · LW(p) · GW(p)

I think it's worth noting explicitly (though you certainly noted it implicitly) that meta-contrarianism does not simply agree with the original, non-contrarian opinion. Meta-contrarianism usually goes to great lengths to signal that it is indeed the level above, and absolutely not the level below, the default position.

An example, from a guy who lives in a local hipster capital:

People not interested in (or just unskilled at) looking cool will mostly buy their clothing at places like Wal-Mart. The "contrarian" cluster differentiates itself by shopping at very expensive, high-status stores (dropping $150 on a pair of jeans, say). Your hipster crowd does not respond to this by returning to Wal-Mart. Instead, they get very distinct retro or otherwise unusual clothing from thrift stores and the like, places that no one who simply, actually didn't care about signaling would ever bother to seek out.

The counter-counter culture often cares just as much about differentiating itself from the culture as it does the counter-culture. The nouveau riche may not have to worry about this, if in their case it comes automatically, but other groups do.

Replies from: evgenit, sark
comment by evgenit · 2010-09-14T01:14:12.314Z · LW(p) · GW(p)

The counter-counter culture often cares just as much about differentiating itself from the culture as it does the counter-culture.

Of course they do, otherwise their signalling would be indistinguishable from the culture's, and thus useless.

comment by sark · 2010-09-14T07:26:30.267Z · LW(p) · GW(p)

The counter-counter culture often cares just as much about differentiating itself from the culture as it does the counter-culture.

There are other dimensions in which the counter-counter people can signal their difference from the non-counters (e.g. hipsters are already living in upper class neighborhoods, have upper class mannerisms etc.). This makes it possible for a simple reversal in the uninformed/contrary/meta-contrary dimension to differentiate them from the counters.

Replies from: Spurlock
comment by Spurlock · 2010-09-14T12:13:49.108Z · LW(p) · GW(p)

Absolutely. This is the sort of thing I was referring to in the last sentence.

The point being, just because they don't seem to go to great pains to distinguish themselves from the non-counters, doesn't mean they're only trying to differentiate from one group: status above both is still the goal, even if they don't have to actively "seek" it.

Replies from: sark
comment by sark · 2010-09-14T14:45:27.499Z · LW(p) · GW(p)

No problem. I was just providing a live example of metacontrarianism ;)

comment by Raw_Power · 2011-07-07T17:35:30.767Z · LW(p) · GW(p)

Could it be that the entire history of philosophy and its "thesis, antithesis, synthesis" recurring structure is an instance of this? Not to mention other liberal arts, and the development of the cycles of fashion.

comment by komponisto · 2010-09-16T03:51:41.930Z · LW(p) · GW(p)

According to the survey, the average IQ on this site is around 145^2

I can't possibly have been the only one to have been amused by this.

(Well, doesn't Clippy claim to be a superintelligence?)

Replies from: Vladimir_M
comment by Vladimir_M · 2010-09-16T04:13:20.840Z · LW(p) · GW(p)

According to the survey, the average IQ on this site is around 145

I can't possibly have been the only one to have been amused by this.

The really disturbing possibility is that average people hanging out here might actually be of the sort that solves IQ tests extremely successfully, with scores over 140, but whose real-life accomplishments are far below what these scores might suggest. In other words, that there might be a selection effect for the sort of people that Scott Adams encountered when he joined Mensa:

I decided to take an I.Q. test administered by Mensa, the organization of geniuses. If you score in the top 2% of people who take that same test, you get to call yourself a “genius” and optionally join the group. I squeaked in and immediately joined so I could hang out with the other geniuses and do genius things. I even volunteered to host some meetings at my apartment.

Then, the horror.

It turns out that the people who join Mensa and attend meetings are, on average, not successful titans of industry. They are instead – and I say this with great affection – huge losers. I was making $735 per month and I was like frickin’ Goldfinger in this crowd. We had a guy who was some sort of poet who hoped to one day start “writing some of them down.” We had people who were literally too smart to hold a job. The rest of the group dressed too much like street people to ever get past security for a job interview. And everyone was always available for meetings on weekend nights.

Replies from: komponisto, NancyLebovitz, Relsqui, BillyOblivion
comment by komponisto · 2010-09-16T04:23:56.259Z · LW(p) · GW(p)

I should clarify that I was specifically referring to the interesting placement of that superscript 2. :-)

EDIT: Though actually, this is probably the perfect opportunity to wonder if the reason people join this community is that it's the easiest high-IQ group in the world to join: you don't have to pass a test or earn a degree; all you have to do is write intelligent blog comments.

Replies from: Vladimir_M, Relsqui
comment by Vladimir_M · 2010-09-16T04:31:25.661Z · LW(p) · GW(p)

Oh, then it was a misunderstanding. I thought you were (like me) amused by the poll result suggesting that the intelligence of the average person here is in the 99.865th percentile.

(Just to get the feel for that number, belonging to the same percentile of income distribution in the U.S. would mean roughly a million dollars a year.)
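
The percentile arithmetic here checks out; a quick sketch using Python's standard library, assuming the conventional mean-100, SD-15 IQ scale:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)   # conventional IQ scale
print(iq.cdf(145))                  # 0.99865...: IQ 145 sits at the 99.865th percentile
```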

Replies from: blogospheroid, Relsqui, komponisto, faul_sname
comment by blogospheroid · 2010-09-27T11:06:10.800Z · LW(p) · GW(p)

Hmm.. Isn't the intelligence distribution more like a bell curve and the distribution of income more like a power law?

Replies from: BillyOblivion
comment by BillyOblivion · 2010-10-05T04:58:09.737Z · LW(p) · GW(p)

Both can be power-law or Gaussian depending on your "perspective".

There are roughly as many people with an IQ over 190 as there are people with an income over 1 billion USD per annum. By roughly I mean within an order of magnitude.

Generally IQ is graphed as a Gaussian distribution because of the way it's measured--the middle of the distribution is defined as 100. Income is raw numbers.

(edited to move a scare quote)

comment by Relsqui · 2010-09-16T04:42:44.100Z · LW(p) · GW(p)

Upvoted for the quality of the analogy, although I also agree with you.

comment by komponisto · 2010-09-16T04:34:15.993Z · LW(p) · GW(p)

Well I'm also amused by that, to be sure.

comment by faul_sname · 2012-11-16T23:09:12.753Z · LW(p) · GW(p)

And since the correlation between the two is about 0.4, that would suggest an income of 1.2 standard deviations above the mean, or about $80,000 a year in the US, not controlling for age. Controlling for age, I suspect LWers have approximately average income for their level of intelligence (and because regression to the mean is not intuitive, it feels like we should be doing far better than that).
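
The arithmetic behind this estimate can be made explicit; a minimal sketch, assuming the 0.4 correlation and jointly normal standardized variables (both are the comment's assumptions, not established figures):

```python
from statistics import NormalDist

r = 0.4                  # assumed IQ-income correlation (from the comment)
z_iq = (145 - 100) / 15  # IQ 145 is 3 standard deviations above the mean

z_income = r * z_iq      # regression to the mean: the predicted z-score shrinks by r
print(z_income)                    # 1.2 standard deviations above mean income
print(NormalDist().cdf(z_income))  # ~0.885: roughly the 88th income percentile
```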

comment by Relsqui · 2010-09-16T04:48:04.672Z · LW(p) · GW(p)

the reason people join this community is that it's probably the easiest high-IQ group to join in the world

I find this sort of puzzling. There is clearly a demand for organizations which provide opportunities to interact and socialize with people carefully selected for their ability to solve clever puzzles (and whatever else is on the IQ test--I haven't taken a real one). Why is that? Does anybody here specifically seek out high-IQ friends? Do you feel like trying to explain the appeal to me? Intelligence is one of my criteria for my companions, to be sure, but I'm not sure it's in the top three, and I certainly wouldn't settle for it alone.

Also, I'm not sure that earning a degree is harder than writing an intelligent blog post. Not for everyone, anyway.

Replies from: komponisto, cata
comment by komponisto · 2010-09-16T05:26:10.686Z · LW(p) · GW(p)

There is clearly a demand for organizations which provide opportunities to interact and socialize with people carefully selected for their ability to solve clever puzzles (and whatever else is on the IQ test--I haven't taken a real one)

That's not the sense of IQ that I mean; rather, I mean the underlying thing which that ability is supposed to be an indicator of.

(My guess would be that this underlying thing is probably something like "richness of mental life".)

Does anybody here specifically seek out high-IQ friends? Do you feel like trying to explain the appeal to me?

My experience suggests that it makes a significant difference to one's quality of life whether the people in one's social circle are close to one's own intelligence level.

Not too long ago I spent some time at the SIAI house; and even though I was probably doing more "work" than usual while I was there, it felt like vacation, simply because the everyday task of communicating with people was so much easier and more efficient than in my normal life.

Replies from: Relsqui
comment by Relsqui · 2010-09-16T06:11:57.074Z · LW(p) · GW(p)

That's not the sense of IQ that I mean; rather, I mean the underlying thing which that ability is supposed to be an indicator of.

See my response to cata.

My experience suggests that it makes a significant difference to one's quality of life whether the people in one's social circle are close to one's own intelligence level.

I suppose it's possible that I'm merely spoiled in this regard, but I'm not sure. Yes, most of the people I've spent a lot of time with in my life have been some kind of intelligent--my parents are very smart, and I was taught to value intellect highly growing up. But some of the folks who've really made me glad to have them around have been less educated and less well-read than I am, which isn't trivial (I'm a high school dropout, albeit one who likes to do some learning on her own time).

I'm thinking particularly of my coworkers at my last job. We worked behind the counter at a dry cleaner. These were not people with college educations, or who had learned much about critical thinking or logic or debate. This is not to say they had below-average intelligence--just not particularly higher, either. They were confused as to why I was working this dead-end job with them instead of going to college and making something of myself; I was clearly capable of it.

But those people made the job worthwhile. They were thoughtful, respectful, often funny, and supportive. They were good at their jobs--on a busy day, it felt like being part of a well-oiled machine. There isn't one quality in that list you could have traded for outstanding intelligence and made them better people, nor made me happier to be around them.

If your point is right, maybe all that means is that my brain is nothing to write home about. But I'm fonder of the theory that there are other qualities that have at least as much value in terms of quality of life. Would you be happy living in a house of smart people who were all jerks?

Replies from: komponisto
comment by komponisto · 2010-09-16T16:27:43.980Z · LW(p) · GW(p)

Would you be happy living in a house of smart people who were all jerks?

Of course not. What caused your probability of my saying "yes" to be high enough to make this question worth asking?

I could with more genuine curiosity ask you the following: would you be happy spending your life surrounded by nice people who understood maybe 20% of your thoughts?

Replies from: Relsqui
comment by Relsqui · 2010-09-16T20:39:27.249Z · LW(p) · GW(p)

What caused your probability of my saying "yes" to be high enough to make this question worth asking?

It was rhetorical, and meant to support the point that intelligence alone does not make a person worthwhile.

Would you be happy spending your life surrounded by nice people who understood maybe 20% of your thoughts?

I'd rather have more kindness and less intelligence than the reverse. I think it's clear we'd both prefer a balance, though, and that's really all my point was: intelligence is not enough to qualify a person as worthwhile. Which is why social groups with that as the only criterion confuse me. :)

Replies from: None
comment by [deleted] · 2010-09-17T02:47:43.981Z · LW(p) · GW(p)

Here I go, speaking for other people, but I'm guessing that people at the LessWrong meetup at least met some baseline of all those other qualities, by komponisto's estimation, and that the difference in intelligence allowed for such a massive increase in ability to communicate that talking became much more enjoyable, given that ey was talking to decent people.

Each quality may not be linear. If someone is "half as nice" as another person, I don't want to talk to them at half the frequency, or bet that I'll fully enjoy conversation half of the time. A certain threshold of most qualities makes a person totally not worth talking to. But at the same time, a person can only be so much more thoughtful, respectful, funny, and supportive before you lose your ability to identify with them again! That's my experience anyhow - if I admire a person too much, I have difficulty imagining that they identify with me as I do with them. Trust needs some symmetry. And so there are probably optimal levels of friendship-worthy qualities (very roughly, by any measure), a minimum threshold, and a region where a little difference makes a big difference. The left-bounded S-curves of friendship.
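(A minimal sketch of that shape, with a purely illustrative threshold and steepness; none of these numbers come from anywhere:)

```python
import math

def satisfaction(quality, threshold=2.0, steepness=3.0):
    # Left-bounded S-curve: flat near zero below the threshold, a region
    # around the threshold where a little difference makes a big
    # difference, then diminishing returns far above it.
    return 1.0 / (1.0 + math.exp(-steepness * (quality - threshold)))

for q in (0.0, 1.5, 2.0, 2.5, 4.0):
    print(f"quality={q:.1f} -> satisfaction={satisfaction(q):.2f}")
```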

Then there is ordering. For different qualities, the difference between a person at the minimum threshold and at the optimum is worth very different amounts of satisfaction to you. Some qualities probably have a threshold so low you don't think about it--not having inexplicable compulsions to murder is a big plus on my list. When that's the case, the quality seems to vary so slightly over most people that you almost take it for granted that people have enough of it. The more often you meet people at the minimum, the more amazing it will seem to meet someone at the optimum. If you spend a long time surrounded by jerks, meeting supportive people is probably more amazing than usual. If you grow up surrounded by supportive people who have no idea what you're talking about half of the time, gaining that ability to communicate is probably worth a lot.

Finally, there's the affect heuristic. If a gain in one personality quality relative to your experienced average is worth a lot, it can of course distort your valuation of differences in the other qualities. If I were trapped all my life in a country whose language could capture only 1% of the ideas mine did, filled with good people who mostly just don't care about those other 99% of ideas, I would still feel greatly relieved to meet someone who spoke my language--even if the person was a little bit below the threshold that marks em a jerk. But why is the person more likely to be a jerk anyhow? What if the person is actually really good and decent as well? I might propose.

I don't know if komponisto had the urge to marry anyone at the meetup. But I'm sure it happens.

Replies from: Relsqui, komponisto
comment by Relsqui · 2010-09-17T03:02:12.735Z · LW(p) · GW(p)

I think this is a really excellent analysis and I agree with just about all of it.

I suspect that the difference in our initial reactions had to do with your premise that intelligent people are easier to communicate with. This hasn't been true in my experience, but I'd bet that the difference is the topics of conversation. If you want to talk to people about AI, someone with more education and intellect is going to suit you better than someone with less, even if they're also really nice.

I've definitely also had conversations where the guy in the room who was the most confused and having the least fun was the one with the most book smarts. I'm trying to remember what they were about ... off the top of my head, I think it tended to be social situations or issues which he had not encountered. Empathy would have done him more good than education in that instance (given that his education was not in the social sciences).

Replies from: None, wedrifid
comment by [deleted] · 2010-09-18T18:21:31.937Z · LW(p) · GW(p)

Your suspicion rings true. Having more intelligence won't make you more enjoyable to talk to on a subject you don't care about! It also may not make a difference if the topic is simple to understand, but still feels worth talking about (personal conversations on all sorts of things).

Education isn't the same as intelligence of course. Intelligence will help you gain and retain an education faster, through books or conversation, in anything that interests you.

Most of my high school friends were extremely intelligent, and mostly applied themselves to art and writing. A few mostly applied themselves to programming and Tesla coils. I think a common characteristic they shared was genuine curiosity about exploring new domains, and they could enjoy conversations with people of many different interests. The same was true for most of my college friends. I would say I selected for good, intelligent people with unusually broad interests.

I still care a great deal for my specialist friends, and friends of varying intelligence. It's easy for me to enjoy a conversation with almost anyone genuinely interested in communicating, because I'll probably share the person's interest to some degree.

Roughly, curiosity overlap lays the ground for topical conversation, education determines the launching point on a topic, and intelligence determines the speed.

comment by wedrifid · 2010-09-17T06:14:17.311Z · LW(p) · GW(p)

I've definitely also had conversations where the guy in the room who was the most confused and having the least fun was the one with the most book smarts.

Isn't that what you would expect for most conversations, when all else is equal? This is an effect I would expect in general, and I attribute it to both self-selection and causation.

Replies from: Relsqui
comment by Relsqui · 2010-09-17T06:23:26.242Z · LW(p) · GW(p)

I've definitely also had conversations where the guy in the room who was the most confused and having the least fun was the one with the most book smarts.

Isn't that what you would expect for most conversations, when all else is equal?

... well, it isn't what I do expect, so I guess I wouldn't. The thought never crossed my mind, so I don't really have anything more insightful to say about it yet. Let me chew on it.

I suspect that I mostly socialize with people I consider equals.

comment by komponisto · 2010-09-17T03:58:16.483Z · LW(p) · GW(p)

I'm guessing that people at the LessWrong meetup at least met some baseline of all those other qualities

Actually, I was talking about my two-week stay as an SIAI Visiting Fellow. (Which is kind of like a Less Wrong meetup...)

But, yeah.

Replies from: wedrifid, None
comment by wedrifid · 2010-09-17T06:03:16.605Z · LW(p) · GW(p)

I'm quite curious about what benefits you experienced from your two week visit... anything you can share or is it all secret and mysterious?

Not that I am considering applying. If I was I would have had to refrain from telling Eliezer (and probably Alicorn) whenever they are being silly. The freedom to speak one's mind without the need for securing approval is just too attractive to pass up! :)

Replies from: LucasSloan, komponisto, None
comment by LucasSloan · 2010-09-17T06:58:38.331Z · LW(p) · GW(p)

Not that I am considering applying. If I was I would have had to refrain from telling Eliezer (and probably Alicorn) whenever they are being silly. The freedom to speak one's mind without the need for securing approval is just too attractive to pass up! :)

Neither of these should stop you. Alicorn lives on the other side of the country from the house, and Eliezer is pretty lax about criticism (and isn't around much, anyway).

Replies from: wedrifid
comment by wedrifid · 2010-09-17T07:08:37.564Z · LW(p) · GW(p)

Oh, there's the thing with being on the other side of the world too. ;)

Replies from: LucasSloan
comment by LucasSloan · 2010-09-17T07:09:32.784Z · LW(p) · GW(p)

They pay for airfare, you know...

Replies from: wedrifid
comment by wedrifid · 2010-09-17T07:13:03.257Z · LW(p) · GW(p)

Damn you and your shooting down all my excuses! ;)

Not that I'd let them pay for my airfare anyway. I would only do it if I could pay them for the experience.

Replies from: randallsquared
comment by randallsquared · 2010-09-27T17:54:10.961Z · LW(p) · GW(p)

Damn [LucasSloan] and your shooting down all my excuses! ;)

Fortunately, you appear to be able to rationalize more excuses quite easily. ;)

comment by komponisto · 2010-09-17T18:27:25.896Z · LW(p) · GW(p)

I'm quite curious about what benefits you experienced from your two week visit... anything you can share or is it all secret and mysterious?

Perhaps the most publicly noticeable result was that I had the opportunity to write this post (and also this wiki entry) in an environment where writing Less Wrong posts was socially reinforced as a worthwhile use of one's time.

Then, of course, there are the benefits discussed above -- those that one would automatically get from spending time living in a high-IQ environment. In some ways, in fact, it was indeed like a two-week-long Less Wrong meetup.

I had the opportunity to learn specific information about subjects relating to artificial intelligence and existential risk (and the beliefs of certain people about these subjects), which resulted in some updating of my beliefs about these subjects; as well as the opportunity to participate in rationality training exercises.

It was also nice to become personally acquainted with some of the "important people" on LW, such as Anna Salamon, Kaj Sotala, Nick Tarleton, Mike Blume, and Alicorn (who did indeed go by that name around SIAI!); as well as a number of other folks at SIAI who do very important work but don't post as much here.

Conversations were frequent and very stimulating. (Kaj Sotala wasn't lying about Michael Vassar.)

As a result of having done this, I am now "in the network", which will tend to facilitate any specific contributions to existential risk reduction that I might be able to make apart from my basic strategy of "become as high-status/high-value as possible in the field(s) I most enjoy working in, and transfer some of that value via money to existential risk reduction".

Not that I am considering applying. If I was I would have had to refrain from telling Eliezer (and probably Alicorn) whenever they are being silly.

Eliezer is uninvolved with the Visiting Fellows program, and I doubt he even had any idea that I was there. Nor is Alicorn currently there, as I understand.

comment by [deleted] · 2010-09-17T06:20:18.367Z · LW(p) · GW(p)

I hear that the secret to being a fellow is to show rigorously that the probability that one of them is being silly is greater than 1/2. Just a silly math test.

comment by [deleted] · 2010-09-17T06:13:45.296Z · LW(p) · GW(p)

Ah, you lucky fellow!

comment by cata · 2010-09-16T05:02:48.457Z · LW(p) · GW(p)

There is clearly a demand for organizations which provide opportunities to interact and socialize with people carefully selected for their ability to solve clever puzzles (and whatever else is on the IQ test--I haven't taken a real one).

Really? I don't think that's true; I think people just tend to assume that IQ is a good proxy for general intellectualism (e.g. highbrow tastes, willingness to talk and debate a lot, being well-read.) Since it's easier to score an IQ test than a test judging political literacy, education, and favorite novels, that's what organizations like Mensa use, and that's the measuring stick everyone trots out. Needless to say, it's not a very good one, but it's made its way into the culture.

I mean, even in casual usage, when most people talk about someone's high IQ, they probably aren't talking about focus, memory, or pattern recognition. They're likely actually talking about education and interests.

Replies from: Relsqui
comment by Relsqui · 2010-09-16T05:54:47.411Z · LW(p) · GW(p)

I mean, even in casual usage, when most people talk about someone's high IQ, they probably aren't talking about focus, memory, or pattern recognition. They're likely actually talking about education and interests.

That's precisely what troubles me. I don't like that we use a term which actually only means the former to refer to how "smart" someone is in a vague, visceral sense--nor the implied equation of either IQ or smartness with utility.

I'm not necessarily accusing you of that; it's just a pattern I see in the world and fret about. Actually, it reminds me of something which might make a good article in its own right; I'll ruminate on it for a bit while I'm still getting used to article etiquette.

Replies from: None, cata
comment by [deleted] · 2010-09-17T03:11:26.223Z · LW(p) · GW(p)

I definitely agree on this. It's an abused and conflated word, though I don't know if that's more of a cause than an effect of problems society has with thinking about intelligence. I wonder how we could best get people to casually use a wider array of words and associations to distinguish the many different things we mean by "smart".

Replies from: Relsqui
comment by Relsqui · 2010-09-17T03:24:09.229Z · LW(p) · GW(p)

I don't know if that's more of a cause than an effect of problems society has

You've hit an important point here, and not just about the topic in question. Consider body image (we want to see people on TV we think are pretty, but we get our ideas of what's pretty in part from TV) and media violence (we want to depict the world as it really is, but we also want to impart values that will change the world for the better rather than glorifying people and events which change it for the worse). How, in general, do we break these loops?

I wonder how we could best get people to casually use a wider array of words and associations to distinguish the many different things we mean by "smart".

So far, I haven't thought of anything better than choosing to be precise when I'm talking about somebody's talents and weaknesses, so I try to do that.

comment by cata · 2010-09-16T06:09:17.684Z · LW(p) · GW(p)

I don't like that we use a term which actually only means the former to refer to how "smart" someone is in vague, visceral sense--nor the implied equation of either IQ or smartness with utility.

Well, me neither; I think it's a reflection of how people would like to imagine other humans as being much simpler and more homogeneous than they actually are. I look forward to your forthcoming post.

Replies from: Relsqui
comment by Relsqui · 2010-09-16T06:16:14.727Z · LW(p) · GW(p)

Well, me neither

That's reassuring. :)

I look forward to your forthcoming post.

Me too. I don't have a post's worth of idea yet. But there's cud yet to chew. (Ruminate has one of my favorite etymologies.)

comment by NancyLebovitz · 2010-09-17T18:33:17.831Z · LW(p) · GW(p)

This surprises me. One explanation for the mismatch between my experience with Mensa and Adams' is that local groups vary a lot. Another is that he's making up a bunch of insults based on a cliche.

What I've seen of Mensa is people who seemed socially ordinary (bear in mind, my reference group is sf fandom), but not as intelligent as I hoped. I went to a couple of gatherings-- one had a pretty ordinary discussion of Star Trek. Another was basically all right, but had one annoying person who'd been in the group so long that the other members didn't notice how annoying he was-- hardly a problem unique to Mensa.

Kate Jones, President of Kadon Games, is a Mensan and one of the more intelligent people I know. I know one other Mensan I consider intelligent, and there's no reason to think I have a complete list of the Mensans in my social circle.

I was in Mensa for a while-- I hoped it would be useful for networking, but I didn't get any good out of it. The publications were generally underwhelming-- there were a lot of articles that would start with more or less arbitrary definitions of words and then try to build an argument from those definitions. This was in the 80s, and I don't know whether the organization has changed.

Still, if I'd lived in a small town with no access to sf fandom, Mensa might have been the best available choice for me.

These days, I'd say there are a lot of online communities for smart people.

All this being said, I suspect that IQ tests and the like select for people with mild ADD (look! another question! no need to stay focused on a project!) and against people who want to do things which are directly connected to their goals.

Replies from: Vladimir_M, komponisto
comment by Vladimir_M · 2010-09-17T19:09:58.254Z · LW(p) · GW(p)

I'd say that the problem is the selection effect for intelligent underachievers. People who are in the top 2% of the population by some widely recognized measure of intellectual accomplishment presumably already have affiliations, titles, and positions far more prestigious than the membership in an organization where the only qualification is passing a written test could ever be. Also, their everyday social circles are likely to consist of other individuals of the same caliber, so they have no need to seek them out actively.

Therefore, in an organization like Mensa, I would expect a strong selection effect for people who have the ability to achieve high IQ scores (whatever that might specifically imply, considering the controversies in IQ research), but who lack other abilities necessary to translate that into actual accomplishment and acquire recognition and connections among high-achieving people. Needless to say, such people are unlikely to end up as high-status individuals in our culture (or any other, for that matter). People of the sort you mention, smart enough to have flashes of extraordinary insight but unable to stay focused long enough to get anything done, likely account for some non-trivial subset of those.

That said, in such a decentralized organization, I would expect that the quality of local chapters and the sort of people they attract depends greatly on the ability and attitudes of the local leadership. There are probably places both significantly better and worse than what you describe.

comment by komponisto · 2010-09-17T20:03:19.641Z · LW(p) · GW(p)

I suspect that IQ tests and the like select for people with mild ADD

I'm not sure about this. I doubt I would do all that well on a Mensa-type IQ test, and I suspect ADD may be part of the reason. (Though SarahC has raised the possibility of motivated cognition interfering with mathematical problem solving, which I hadn't really considered.)

and against people who want to do things which are directly connected to their goals.

This, however, I do believe.

Despite Richard Feynman's supposedly low IQ score, and Albert Einstein's status as the popular exemplar of high IQ, my impression (prejudice?) regarding traditional "IQ tests" is that they would in fact tend to select for people like Feynman (clever tinkerers) at the expense of people like Einstein (imaginative ponderers).

Replies from: gwern, NancyLebovitz
comment by gwern · 2011-11-26T10:19:24.426Z · LW(p) · GW(p)

Despite Richard Feynman's supposedly low IQ score

While I'm passing through looking for something else: http://news.ycombinator.com/item?id=1159719

comment by NancyLebovitz · 2010-09-18T03:22:27.954Z · LW(p) · GW(p)

I was generalizing from one example-- it's easier for me to focus on a series of little problems. If I have ADD, it's quite mild as such things go.

comment by Relsqui · 2010-09-16T04:29:20.184Z · LW(p) · GW(p)

That's fairly analogous to my worries about joining LW. I was afraid it would be full of extremely intelligent, very dumb people. ;)

Replies from: Raw_Power
comment by Raw_Power · 2011-07-07T17:44:57.626Z · LW(p) · GW(p)

How do you know this isn't the case?

comment by BillyOblivion · 2010-10-05T04:51:18.479Z · LW(p) · GW(p)

Intelligence is but one measure of mental ability. One of the critical ones for modern life goes by "Executive Function" (http://en.wikipedia.org/wiki/Executive_functions); it seems to be moderately independent of IQ. It could also be called "Self Discipline".

It is why really bright kids get lousy grades, and why kids who do well in high school without ever seeming to study tank when they hit college, or when they get out of college and actually have to show up for work clean, neat, and on time.

I don't CARE if you can solve a Rubik's Cube in 38 seconds, I need those TPS reports NOW.

Replies from: wedrifid
comment by wedrifid · 2010-10-05T05:24:59.364Z · LW(p) · GW(p)

One of the critical ones for modern life goes by "Executive Function" (http://en.wikipedia.org/wiki/Executive_functions); it seems to be moderately independent of IQ. It could also be called "Self Discipline".

It's correlated with self discipline but it is actually a different ability. In fact, some with problems with executive function compensate by developing excessive self discipline. (Having a #@$%ed up system for dealing with prioritisation makes anxiety based perfectionism more adaptive.)

comment by DSimon · 2010-09-13T22:38:47.815Z · LW(p) · GW(p)

herbal-spiritual-alternative medicine / conventional medicine / Robin Hanson

Can you link to a Robin Hanson article on this topic so that people who aren't already familiar with his opinions on this subject (read: LW newbies like me) know what this is about?

Or alternately, I propose this sequence:

regular medical care by default / alt-med / regular medical care because alt-med is unscientific

Replies from: Will_Newsome
comment by Will_Newsome · 2010-09-13T23:29:58.795Z · LW(p) · GW(p)

regular medical care by default / alt-med / regular medical care because alt-med is unscientific

This is more in line with the other examples. I second the request for an edit. Yvain, you could add "Robin Hanson" to the fourth slot: it would kinda mess up your triplets, but with the justification that it'd be a funny example of just how awesomely contrarian Robin Hanson is. :D

Also, Yvain, you happen to list what people here would deem more-or-less correct contrarian clusters in your triplet examples. But I have no idea how often the meta-level contrarian position is actually correct, and I fear that I might get too much of a kick out of the positions you list in your triplets simply because my position is more meta and I associate metaness with truth when in reality it might be negatively correlated. Perhaps you could think of a few more-wrong meta-contrarian positions to balance what may be a small affective bias?

Replies from: FAWS
comment by FAWS · 2010-09-14T00:17:06.128Z · LW(p) · GW(p)

Also, Yvain, you happen to list what people here would deem more-or-less correct contrarian clusters in your triplet examples.

Huh? In all of those examples the unmentioned fourth level is correct and the second and third level both about equally useless.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-09-14T00:39:51.423Z · LW(p) · GW(p)

Half-agree with you, as none of the 18 positions are 'correct', but I don't know what you mean by 'useless'. Instead of generalizing I'll list my personal positions:

  • KKK-style racist / politically correct liberal / "but there are scientifically proven genetic differences"

If I failed to notice that there are scientifically proven genetic differences I would be missing a far more important part of reality (evolutionary psychology and the huge effects of evolution in the last 20,000 years) than if I failed to notice that being a bigot was bad and impeded moral progress. That said, if most people took this position, it'd result in a horrible tragedy of the commons situation, which is why most social scientists cooperate on the 'let's not promote racism' dilemma. I'm not a social scientist so I get to defect and study some of the more interesting aspects of human evolutionary biology.

  • misogyny / women's rights movement / men's rights movement

No opinion. Women seem to be doing perfectly fine. Men seem to get screwed over by divorce laws and the like. Tentatively agree more with third level but hey, I'm pretty ignorant here.

  • conservative / liberal / libertarian

What can I say, it's politics. Libertarians in charge would mean more drugs and ethically questionable experiments of the sort I promote, as well as a lot more focus on the risks and benefits of technology. Since the Singularity trumps everything else policy-wise I have to root for the libertarian team here, even if I find them obnoxiously pretentious. (ETA: Actually, maybe more libertarians would just make it more likely that the 'Yeah yeah Singularity AI transhumanism wooooo!' meme would get bigger which would increase existential risk. So uh... never mind, I dunno.)

  • herbal-spiritual-alternative medicine / conventional medicine / Robin Hanson

Too ignorant to comment. My oxycodone and antibiotics sure did me good when I got an infection a week ago. My dermatologist drugs didn't help much with my acne. I've gotten a few small surgeries which made me better. Overall, conventional medicine seems to have helped me a fair bit and costs me little. I don't even know what Robin Hanson's claims are, though. A link would be great.

  • don't care about Africa / give aid to Africa / don't give aid to Africa

Okay, anyone who cares about helping people in Africa and can multiply should be giving their money to x-risk charities. Because saving the world also includes saving Africa. Therefore position 3 is essentially correct, but maybe it's really position 4 (give aid to Earth) that's the correct one, I dunno.

  • Obama is Muslim / Obama is obviously not Muslim, you idiot / Patri Friedman5

Um, Patri was just being silly. Obama is obviously not a Muslim in any meaningful sense.

In conclusion, I think that there isn't any real trend here, but maybe we're just disputing ways of carving up usefulness? It is subjective after all.

Added: Explanations for downvotes are always welcome. Lately I've decided to try less to appear impressive and consistently rational (like Carl Shulman) and try more to throw lots of ideas around for critique, criticism, and development (like Michael Vassar). So although downvotes are useful indicators of where I might have gone wrong, a quick explanatory comment is even more useful and very unlikely to be responded to with indignation or hostility.

Replies from: FAWS, Eliezer_Yudkowsky, NancyLebovitz, multifoliaterose, waveman, Relsqui
comment by FAWS · 2010-09-14T02:52:24.049Z · LW(p) · GW(p)

My comment was largely tongue in cheek, but:

  • KKK-style racist / politically correct liberal / "but there are scientifically proven genetic differences"

If I failed to notice that there are scientifically proven genetic differences I would be missing a far more important part of reality (evolutionary psychology and the huge effects of evolution in the last 20,000 years) than if I failed to notice that being a bigot was bad and impeded moral progress. That said, if most people took this position, it'd result in a horrible tragedy of the commons situation, which is why most social scientists cooperate on the 'let's not promote racism' dilemma. I'm not a social scientist so I get to defect and study some of the more interesting aspects of human evolutionary biology.

Awareness of genetic differences between races constitutes negative knowledge in many cases; that is, it leads to anticipations that match outcomes worse than they otherwise would. Suppose everyone suspects that blue-haired people are slightly less intelligent on average for genetic reasons, you want to hire the most intelligent person for a job, and after a very long selection process (that other people were involved in) you are left with two otherwise equally good candidates, one blue-haired and one not. The egoistically rational thing is not to pick the non-blue-haired person on account of that genetic difference. The other evidence on their intelligence is not independent of the genetic factors that correlate with blue hair, so any such genetic disadvantage is already figured in. If anything, you should pick the blue-haired person, because extreme sample selection bias is likely, and any blue-haired person still left at the end of the selection process needed to be very intelligent to still be in the race. (So no, this isn't a tragedy of the commons situation.)

It's pretty much never going to be the case that the blue hair is your best information on someone's intelligence; even their clothes or style of speech should usually be a better source.
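(To make the screening-off point concrete, here's a minimal simulation sketch; the gap, noise, and cutoff numbers are illustrative assumptions only, not estimates of anything:)

```python
import random

random.seed(0)

def survivor_means(n=200_000, gap=0.3, rounds=5, noise=1.0, cutoff=1.0):
    """Mean true ability of candidates who survive a noisy selection
    process, per group. The blue-haired group's prior mean is lower by
    `gap`; survival requires all `rounds` assessments to clear `cutoff`."""
    means = {}
    for group, mu in (("blue", -gap), ("other", 0.0)):
        kept = [a for a in (random.gauss(mu, 1.0) for _ in range(n))
                if all(a + random.gauss(0.0, noise) > cutoff
                       for _ in range(rounds))]
        means[group] = sum(kept) / len(kept)
    return means

print(survivor_means())
# The surviving groups' mean abilities land much closer together than the
# raw 0.3 prior gap: clearing the same process screens off most of what
# hair color could have told you about the candidates still in the race.
```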

Even for groups, "genetic differences" can be pretty misleading: tallness is a strongly heritable trait, and nevertheless differences in tallness can easily be dominated by environmental factors.

  • misogyny / women's rights movement / men's rights movement

No opinion. Women seem to be doing perfectly fine. Men seem to get screwed over by divorce laws and the like. Tentatively agree more with third level but hey, I'm pretty ignorant here.

Depends on what is meant by women's and men's rights movements, really. The fact that men are treated unfairly on some issues does not mean that we have overshot in treating women fairly; weighing these off against each other is not productive, and everyone should be treated fairly irrespective of gender and other factors. But since unfair treatment due to gender still exists, tracking how treatment varies by gender may still be necessary--though differences in outcome don't automatically imply unfairness, only that unfairness is a hypothesis that deserves to be considered.

  • conservative / liberal / libertarian

What can I say, it's politics. Libertarians in charge would mean more drugs and ethically questionable experiments of the sort I promote, as well as a lot more focus on the risks and benefits of technology. Since the Singularity trumps everything else policy-wise I have to root for the libertarian team here, even if I find them obnoxiously pretentious. (ETA: Actually, maybe more libertarians would just make it more likely that the 'Yeah yeah Singularity AI transhumanism wooooo!' meme would get bigger which would increase existential risk. So uh... never mind, I dunno.)

(I'm not mentioning the tragedy of the commons, since non-crazy libertarians usually agree that some level of government is necessary for those cases.) Government competence vs. private-sector competence is a function of organization size, productive selective pressures, culture, etc., and even though the private sector has some natural advantages, it doesn't dominate universally, particularly where functioning markets are difficult to set up (e.g. high-speed railway lines). Regulation may be necessary to break out of some Nash equilibria, and to overcome momentum in some cases (e.g. thermal insulation in building codes, though there should be ways to receive exemptions when sensible). I also don't see some level of wealth redistribution as inherently evil.

  • herbal-spiritual-alternative medicine / conventional medicine / Robin Hanson

Too ignorant to comment. My oxycodone and antibiotics sure did me good when I got an infection a week ago. My dermatologist drugs didn't help much with my acne. I've gotten a few small surgeries which made me better. Overall, conventional medicine seems to have helped me a fair bit and costs me little. I don't even know what Robin Hanson's claims are, though. A link would be great.

http://hanson.gmu.edu/EC496/Sources/sources.html

Basically: no evidence that marginal health spending improves health, and some evidence against; cut US health spending in half. IMO the most sensible approach would be single-payer universal health care for everything that is known to be effective, plus allowing people to purchase anything safe beyond that.

  • don't care about Africa / give aid to Africa / don't give aid to Africa

Okay, anyone who cares about helping people in Africa and can multiply should be giving their money to x-risk charities. Because saving the world also includes saving Africa. Therefore position 3 is essentially correct, but maybe it's really position 4 (give aid to Earth) that's the correct one, I dunno.

I understood "don't give aid to Africa" as "don't give aid to Africa because it's counterproductive", which depends on the type of giving, so I would read your position as a position 4.

  • Obama is Muslim / Obama is obviously not Muslim, you idiot / Patri Friedman5

Um, Patri was just being silly. Obama is obviously not a Muslim in any meaningful sense.

Ok, useless is the wrong word here for position 2, but position 4 would be that it shouldn't even matter whether he is a Muslim, because there is nothing wrong with being a Muslim in the first place (other than being a theist).

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-09-14T04:35:49.633Z · LW(p) · GW(p)

Okay, anyone who cares about helping people in Africa and can multiply should be giving their money to x-risk charities. Because saving the world also includes saving Africa.

But... but... but saving the world doesn't signal the same affiliations as saving Africa!

Replies from: Larks
comment by Larks · 2010-09-14T07:10:45.832Z · LW(p) · GW(p)

On LW, it signals better affiliations!

comment by NancyLebovitz · 2010-09-14T06:36:15.929Z · LW(p) · GW(p)

My impression is that Hanson's take on conventional medicine is that half the money spent is wasted. However, I don't know if he's been very specific about which half.

Replies from: Larks
comment by Larks · 2010-09-14T07:09:55.796Z · LW(p) · GW(p)

The RAND Health Insurance Experiment, the study he frequently cites, didn't investigate the benefits of catastrophic medical insurance or of care people pay for out of their own pockets, and found the rest useless.

comment by multifoliaterose · 2010-09-14T01:19:38.139Z · LW(p) · GW(p)

Okay, anyone who cares about helping people in Africa and can multiply should be giving their money to x-risk charities. Because saving the world also includes saving Africa. Therefore position 3 is essentially correct, but maybe it's really position 4 (give aid to Earth) that's the correct one, I dunno.

Why is giving money to x-risk charities conducive to saving the world? (I don't necessarily disagree, but want to see what you have to say to substantiate your claim.) In particular, what's your response to Holden's comment #12 at the GiveWell Singularity Summit thread ?

Replies from: Will_Newsome, wedrifid
comment by Will_Newsome · 2010-09-14T01:44:01.998Z · LW(p) · GW(p)

Sorry, I didn't mean to assume the conclusion. Rather than do a disservice to the arguments with a hastily written reply, I'm going to cop out of the responsibility of providing a rigorous technical analysis and just share some thoughts. From what I've seen of your posts, your arguments were that the current nominally x-risk-reducing organizations (primarily FHI and SIAI) aren't up to snuff when it comes to actually saving the world (in the case of SIAI perhaps even being actively harmful). Despite and because of being involved with SIAI I share some of your misgivings. That said, I personally think that SIAI is net-beneficial for their cause of promoting clear and accurate thinking about the Singularity, and that the PR issues you cite regarding Eliezer will be negligible in 5-10 years when more academics start speaking out publically about Singularity issues, which will only happen if SIAI stays around, gets funding, keeps on writing papers, and promotes the pretty-successful Singularity Summits. Also, I never saw you mention that SIAI is actively working on the research problems of building a Friendly artificial intelligence. Indeed, in a few years, SIAI will have begun the endeavor of building FAI in earnest, after Eliezer writes his book on rationality (which will also likely almost totally outshine any of his previous PR mistakes). It's difficult to hire the very best FAI researchers without money, and SIAI doesn't have money without donations.

Now, perhaps you are skeptical that FAI or even AGI could be developed by a team of the most brilliant AI researchers within the next, say, 20 years. That skepticism is merited, and to be honest I have little (but still a non-trivial amount of knowledge) to go on besides the subjective impressions of those who work on the problem. I do, however, have strong arguments that there is a ticking clock till AGI, with the clock running out before 2050. I can't give those arguments here, and indeed it would be against protocol to do so, as this is Less Wrong and not SIAI's forum (despite it being unfortunately treated as such a few times in the past). Hopefully at some point someone, at SIAI or not, will write up such an analysis: currently Steve Rayhawk and Peter de Blanc of SIAI are doing a literature search that will with luck end up in a paper on the current state of AGI development, or at least some kind of analysis besides "Trust us, we're very rational".

All that said, my impression is that SIAI is doing good of the kind that completely outweighs e.g. aid to Africa if you're using any kind of utilitarian calculus. And if you're not using anything like utilitarian calculus, then why are you giving aid to Africa and not e.g. kittens? FHI also seems to be doing good, academically respectable, and necessary research on a rather limited budget. So if you're going to donate money, I would first vote SIAI, and then FHI, but I can understand the position of "I'm going to hold onto my money until I have a better picture of what's really important and who the big players are." I can't, however, understand the position of those who would give aid to Africa besides assuming some sort of irrationality or ignorance. But I will read over your post on the matter and see if anything there changes my mind.

Replies from: multifoliaterose, timtyler
comment by multifoliaterose · 2010-09-14T02:07:20.404Z · LW(p) · GW(p)

Reasonable response, upvoted :-).

•As I said, I cut my planned sequence of postings on SIAI short. There's more that I would have liked to say and more that I hope to say in the future. For now I'm focusing on finishing my thesis.

•An important point that did not come across in my postings is that I'm skeptical of philanthropic projects having a positive impact on what they're trying to do in general (independently of relation to existential risk). One major influence here has been my personal experience with public institutions. Another major influence has been reading the GiveWell blog. See for example GiveWell's page on Social Programs That Just Don't Work. At present I think that it's a highly nonobvious but important fact that those projects which superficially look to be promising and which are not well-grounded by constant feedback from outsiders almost always fail to have any nontrivial impact on the relevant cause.

See the comment here by prase which I agree with.

•On the subject of a proposed project inadvertently doing more harm than good, see the last few paragraphs of the GiveWell post titled Against Promise Neighborhoods. Consideration of counterfactuals is very tricky and very smart people often get it wrong.

•Quite possibly SIAI is having a positive holistic impact - I don't have confidence that this is so, the situation is just that I don't have enough information to judge from the outside.

•Regarding the time line for AGI and the feasibility of FAI research, see my back and forth with Tim Tyler here.

•My thinking as to what the most important causes to focus at present are is very much in flux. I welcome any information that you or others can point me to.

•My reasons for supporting developing world aid in particular at present are various and nuanced and I haven't yet had the time to write out a detailed explanation that's ready for public consumption. Feel free to PM me with your email address if you'd like to correspond.

Thanks again for your thoughtful response.

Replies from: wedrifid
comment by wedrifid · 2010-09-14T02:22:54.360Z · LW(p) · GW(p)

An important point that did not come across in my postings is that I'm skeptical of philanthropic projects having a positive impact on what they're trying to do in general (independently of relation to existential risk). One major influence here has been my personal experience with public institutions. Another major influence has been reading the GiveWell blog. See for example GiveWell's page on Social Programs That Just Don't Work. At present I think that it's a highly nonobvious but important fact that those projects which superficially look to be promising and which are not well-grounded by constant feedback from outsiders almost always fail to have any nontrivial impact on the relevant cause.

If you had a post on this specifically planned then I would be interested in reading it!

comment by timtyler · 2010-10-01T16:56:07.303Z · LW(p) · GW(p)

I personally think that SIAI is net-beneficial for their cause of promoting clear and accurate thinking about the Singularity [...]

Is that what they are doing?!?

They seem to be funded by promoting the idea that DOOM is SOON - and that to avert it we should all be sending our hard-earned dollars to their intrepid band of Friendly Folk.

One might naively expect such an organisation would typically act so as to exaggerate the risks - so as to increase the flow of donations. That seems pretty consistent with their actions to me.

From that perspective the organisation seems likely to be an unreliable guide to the facts of the matter - since they have glaringly-obvious vested interests.

Replies from: Will_Newsome, None
comment by Will_Newsome · 2010-10-01T17:53:20.102Z · LW(p) · GW(p)

/startrant

They seem to be funded by promoting the idea that DOOM is SOON - and that to avert it we should all be sending our hard-earned dollars to their intrepid band of Friendly Folk.

Or, more realistically, the idea that DOOM has a CHANCE of happening any time between NOW and ONE HUNDRED YEARS FROM NOW, but that small CHANCE has a large enough impact on EXPECTED UTILITY that we should really figure out more about the problem, because someone, not necessarily SIAI, might have to deal with the problem EVENTUALLY.
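(A minimal back-of-the-envelope sketch of that expected-utility multiplication; every number below is made up for illustration, not an SIAI estimate:)

```python
# Hypothetical numbers, purely to illustrate the multiplication:
p_averted = 0.01        # assumed small chance of making the difference
lives_at_stake = 7e9    # roughly everyone alive
expected_lives = p_averted * lives_at_stake
print(expected_lives)   # 70000000.0 -- a small CHANCE times a huge stake
                        # still yields a very large expected impact
```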

One might naively expect such an organization would typically act so as to exaggerate the risks -- but SIAI doesn't seem to be doing that so one's naive expectations would be wrong. It's amazing how people associate an aura of overconfidence coming from the philosophical positions of Eliezer with the actual confidence levels of the thinkers of SIAI. Seriously, where are these crazy claims about DOOM being SOON and that ELIEZER YUDKOWSKY is the MESSIAH? From something Eliezer wrote 10 years ago? The Singularity Institute is pretty damn reasonable. The journal and conference papers they write are pretty well grounded in sound and careful reasoning. But ha, who would read those? It's not like it'd be a good idea to actually read an organization's actual literary output before judging them based primarily on the perceived arrogance of one of their research fellows, that'd be stupid.

From that perspective the organisation seems likely to be an unreliable guide to the facts of the matter - since they have glaringly-obvious vested interests.

What vested interests? Money? Do you honestly think that the people at SIAI couldn't get 5 times as much money by working elsewhere? Status? Do you honestly think that making a seemingly crazy far mode belief that pattern matches to doomsdayism part of your identity for little pay and lots of hard work is a good way of gaining status? Eliezer would take a large status hit if he admitted he was wrong about this whole seed AI thing. Michael Vassar would too. But everyone else? Really good thinkers like Anna Salamon and Carl Shulman and Steve Rayhawk who have proved here on Less Wrong that they have exceptionally strong rationality, and who are consistently more reasonable than they have any right to be? (Seriously, you could give Steve Rayhawk the most retarded argument ever and he'd find a way to turn it into a reasonable argument worth seriously addressing. These people take their epistemology seriously.)

Maybe people at SIAI are, you know, actually worried about the problems because they know how to take ideas seriously instead of using the absurdity heuristic and personal distaste for Eliezer and then rationalizing their easy beliefs with vague outside view reference class tennis games or stupid things like that.

I like reading Multifoliaterose's posts. He raises interesting points, even if I think they're generally unfair. I can tell that he's at least using his brain. When most people criticize SIAI (really Eliezer, but it's easier to say SIAI 'cuz it feels less personal), they don't use any parts of their brain besides the 'rationalize reason for not associating with low status group' cognitive module.

timtyler, this comment isn't really a direct reply to yours so much as a venting of general frustrations. But I get annoyed by the attitude of 'haha let's be cynical and assume the worst of the people that are actually trying their hardest to do the most good they can for the world'. Carl Shulman would never write a reply anything like the one I've written. Carl Shulman is always reasonable and charitable. And I know Carl Shulman works incredibly hard on being reasonable, and taking into account opposing viewpoints, and not letting his affiliation with SIAI cloud his thinking, and still doing lots of good, reasonable, solid work on explaining the problem of Friendliness to the academic sphere in reasonable, solid journal articles and conference papers.

It's really annoying to me to have that go completely ignored just because someone wants to signal their oh-so-metacontrarian beliefs about SIAI. Use epistemic hygiene. Think before you signal. Don't judge an entire organization's merit off of stupid outside view comparisons without actually reading the material. Take the time to really update on the beliefs of longtime x-rationalists that have probably thought about this a lot more than you have. If you really think it through and still disagree, you should have stronger and more elegant counterarguments than things like "they have glaringly-obvious vested interests". Yeah, as if that didn't apply to anyone, especially anyone who thinks that we're in great danger and should do something about it. They have pretty obvious vested interests in telling people about said danger. Great hypothesis there chap. Great way to rationalize your desire to signal and do what is easy and what appeals to your vanity. Care to list your true rejections?

And if you think that I am being uncharitable in my interpretation of your true motivations, then be sure to notice the symmetry.

/endrant

Replies from: timtyler
comment by timtyler · 2010-10-01T19:00:01.155Z · LW(p) · GW(p)

That was quite a rant!

'haha let's be cynical and assume the worst of the people that are actually trying their hardest to do the most good they can for the world'.

I hope I don't come across as thinking "the worst" about those involved. I expect they are all very nice and sincere. By way of comparison, not all cults have deliberately exploitative ringleaders.

One might naively expect such an organization would typically act so as to exaggerate the risks - but SIAI doesn't seem to be doing that so one's naive expectations would be wrong.

Really? Really? You actually think the level of DOOM is cold realism - and not a ploy to attract funding? Why do you think that? De Garis and Warwick were doing much the same kind of attention-seeking before the SIAI came along - DOOM is an old school of marketing in the field.

You encourage me to speculate about the motives of the individuals involved. While that might be fun, it doesn't seem to matter much - the SIAI itself is evidently behaving as though it wants dollars, attention, and manpower - to help it meet its aims.

FWIW, I don't see what I am saying as particularly "contrarian". A lot of people would be pretty sceptical about the end of the world being nigh - or the idea that a bug might take over the world - or the idea that a bunch of saintly programmers will be the ones to save us all. Maybe contrary to the ideas of the true believers - if that is what you mean.

Anyway, the basic point is that if you are interested in DOOM, or p(DOOM), consulting a DOOM-mongering organisation, that wants your dollars to help them SAVE THE WORLD may not be your best move. The "follow the money" principle is simple - and often produces good results.

Replies from: Will_Newsome, orthonormal, khafra
comment by Will_Newsome · 2010-10-01T19:30:58.197Z · LW(p) · GW(p)

FWIW, I don't see what I am saying as particularly "contrarian". A lot of people would be pretty sceptical about the end of the world being nigh - or the idea that a bug might take over the world - or the idea that a bunch of saintly programmers will be the ones to save us all. Maybe contrary to the ideas of the true believers - if that is what you mean.

Right, I said metacontrarian. Although most LW people seem SIAI-agnostic, a lot of the most vocal and most experienced posters are pro-SIAI or SIAI-related, so LW comes across as having a generally pro-SIAI attitude, which is a traditionally contrarian attitude. Thus going against the contrarian status quo is metacontrarian.

You encourage me to speculate about the motives of the individuals involved. While that might be fun, it doesn't seem to matter much - the SIAI itself is evidently behaving as though it wants dollars, attention, and manpower - to help it meet its aims.

I'm confused. Anyone trying to accomplish anything is going to try to get dollars, attention, and manpower. I'm confused as to how this is relevant to the merit of SIAI's purpose. SIAI's never claimed to be fundamentally opposed to having resources. Can you expand on this?

I hope I don't come across as thinking "the worst" about those involved. I expect they are all very nice and sincere. By way of comparison, not all cults have deliberately exploitative ringleaders.

What makes that comparison spring to mind? Everyone is incredibly critical of Eliezer, probably much more so than he deserves, because everyone is racing to be first to establish their non-cult-victim status. Everyone at SIAI has different beliefs about the relative merits of different strategies for successful FAI development. That isn't a good thing -- fractured strategy is never good -- but it is evidence against cultishness. SIAI grounds its predictions in clear and careful epistemology. SIAI publishes in academic journals, attends scientific conferences, and hosts the Singularity Summit, where tons of prominent high status folk show up to speak about Singularity-related issues. Why is cult your choice of reference class? It is no more a cult than a typical global warming awareness organization. It's just that 'science fiction' is a low status literary genre in modern liberal society.

Replies from: ata, wedrifid, timtyler
comment by ata · 2010-10-02T18:16:37.282Z · LW(p) · GW(p)

Everyone is incredibly critical of Eliezer, probably much more so than he deserves, because everyone is racing to be first to establish their non-cult-victim status.

I don't know about anybody else, but I am somewhat disturbed by Eliezer's persistent use of hyphens in place of em dashes, and am very concerned that it could be hurting SIAI's image.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-02T19:00:35.611Z · LW(p) · GW(p)

And I say the same about his use of double spacing. It's an outdated and unprofessional practice. In fact, Anna Salamon and Louie Helm are 2 other SIAI folk that engage in this abysmal writing style, and for that reason I've often been tempted to write them off entirely. They're obviously not cognizant of the writing style of modern academic thinkers. The implications are obvious.

comment by wedrifid · 2010-10-02T04:24:51.810Z · LW(p) · GW(p)

Everyone is incredibly critical of Eliezer, probably much more so than he deserves, because everyone is racing to be first to establish their non-cult-victim status.

Another reason that I suspect is more important than trying to signal non-cult-victim status is that people who do want to be considered part of the cult believe that the cause is important and believe that Eliezer's mistakes could destroy the world (for example).

comment by timtyler · 2010-10-01T20:00:40.901Z · LW(p) · GW(p)

Anyone trying to accomplish anything is going to try to get dollars, attention, and manpower. I'm confused as to how this is relevant to the merit of SIAI's purpose.

To recap, the SIAI is funded by donations from those who think that they will help prevent the end of the world at the hands of intelligent machines. For this pitch to work, the world must be at risk - in order for them to be able to save it. The SIAI face some resistance over this point, and these days, much of their output is oriented towards convincing others that these may be the end days. Also there will be a selection bias, with those most convinced of a high p(DOOM) most likely to be involved. Like I said, not necessarily the type of organisation one would want to approach if seeking the facts of the matter.

You pretend to fail to see connections between the SIAI and an END OF THE WORLD cult - but it isn't a terribly convincing act.

For the connections, see here. For protesting too much, see You're calling who a cult leader?

Replies from: Will_Newsome, wedrifid
comment by Will_Newsome · 2010-10-01T21:17:56.427Z · LW(p) · GW(p)

You pretend to fail to see connections between the SIAI and an END OF THE WORLD cult - but it isn't a terribly convincing act.

No, I see it, look further, and find the model lacking in explanatory power. It selectively leaves out all kinds of useful information that I can use to control my anticipations.

Hmuh, I guess we won't be able to make progress, 'cuz I pretty much wholeheartedly agree with Vladimir when he says:

This whole "outside view" methodology, where you insist on arguing from ignorance even where you have additional knowledge, is insane (outside of avoiding the specific biases such as planning fallacy induced by making additional detail available to your mind, where you indirectly benefit from basing your decision on ignorance).

and Nick Tarleton when he says:

We all already know about this pattern match. Its reiteration is boring and detracts from the conversation.

Replies from: wedrifid, timtyler
comment by wedrifid · 2010-10-02T01:58:23.648Z · LW(p) · GW(p)

No, I see it, look further, and find the model lacking in explanatory power. It selectively leaves out all kinds of useful information that I can use to control my anticipations.

"This one is right" for example. ;)

comment by timtyler · 2010-10-02T02:31:50.446Z · LW(p) · GW(p)

The groupies never seem to like the comparison with THE END OF THE WORLD cults. Maybe it is the "cult" business - or maybe it is because all of their predictions of the end of the world were complete failures.

Replies from: pjeby, Will_Newsome, CarlShulman, Will_Newsome
comment by pjeby · 2010-10-02T03:41:58.405Z · LW(p) · GW(p)

all of their predictions of the end of the world were complete failures.

If they weren't, we wouldn't be here to see the failure.

It therefore seems to me that using this to "disprove" an end-of-the-world claim makes as much sense as someone trying to support a theory by saying, "They laughed at Galileo, too!"

IOW, you are simply placing the prediction in a certain outside-view class, without any particular justification. You could just as easily put SIAI claims in the class of "predictions of disaster that were averted by hard work", and with equal justification. (i.e., none that you've given!)

[Note: this comment is neither pro-SIAI nor anti-SIAI, nor any comment on the probability of their claims being in any particular class. I'm merely anti-arguments-that-are-information-free. ;-) ]

Replies from: wedrifid
comment by wedrifid · 2010-10-02T04:31:39.081Z · LW(p) · GW(p)

I'm merely anti-arguments-that-are-information-free.

The argument is not information free. It is just lower on information than implied. If people had never previously made predictions of disaster and everything else was equal then that tells us a different thing than if humans predicted disaster every day. This is even after considering selection effects. I believe this applies somewhat even considering the possibility of dust.

Replies from: timtyler
comment by timtyler · 2010-10-02T11:38:19.699Z · LW(p) · GW(p)

Uh, it wasn't given as an "argument" in the first place. Evidence which relates more strongly to p(DOOM) includes the extent to which we look back and see the ashes of previous failed technological civilisations, and past major mishaps. I go into all this in my DOOM video.

comment by Will_Newsome · 2010-10-02T03:20:08.081Z · LW(p) · GW(p)

No, wait, there's still something I just don't understand. In a lot of your comments it seems you do a good job of analyzing the responses of 'normal people' to existential risks: they're really more interested in lipstick, food, and sex, et cetera. And I'm with you there, evolution hasn't hardwired us with a 'care about low probabilities of catastrophe' desire; the problem wasn't really relevant in the EEA, relatively speaking.

But then it seems like you turn around and do this weird 'ought-from-is' operation from evolution and 'normal people' to how you should engage in epistemic rationality, and that's where I completely lose you. It's like you're using two separate but to me equally crazy ought-from-is heuristics. The first goes like 'Evolution didn't hard code me with a desire to save the world, I guess I don't actually really want to save the world then.' And the second one is weirder and goes more like 'Oh, well, evolution didn't directly code good epistemology into my brain, it just gave me this comparatively horrible analogical reasoning module; I guess I don't really want good epistemic rationality then'.

It ends up looking like you're using some sort of insane bizarre sister of the outside view that no one can relate with.

It's like you're perfectly describing the errors in most peoples' thinking but then at the end right when you should say "Haha, those fools", you instead completely swerve and endorse the errors, then righteously champion them for (evolutionary psychological?) reasons no one can understand.

Can you help me understand?

Replies from: timtyler
comment by timtyler · 2010-10-02T11:29:00.337Z · LW(p) · GW(p)

"'Oh, well, evolution didn't directly code good epistemology into my brain, it just gave me this comparatively horrible analogical reasoning module; I guess I don't really want good epistemic rationality then'."

...looks like it bears very little resemblance to anything I have ever said. I don't know where you are getting it from.

Perhaps it is to do with the idea that not caring about THE END OF THE WORLD is normally a rational action for a typical gene-propagating agent.

Such agents should normally be concerned with having more babies than their neighbours do - and should not indulge in much paranoia about THE END OF THE WORLD. That is not sticking with poor quality cognition, it is often the correct thing to do for an agent with those aims.

If p(DOOM) gets really large, the correct strategy might change. If it turns into a collective action problem with punishment for free riders, the correct strategy might change. However, often THE END OF THE WORLD can be rationally perceived to be someone else's problem. Expending resources fighting DOOM usually just means you get gradually squeezed out of the gene pool.

The DOOM enthusiasts typically base their arguments on utilitarianism. A biologist's perspective on that is that it is sometimes an attempt to signal unselfishness - albeit usually a rather unbelievable one - and sometimes an attempt to manipulate others into parting with their cash.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-02T18:34:11.361Z · LW(p) · GW(p)

...looks like it bears very little resemblance to anything I have ever said. I don't know where you are getting it from.

Looking back I think I read more into your comments than was really there; I apologize.

Such agents should normally be concerned with having more babies than their neighbours do - and should not indulge in much paranoia about THE END OF THE WORLD. That is not sticking with poor quality cognition, it is often the correct thing to do for an agent with those aims.

I agree here. The debate is over whether or not the current situation is normal.

However, often THE END OF THE WORLD can be rationally perceived to be someone else's problem.

Tentatively agreed. Normally, even if nanotech's gonna kill everyone, you're not able to do much about it anyway. But I'm not sure why you bring up "Expending resources fighting DOOM usually just means you get gradually squeezed out of the gene pool." when most people aren't at all trying to optimize the amount of copies of their genes in the gene pool.

The DOOM enthusiasts typically base their arguments on utilitarianism. A biologist's perspective on that is that it is sometimes an attempt to signal unselfishness - albeit usually a rather unbelievable one - and sometimes an attempt to manipulate others into parting with their cash.

Generally this is true, especially before science was around to make such meme pushing low status. But it's also very true of global warming paranoia, which is high status even among intellectuals for some reason. (I should probably try to figure out why.) I readily admit that certain values of outside view will jump from that to 'and so all possible DOOM-pushing groups are just trying to signal altruism or swindle people' -- but rationality should help you win, and a sufficiently good rationalist should trust themselves to try and beat the outside view here.

So maybe instead of saying 'poor epistemology' I should say 'odd emphasis on outside view when generally people trust their epistemology better than that beyond a certain point of perceived rationality in themselves'.

comment by CarlShulman · 2010-10-02T10:17:19.666Z · LW(p) · GW(p)

The primary thing I find objectionable about your commenting on this subject is the persistent violation of ordinary LW etiquette, e.g. by REPEATEDLY SHOUTING IN ALL CAPS and using ad hominem insults, e.g. "groupies."

Replies from: timtyler
comment by timtyler · 2010-10-02T11:53:52.444Z · LW(p) · GW(p)

I'm sorry to hear about your issues with my writing style :-(

I have been consistently capitalising DOOM - and a few related terms - for quite a while. I believe these terms deserve special treatment - in accordance with how important everybody says they are - and ALL-CAPS is the most portable form of emphasis across multiple sites and environments. For the intended pronunciation of phrases like DOOM, SOON, see my DOOM video. It is not shouting. I rate the effect as having net positive value in the context of the intended message - and will put up with your gripes about it.

As for "groupies" - that does seem like an apt term to me. There is the charismatic leader - and then there is his fan base - which seems to have a substantial element of young lads. Few other terms pin down the intended meaning as neatly. I suppose I could have said "young fan base" - if I was trying harder to avoid the possibility of causing offense. Alas, I am poorly motivated to bother with such things. Most of the "insiders" are probably going to hate me anyway - because of my message - and the "us" and "them" tribal mentality.

Did you similarly give Yudkowsky a public ticking-off when he recently delved into the realm of BOLD ALL CAPS combined with ad-hominem insults? His emphasis extended to whole paragraphs - and his insults were considerably more personal - as I recall. Or am I getting special treatment?

Replies from: wedrifid, CarlShulman
comment by wedrifid · 2010-10-02T13:48:16.662Z · LW(p) · GW(p)

I have been consistently capitalising DOOM - and a few related terms - for quite a while. I believe these terms deserve special treatment - in accordance with how important everybody says they are - and all-caps is the most portable form of emphasis across multiple sites and environments.

May I suggest as a matter of style that "Doom" more accurately represents your intended meaning of specific treatment and usage as a noun that isn't just a description? Since ALL CAPS has the interpretation of mere shouting, you fail to communicate your meaning effectively if you use all caps instead of Title Case in this instance. Consider 'End Of The World' as a superior option.

Did you similarly give Yudkowsky a public ticking-off when he recently delved into the realm of BOLD ALL CAPS combined with ad-hominem insults? His emphasis extended to whole paragraphs - and his insults were considerably more personal - as I recall. Or am I getting special treatment?

Let's be honest. If we're going to consider that incident as an admissible tu quoque to any Yudkowskian then we could justify just about any instance of obnoxious social behaviour thereby. I didn't object to your comments here simply because I didn't consider them out of line on their own merits. I would have no qualms about criticising actual bad behaviour just because Eliezer acted like a douche.

Mind you, I am not CarlShulman, and the relevance of hypocrisy to Carl's attempted status slap is far greater than if it were an attempt by me. Even so you could replace "Or am I getting special treatment?" with "Or are you giving me special treatment?" and so reduce the extent to which you signal that it is ok to alienate or marginalise you.

Replies from: timtyler
comment by timtyler · 2010-10-02T14:00:09.183Z · LW(p) · GW(p)

Title Caps would be good too - though "DOOM" fairly often appears at the start of a sentence - and there it would be completely invisible. "Doom" is milder. Maybe "DOOM" is too much - but I can live with it. After all, this is THE END OF THE WORLD we are talking about!!! That is pretty ###### important!!!

If you check with the THE END IS NIGH placards, they are practically all in ALL CAPS. I figure those folk are the experts in this area - and that by following their traditions, I am utilizing their ancient knowledge and wisdom on the topic of how best to get this critical message out.

A little shouting may help ensure that the DOOM message reaches distant friends and loved ones...

Replies from: wedrifid
comment by wedrifid · 2010-10-02T16:34:22.211Z · LW(p) · GW(p)

A little shouting may help ensure that the DOOM message reaches distant friends and loved ones...

Or utterly ignored because people think you're being a tool. One or the other. (I note that this is an unfortunate outcome because apart from this kind of pointless contrariness people are more likely to acknowledge what seem to be valid points in your response to Carl. I don't like seeing the conversational high ground going to those who haven't particularly earned it in the context.)

Replies from: timtyler
comment by timtyler · 2010-10-02T17:11:57.818Z · LW(p) · GW(p)

Well, my CAPS are essentially a parody. If the jester capers in the same manner as the noble, there will often be some people who will think that he is dancing badly - and not understand what is going on.

Replies from: wedrifid
comment by wedrifid · 2010-10-02T19:07:36.592Z · LW(p) · GW(p)

There will be others who understand perfectly and think he's doing a mediocre job of it.

comment by CarlShulman · 2010-10-02T14:26:21.423Z · LW(p) · GW(p)

You ignored the word 'repetitive.' As you say, you have a continuing policy of carelessness towards causing offense, i.e. rudeness. And no, I don't think that the comment you mention was appropriate either (versus off-LW communication), but given that it was deleted I didn't see reason to make a further post about it elsewhere. Here are some recent comment threads in which I called out Eliezer and others for ad hominem attacks.

Replies from: timtyler
comment by timtyler · 2010-10-02T14:37:46.964Z · LW(p) · GW(p)

...not as much as you ignored the words "consistently" and "for quite a while".

I do say what I mean. For instance, right now you are causing me irritation - by apparently pointlessly wasting my time and trying to drag me into the gutter. On the one hand, thanks for bothering with feedback... ...but on the other, please go away now, Carl - and try to find something more useful to do than bickering here with me.

comment by Will_Newsome · 2010-10-02T02:40:58.557Z · LW(p) · GW(p)

I don't think it's that. I think it's just annoyance at perceived persistently bad epistemology in people making the comparison over and over again as if each iteration presented novel predictions with which to constrain anticipation.

Replies from: timtyler
comment by timtyler · 2010-10-02T13:03:42.742Z · LW(p) · GW(p)

If there really is "bad epistemology", feel free to show where.

There really is an analogy between the SIAI and various THE END OF THE WORLD cults - as I previously spelled out here.

You might like to insinuate that I am reading more into the analogy than it deserves - but basically, you don't have any case there that I can detect.

Replies from: Will_Newsome, Vladimir_Nesov
comment by Will_Newsome · 2010-10-02T18:23:38.521Z · LW(p) · GW(p)

Everyone knows the analogy exists. It's just a matter of looking at the details to see whether that has any bearing on whether or not SIAI is a useful organization.

Replies from: timtyler
comment by timtyler · 2010-10-07T13:45:28.708Z · LW(p) · GW(p)

You asked: "What makes that comparison spring to mind?" when I mentioned cults.

Hopefully, you now have your answer - for one thing, they are like an END OF THE WORLD cult - in that they use fear of THE END OF THE WORLD as a publicity and marketing tool.

Such marketing has a long tradition behind it - e.g. see the Daisy Ad.

comment by Vladimir_Nesov · 2010-10-02T17:47:17.731Z · LW(p) · GW(p)

There really is an analogy between the SIAI and various THE END OF THE WORLD cults - as I previously spelled out here.

Also, FOOM rhymes with DOOM. There!

Replies from: Perplexed
comment by Perplexed · 2010-10-02T20:09:55.079Z · LW(p) · GW(p)

Tyler: If there really is "bad epistemology", feel free to show where.

Nesov: Also, FOOM rhymes with DOOM. There!

And this response was upvoted ... why? This is supposed to be a site where rational discourse is promoted, not a place like Pharyngula or talk.origins where folks who disagree with the local collective worldview get mocked by insiders who then congratulate each other on their cleverness.

Replies from: timtyler, Will_Newsome, Vladimir_Nesov
comment by timtyler · 2010-10-03T12:59:16.198Z · LW(p) · GW(p)

I voted it up. It was short, neat, and made several points.

Probably the main claim is that the relationship between the SIAI and previous END OF THE WORLD outfits is a meaningless surface resemblance.

My take on the issue is that DOOM is - in part - a contagious mind-virus, with ancient roots - which certain "vulnerable" people are inclined to spread around - regardless of whether it makes much sense or not.

With the rise of modern DOOM "outfits", we need to understand the sociological and memetic aspects of these things all the more:

  • Will we see more cases of "DOOM exploitation" - from those out to convert fear of the imminent end into power, wealth, fame or sex?

  • Will a paranoid society take steps to avoid the risks? Will it freeze like a rabbit in the headlights? Or will it result in more looting and rape cases?

  • What is the typical life trajectory of those who get involved with these outfits? Do they go on to become productive members of society? Or do they wind up having nightmares about THE END OF THE WORLD - while neglecting their interpersonal relationships and personal hygiene - unless their friends and family stage an "intervention"?

...and so on.

Rational agents should understand the extent to which they are infected by contagious mind viruses - that spread for their own benefit and without concern for the welfare of their hosts. DOOM definitely has the form of such a virus. The issue as I see it is: how much of the observed phenomenon of modern-day DOOM "outfits" does it explain?

To study this whole issue, previous doomsday cults seem like obvious and highly-relevant data points to me. In some cases their DOOM was evidently a complete fabrication. They provide pure examples of fake DOOM - exactly the type of material a sociologist would need to understand that aspect of the DOOM-mongering phenomenon.

comment by Will_Newsome · 2010-10-03T04:53:13.689Z · LW(p) · GW(p)

I agree that it's annoying when people are mocked for saying something they didn't say. But Nesov was actually making an implicit argument here, not just having fun: he was pointing out that timtyler's analogies tend to be surface-level and insubstantial. The kind of thing that I've seen on Pharyngula is instead unjustified ad hominem attacks that don't shed any light on possible flaws in the poster's arguments. That said, I think Nesov's comment was flirting with the line.

comment by Vladimir_Nesov · 2010-10-02T20:16:57.334Z · LW(p) · GW(p)

In the case of Tim in particular, I'm way past that.

Replies from: Perplexed
comment by Perplexed · 2010-10-02T20:36:55.144Z · LW(p) · GW(p)

"Way past that" meaning "so exasperated with Tim that rational discourse seems just not worth it"? Hey, I can sympathize. Been there, done that.

But still, it annoys me when people are attacked by mocking something that they didn't say, but that their caricature should have said (in a more amusing branch of reality).

It annoys me more when that behavior is applauded.

And it strikes me as deeply ironic when it happens here.

Replies from: NancyLebovitz, Vladimir_Nesov
comment by NancyLebovitz · 2010-10-02T22:05:08.855Z · LW(p) · GW(p)

But still, it annoys me when people are attacked by mocking something that they didn't say, but that their caricature should have said (in a more amusing branch of reality)

That's very neatly put.

I'm not dead certain it's a fair description of what Vladimir Nesov said, but it describes a lot of behavior I've seen. And there's a parallel version about the branches of reality which allow for easier superiority and/or more outrage.

comment by Vladimir_Nesov · 2010-10-02T20:45:26.127Z · LW(p) · GW(p)

The error Tim makes time and again is finding shallow analogies between activity of people concerned with existential risk and doomsday cults, and loudly announcing them, lamenting that it's not proper that this important information is so rarely considered. Yet the analogies are obvious and obviously irrelevant. My caricature simply followed the pattern.

Replies from: wnoise, timtyler
comment by wnoise · 2010-10-02T20:52:10.733Z · LW(p) · GW(p)

The analogies are obvious. They may be irrelevant. They are not obviously irrelevant.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-10-02T20:54:57.813Z · LW(p) · GW(p)

The analogies are obvious. They may be irrelevant. They are not obviously irrelevant.

Too fine a distinction to argue, wouldn't you agree?

Replies from: Will_Newsome, Kingreaper
comment by Will_Newsome · 2010-10-03T04:44:34.233Z · LW(p) · GW(p)

Talking about obviousness as if it was inherent in a conclusion is typical mind projection fallacy. What it generally implies (and what I think you mean) is that any sufficiently rational person would see it; but when lots of people don't see it, calling it obvious is against social convention (it's claiming higher rationality and thus social status than your audience). In this case I think that to your average reader the analogies aren't obviously irrelevant, even though I personally do find them obviously irrelevant.

comment by Kingreaper · 2010-10-03T00:03:30.752Z · LW(p) · GW(p)

When you're trying to argue that something is the case (i.e. that the analogies are irrelevant), the difference between what you are arguing being OBVIOUS and it merely being POSSIBLE is extremely vast.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-10-03T00:08:21.967Z · LW(p) · GW(p)

You seem to confuse the level of certainty with difficulty of discerning it.

Replies from: Kingreaper
comment by Kingreaper · 2010-10-03T00:21:42.394Z · LW(p) · GW(p)

You made a claim that they were obviously irrelevant.

The respondent expressed uncertainty as to their irrelevance ("They may be irrelevant.") as opposed to the certainty in "The analogies are obvious." and "They are not obviously irrelevant."

That is a distinction between something being claimed as obvious and the same thing being seen as doubtful.

If you do not wish to explain a point, there are many better options* than inaccurately calling it obvious. For example, linking to a previous explanation.

*in rationality terms. In argumentation terms, these techniques are often inferior to the technique of the emperor's tailors.

comment by timtyler · 2010-10-03T12:28:07.258Z · LW(p) · GW(p)

The error Tim makes time and again is finding shallow analogies between activity of people concerned with existential risk and doomsday cults, and loudly announcing them, lamenting that it's not proper that this important information is so rarely considered. Yet the analogies are obvious and obviously irrelevant.

Uh, they are not "obviously irrelevant". The SIAI behaves a bit like other DOOM-mongering organisations have done - and a bit like other FUD marketing organisations have done.

Understanding the level of vulnerability of the human psyche to the DOOM virus is a pretty critical part of assessing what level of paranoia about the topic is reasonable.

It is, in fact, very easy to imagine how a bunch of intrepid "friendly folk" who think they are out to save the world - might - in the service of their cause - exaggerate the risks, in the hope of getting attention, help and funds.

Indeed, such an organisation is most likely to be founded by those who have extreme views about the risks, attract others who share similar extreme views, and then have a hard time convincing the rest of the world that they are, in fact, correct.

There are sociological and memetic explanations for the "THE END IS NIGH" phenomenon that are more-or-less independent of the actual value of p(DOOM). I think these should be studied more, and applied to this case - so that we can better see what is left over.

There has been some existing study of DOOM-mongering. There is also the associated Messiah complex - an intense desire to save others. With the rise of the modern doomsday "outfits", I think more study of these phenomena is warranted.

Sometimes it is fear that is the mind-killer. FUD marketing exploits this to help part marks from their money. THE END OF THE WORLD is big and scary - a fear superstimulus - and there is a long tradition of using it to move power around and achieve personal ends - and the phenomenon spreads around virally.

I appreciate that this will probably turn the stomachs of the faithful - but without even exploring the issue, you can't competently defend the community against such an analysis - because you don't know to what extent it is true - because you haven't even looked into it.

comment by wedrifid · 2010-10-02T04:25:03.070Z · LW(p) · GW(p)

Everyone is incredibly critical of Eliezer, probably much more so than he deserves, because everyone is racing to be first to establish their non-cult-victim status.

Another reason that I suspect is more important than trying to signal non-cult-victim status is that people who do want to be considered part of the cult believe that the cause is important and believe that Eliezer's mistakes could destroy the world (for example).

Replies from: timtyler
comment by timtyler · 2010-10-02T12:17:54.774Z · LW(p) · GW(p)

I didn't say anyone was "racing to be first to establish their non-cult-victim status" - but it is certainly a curious image! [deleted parent comment was a dupe].

Replies from: wedrifid
comment by wedrifid · 2010-10-02T13:25:43.114Z · LW(p) · GW(p)

Oops, connection troubles then missed.

comment by orthonormal · 2010-10-02T20:30:48.562Z · LW(p) · GW(p)

Tim, do you think that nuclear-disarmament organizations were inherently flawed from the start because their aim was to prevent a catastrophic global nuclear war? Would you hold their claims to a much higher standard than the claims of organizations that looked to help smaller numbers of people here and now?

I recognize that there are relevant differences, but merely pattern-matching an organization's conclusion about the scope of their problem, without addressing the quality of their intermediate reasoning, isn't sufficient reason to discount their rationality.

comment by khafra · 2010-10-01T19:24:40.371Z · LW(p) · GW(p)

I don't see what I am saying as particularly "contrarian".

Will said "meta-contrarian," which refers to the recent "meta-contrarians are intellectual hipsters" post.

I also think you see yourself as trying to help SIAI see how they look to "average joe" potential collaborators or contributors, while Will sees your criticisms as actually calling into question the motives, competence, and ingenuity of SIAI's staff. If I'm right, you're talking at cross-purposes.

Replies from: timtyler, Will_Newsome
comment by timtyler · 2010-10-01T19:45:35.916Z · LW(p) · GW(p)

I also think you see yourself as trying to help SIAI see how they look to "average joe" potential collaborators or contributors

Reforming the SIAI is a possibility - but not a terribly realistic one, IMO. So, my intended audience here is less that organisation, and more some of the individuals here who I share interests with.

comment by Will_Newsome · 2010-10-01T19:34:07.837Z · LW(p) · GW(p)

Oh, that might be. Other comments by timtyler seemed really vague but generally anti-SIAI (I hate to set it up as if you could be for or against a set of related propositions in memespace, but it's natural to do here, meh), so I assumed he was expressing his own beliefs, and not a hypothetical average joe's.

comment by [deleted] · 2010-10-02T21:25:36.834Z · LW(p) · GW(p)

This is an incredibly anti-name-calling community. People ascribe a lot of value to having "good" discussions (disagreement is common, but not adversarialism or ad hominems.) LW folks really don't like being called a cult.

SIAI isn't a cult, and Eliezer isn't a cult leader, and I'm sure you know that your insinuations don't correspond to literal fact, and that this organization is no more a scam than a variety of other charitable and advocacy organizations.

I do think that folks around here are over-sensitive to normal levels of name-calling and ad hominems. It's odd. Holding yourself above the fray comes across as a little snobbish. There's a whole world of discourse out there, people gathering evidence and exchanging opinions, and the vast majority of them are doing it like this: UR A FASCIST. But do you think there's therefore nothing to learn from them?

comment by wedrifid · 2010-09-14T02:18:30.246Z · LW(p) · GW(p)

Why is giving money to x-risk charities conducive to saving the world?

I think the reasoning goes something like:

  • Existential risks are things that could destroy the world as we know it.
  • Existential risk charities work to reduce such risks.
  • Existential risk charities use donations to perform said task.
  • Giving to x-risk charities is conducive to saving the world.

Before looking at evidence for or against the effectiveness of particular x-risk charities our prior expectation should be that people who dedicate themselves to doing something are more likely to contribute progress towards that goal than to sabotage it.

comment by waveman · 2014-03-01T12:32:24.793Z · LW(p) · GW(p)

Libertarians in charge would mean more drugs

This is only true if the first-order effect of legalizing drugs (legality would encourage more people to take them) outweighs the second-order effects. One example of a second-order effect is that illegality keeps prices high, which encourages production and distribution. Another is that illegality allows drugs to be used as signals of rebellion. Legalizing drugs would potentially put distribution in the hands of more responsible people. And so forth.

As the evidence-based altruism people have found, improving the world is a lot harder than it looks.

comment by Relsqui · 2010-09-14T09:49:21.247Z · LW(p) · GW(p)

If I failed to notice that there are scientifically proven genetic differences I would be missing a far more important part of reality (evolutionary psychology and the huge effects of evolution in the last 20,000 years) than if I failed to notice that being a bigot was bad and impeded moral progress.

I actually disagree with this statement outright. First of all, ignoring the existence of a specific piece of evidence is not the same as being wholly ignorant of the workings of evolution. Second, I think that the use or abuse of data (false or true) leading to the mistreatment of humans is a worse outcome than the ignorance of said data. Science isn't a goal in and of itself--it's a tool, a process invented for the betterment of humanity. It accomplishes that admirably, better than any other tool we've applied to the same problems. If the use of the tool, or in this case one particular end of the tool, causes harm, perhaps it's better to use another end (a different area of science than genetics), or the same one in a different environment (in a time and place where racial inequality and bias are not so heated and widespread--our future, if we're lucky). Otherwise, we're making the purpose of the tool subservient to the use of the tool for its own sake--pounding nails into the coffee table.

Besides--anecdotally, people who think that the genetic differences between races are important incite less violence than people who think that not being a bigot is important. If, as you posited, one had to choose. ;)

I have a couple other objections (really? sex discrimination is over? where was I?) but other people have covered them satisfactorily.

x-risk charities

New here; can I get a brief definition of this term? I've gotten the gist of what it means by following a couple of links, I just want to know where the x bit comes from. Didn't find it on the site's wiki or the internet at large.

Replies from: ChristianKl, NancyLebovitz
comment by ChristianKl · 2010-09-14T11:47:47.719Z · LW(p) · GW(p)

X-risk stands for existential risk.

It's about possible events that risk ending the existence of the human race.

Replies from: Relsqui
comment by Relsqui · 2010-09-14T19:51:14.961Z · LW(p) · GW(p)

Got it; thank you.

comment by NancyLebovitz · 2010-09-14T12:36:20.266Z · LW(p) · GW(p)

Besides--anecdotally, people who think that the genetic differences between races are important incite less violence than people who think that not being a bigot is important.

What do you have in mind?

Replies from: Relsqui
comment by Relsqui · 2010-09-14T19:53:47.022Z · LW(p) · GW(p)

I'm not sure what "what" would refer to here. I didn't have an incident in mind, I'm just giving my impression of public perception (the first person gets called racist, and the second one gets called, well, normal, one hopes). It wasn't meant to be taken very seriously.

comment by Bongo · 2010-09-13T23:21:39.995Z · LW(p) · GW(p)

Noticing a social cluster takes social savvy and intelligence.

Therefore, showing that you can see a social cluster makes you look good.

Maybe going up a level in one of Yvain's hierarchies is showing off that you've discovered a social cluster? It goes together with distancing yourself from that cluster, but I don't know why.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-09-14T00:10:05.095Z · LW(p) · GW(p)

I would like to announce that I have discovered the social cluster that has discovered the method of discovering all social clusters, and am now a postmodernist. Seriously guys, postmodernism is pretty meta. Update on expected metaness.

Replies from: Spurlock
comment by Spurlock · 2010-09-14T02:52:21.635Z · LW(p) · GW(p)

I'm confused. What point are you trying to make about postmodernism?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-09-14T04:05:28.333Z · LW(p) · GW(p)

None, really. I just like how its proponents can always win arguments by claiming to be more meta than their opponents. ("Sure, everything you made sense within your frame of reference, but there are no privileged frames of reference. Indeed, proving that there are privileged frames of reference requires a privileged frame of reference and is thus an impossible philosophical act. I can't prove anything I just said, which proves my point, depending on whether you think it did or not.")

(I don't take postmodernism seriously, but some of the ideas are philosophically elegant.)

Replies from: DSimon, None, None
comment by DSimon · 2010-09-14T14:39:27.834Z · LW(p) · GW(p)

I can't prove anything I just said, which proves my point, depending on whether you think it did or not.

I would like this on a t-shirt.

comment by [deleted] · 2014-06-06T20:05:58.827Z · LW(p) · GW(p)

Mmm, but isn't it true that "proving that there are privileged frames of reference requires a privileged frame of reference and is thus an impossible philosophical act."

comment by [deleted] · 2014-04-01T20:09:25.337Z · LW(p) · GW(p)

I think there's a missing "you said" here: "everything you made sense".

comment by SilasBarta · 2010-09-13T22:39:05.585Z · LW(p) · GW(p)

I have to admit, this has definitely been a hazard for me. As I said to simplicio a few months ago, I've had a sort of tendency to be "too clever" by taking the "clever contrarian" position. This gets to the point where I'm fascinated by those who can write up defenses of ridiculous positions and significantly increase my exposure to them.

I think part of what made me stray from "the path" was a tendency to root for the rhetorical "underdog" and be intrigued -- excessively -- by brilliant arguments that could defend ridiculous positions.

I have to wonder if I'm falling into the same trap with my "Most scientists only complain about how hard it is to explain their field because their understanding is so poor to begin with." (i.e., below Level 2, the level at which you can trace out the implications between your field and numerous others in both directions, possibly knowing how to trace back the basis of all specialized knowledge to arbitrary levels).

comment by knb · 2010-09-14T06:23:19.401Z · LW(p) · GW(p)

Does making fun of hipsters to seem cool make you a meta-hipster?

comment by Vladimir_Nesov · 2010-09-14T05:59:28.279Z · LW(p) · GW(p)

Very much related to The Correct Contrarian Cluster.

Also, we had a post specifically on countersignaling: Things You Can't Countersignal.

comment by Apprentice · 2010-09-13T23:57:46.065Z · LW(p) · GW(p)

One more cluster I can think of is attitude to copyright law. Something like:

  1. Huh? It's illegal for me to copy that song? How totally stupid, I'm not harming anyone.
  2. Strong intellectual property law is necessary to encourage innovation and protect artists.
  3. Copyright law does more harm than good and needs to be reformed or abolished.

Replies from: Relsqui, loqi
comment by Relsqui · 2010-09-14T04:54:13.279Z · LW(p) · GW(p)

This is actually an interesting example, because I think if you look at the patterns of contrarian and meta-contrarian groups--that is, the people who tend to prefer those attitudes--you actually flip the second two, which breaks the pattern of contradiction and counter-contradiction. That is to say,

  1. (ordinary people who don't worry too much about this) Huh? It's illegal for me to copy that song? How totally stupid, I'm not harming anyone.
  2. (people who are really into the torrent community) It's not/shouldn't be illegal! Information wants to be free!
  3. (meta) If nobody paid for music, no one could live off being a musician. Torrenters are just making excuses for the convenience of breaking the law.
  4. (approaching sense) We need to reform or abolish copyright law and replace it with a system that pays artists fairly while working with, not against, new technology.

At least, that's my experience; take it with a grain of bias in favor of position four.

Replies from: DSimon
comment by DSimon · 2010-09-14T14:38:26.504Z · LW(p) · GW(p)

That's an interesting example because 1 and 2 arrive at the same conclusion, but 2 might still want to signal themselves as being contrary to 1 (i.e. "It's not just that it's not harming anybody, but sharing information around freely is actually helping everybody!")

Replies from: Relsqui
comment by Relsqui · 2010-09-14T19:26:41.918Z · LW(p) · GW(p)

I agree with that, and you make a good point--it suggests that being contrarian doesn't require disagreeing with the position so much as disagreeing with the reasoning. In a lot of cases it'll amount to the same thing, or at least come off as the same thing, but the above is one where it doesn't.

comment by loqi · 2010-09-14T17:13:55.625Z · LW(p) · GW(p)

I basically object to copyright law because of 1. Clearly my opinion is transcendent, a least fixed point of meta-contrarianism.

comment by buybuydandavis · 2014-03-04T03:09:05.283Z · LW(p) · GW(p)

Not everything is signaling.

The intellectually compulsive are natural critics. You see something wrong in an argument, and you argue against it. The natural stopping point in this process is when you don't find significant problems with the theory, and that is more likely for a fringe theory that others don't bother to critique. When no one is helping you find the flaws, it's less likely you'll find them. You'll win arguments, at least by your own evaluation, because you are familiar with their arguments and can show flaws, but your argument is unfamiliar to them, so they can't show you flaws in your thinking. No one knows the counter; no one has spent the time analyzing your fringe idea. They will likely still dismiss your "wacky" idea out of hand, and conclude that they won the argument by that dismissal, but that only makes you feel more confident in your conclusion and in yourself, as it's apparent to you that they have no counter to your argument.

The problem is, you're not right just because people can't prove you wrong. If you have a fringe theory, you need to be aware that it has not been properly vetted by others around you.

comment by fburnaby · 2013-03-06T02:18:53.974Z · LW(p) · GW(p)

doesn't follow politics / political junkie / avoids talking about politics due to mind-killing

comment by Larks · 2010-09-16T18:02:08.714Z · LW(p) · GW(p)

This suggests that a common tactic (deliberate or otherwise) would be to represent your opponents as being the level below you, rather than the level above. For example, this article, which treats Singularitarians as at level 1, rather than level 3, on

technology is great! -> but it has costs, like to the environment, and making social control easier -> Actually, the benefits vastly outweigh those.

Ironically, it's not that far off for SIAI, which is at level 4, 'certain technologies are existentially dangerous'.

This seems to hold true for all the triads you mention, except possibly the medicine one: level 2 people falsely represent level 3 people as level 1.

Replies from: Will_Newsome, AlexanderRM
comment by Will_Newsome · 2010-09-18T08:08:25.758Z · LW(p) · GW(p)

Existentially dangerous doesn't mean the benefits still don't outweigh the costs. If there's a 95% chance that uFAI kills us all, that's still a whopping 5% chance at unfathomably large amounts of utility. Technology still ends up having been a good idea after all.

Each level adds necessary nuance. Unfortunately, at each level is a new chance for unnecessary nuance. Strong epistemic rationality is the only thing that can shoulder the weight of the burdensome details.

Added: Your epistemic rationality is limited by your epistemology. There's a whole bunch of pretty and convincing mathematics that says Bayesian epistemology is the Way. We trust in Bayes because we trust in that math: the math shoulders the weight. A question, then. When is Bayesianism not the ideal epistemology? As humans the answer is 'limited resources'. But what if you had unlimited resources? At the limit, where doesn't Bayes hold?

comment by AlexanderRM · 2015-03-25T05:33:21.553Z · LW(p) · GW(p)

I've noticed that quite often long before seeing this article. There seems to be a strong tendency for people to try to present themselves as breaking old, established stereotypes even when the person they're arguing against says exactly the same thing, and in some cases where the stereotype has only been around for a very short time (I recall one article arguing against the idea of Afghanistan being "the graveyard of empires", which in my understanding was an idea that had surfaced around 6 months prior to that article with the publication of a specific book).

However, this does add an interesting dimension to it, with the fact that Type 2 positions actually were founded on a rejection of old, untrue beliefs of Type 1s, and Type 3s often resemble Type 1s. In fact I'd say that in every listed political example, the Type 2s who know about Type 3s will usually lump them in with Type 1s. This is, IMO, good in a way because it limits us from massive proliferation of levels over and over again and the resulting complications; instead we just get added nuance into the Type 2 and 3 positions.

comment by Apprentice · 2010-09-14T15:36:29.045Z · LW(p) · GW(p)

The pleasure I get out of trolling atheists definitely has a meta-contrarian component to it. When I was a teenager I would troll Christians but I've long since stopped finding that even slightly challenging or fun.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2010-09-14T19:43:12.407Z · LW(p) · GW(p)

Yes, I often find myself tempted to do that too. Although I understand on an intellectual level that creationism is stupid, it is hard for me to get worked up about it and I certainly don't have the energy to argue with creationists ad nauseam. I do find myself angry whenever an atheist makes a superficial or stupid point in defense of atheism, or when they get too smug about how much smarter they are than creationists.

My guess is that I have a sufficiently inflated view of my intelligence to be high enough that I have no need to differentiate myself intellectually from creationists, but I do feel a need to differentiate myself intellectually from the less intelligent sort of atheist.

comment by TobyBartels · 2010-09-17T23:02:26.969Z · LW(p) · GW(p)

As a mathematician, I offer my services for anybody who wants arguments (mathematical arguments, not philosophical ones) that 1+1 = 3. But beware: as a meta-contrarian mathematician, I will also explain why these arguments, though valid in their own way, are silly.

Replies from: army1987, RobinZ
comment by A1987dM (army1987) · 2012-02-28T23:50:05.255Z · LW(p) · GW(p)

1.3 + 1.4 = 2.7, which when reported to one significant figure...

Replies from: CronoDAS
comment by CronoDAS · 2012-02-29T00:30:09.668Z · LW(p) · GW(p)

As the "old" computer science joke goes, 2 + 2 = 5 (for extremely large values of 2).

Replies from: Manfred
comment by Manfred · 2012-02-29T01:08:21.214Z · LW(p) · GW(p)

The physicist-typical version is that 3=4, if you take lim(3->4).

Replies from: TobyBartels
comment by TobyBartels · 2012-03-01T05:15:34.111Z · LW(p) · GW(p)

This reminds me that the difference between a physicist and an astronomer is that a physicist uses π ≈ 3 while an astronomer uses π ≈ 1.

Replies from: army1987
comment by A1987dM (army1987) · 2012-03-01T10:52:55.753Z · LW(p) · GW(p)

I remember someone in a newsgroup saying the average person is about one metre tall and weighs about 100 kilos, and when asked whether maybe they were approximating a bit too roughly, they answered "I'm an astronomer, not a jeweller."

(And physicists sometimes use π ≈ 1 too -- that's called dimensional analysis. :-) The problem is when the constant factor dimensional analysis can't tell you turns out to be 1/(2π)^4 ≈ 6.4e-4 or stuff like that.)

comment by RobinZ · 2011-01-20T19:37:59.912Z · LW(p) · GW(p)

You know, I am seized with a sudden curiosity. You have arguments such that 1 is still the successor of 0 and 3 is still the successor of the successor of 1, where 0 is the additive identity?

Replies from: TobyBartels
comment by TobyBartels · 2011-01-25T18:03:51.965Z · LW(p) · GW(p)

Ah, now I have to remember what I was thinking of back in September! Well, let's see what I can come up with now.

One thing that I could do is to redefine every term in the expression. You tried to forestall this by insisting

1 is still the successor of 0 and 3 is still the successor of the successor of 1, where 0 is the additive identity

[Note: I originally interpreted this as "3 is still the successor of 2" for some dumb reason.] But you never insisted that 2 is the successor of 1, so I'll redefine 2 to be 1 and redefine 3 to be 2, and your conditions are met, while my theorem holds. (I could also, or instead, redefine equality.)

But this is silly; nobody uses the terms in this way.

For another method, I'll be a little more precise. Since you mentioned the successor of 0, let's work in Peano Arithmetic (first-order, classical logic, starting at zero), supplemented with the axiom that 0 = 1. Then 1 + 1 = 3 can be proved as follows:

  • 1 + 1 = 1 + S(0) by definition of 1;
  • 1 + S(0) = 1 + S(1) by substitution of equality;
  • 1 + S(1) = 3 by any ordinary proof in PA;
  • 1 + 1 = 3 by transitivity of equality (twice).

Of course, this is also silly, since PA with my new axiom is inconsistent. Anything in the language can be proved (by going through the axiom that 0 = S(n) is always false, combining this with my new axiom, and using ex contradictione quodlibet).
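
(If you want to see the trick machine-checked, here is a minimal sketch in Lean 4 -- using Lean's built-in naturals rather than first-order PA proper, with axiom and theorem names of my own invention -- mirroring the proof above.)

```lean
-- Adjoin the (inconsistent) axiom 0 = 1 to ordinary arithmetic.
axiom zero_eq_one : (0 : Nat) = 1

theorem one_add_one_eq_three : (1 : Nat) + 1 = 3 :=
  calc (1 : Nat) + 1 = 1 + Nat.succ 0 := rfl   -- definition of 1
    _ = 1 + Nat.succ 1 := by rw [zero_eq_one]  -- substitution of equality
    _ = 3 := rfl                               -- 1 + S(1) = 3, by ordinary arithmetic
```

(Lean accepts the axiom without complaint; of course, the resulting system proves anything at all.)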

Here is a slightly less silly way: Modular arithmetic is very useful, not silly at all, and in arithmetic modulo 1, 1 + 1 = 3 is true.

But however useful modular arithmetic in general may be, arithmetic modulo 1 is silly (for roughly the same reasons that an inconsistent set of axioms is silly); everything is equal to everything else, so any equation at all is true. In other words, arithmetic modulo 1 is trivial.

You can get arithmetic modulo b by replacing the Peano axiom that 0 = S(n) is always false with the axiom that 0 = b and b − 1 additional axioms stating (altogether) that a = b is false whenever (in ordinary arithmetic) 0 < a < b. But you could instead add an arbitrary axiom of the form b = c (and another finite set of inequalities between smaller numbers). So let us use the arithmetic given by the axiom that 2 = 3. Then 1 + 1 = 3 is easy to prove (since the proof that 1 + 1 = 2 doesn't rely on the axiom that we've removed, and we still have transitivity of equality). And yet this system is not trivial; it is basically (0, 1, 2, 2, 2, …).

Actually, this example is minimal; let's go for a little overkill and instead use the axiom that 1 = 2. Of course, we can still prove that 1 + 1 = 3 (this time leaving the formal proof entirely to the reader). This system is a bit more trivial than the last one, but not quite trivial; it is basically (0, 1, 1, 1, …).

Now, although these systems of arithmetic are nontrivial, I really ought to give some mathematical reasons why anybody would be interested in them at all. I can give several profound reasons for the last one, which I will skip on the grounds that you can find them elsewhere; it boils down to this: this system is the system of truth values in classical logic (Boolean algebra). I don't even have to tell you how to interpret 0, 1, and + (much less =) in this system; I'm simply using them with their standard meanings in this context.

So now that you see that 1 + 1 = 3 in Boolean algebra, I need to turn around (as meta-contrarian) and explain why it is still silly. One reason is that nobody doing Boolean algebra (or even the slightly less trivial system of arithmetic based on 2 = 3) should ever want to write "3"; they should just write "1" (or "2" in the other system) instead. Another reason is that you shouldn't just throw "1 + 1 = 3" or even "1 + 1 = 1" out without explanation; the default meanings of those terms are in Peano arithmetic (or an extension thereof), not Boolean arithmetic. So in a general context, you really ought to say "1 + 1 = 3 in Boolean arithmetic", or something like that, instead. Just saying "1 + 1 = 3" and expecting people to know what the heck you're talking about is, well, silly.

I have no idea if that's what I was thinking in September, but that's what I thought of now. I hope that you like it.

Edit: Read-o fixed.

Replies from: RobinZ
comment by RobinZ · 2011-01-25T18:12:05.070Z · LW(p) · GW(p)

1 is still the successor of 0 and 3 is still the successor of the successor of 2 [you wrote 1 here, but I understand that this was a typo], where 0 is the additive identity

I wrote "successor of the successor of" - 3 is the successor of 2, which is the successor of 1. But I understand that this was a typo. :P

But yes, I enjoyed that. Thank you.

Replies from: TobyBartels
comment by TobyBartels · 2011-01-25T18:14:40.803Z · LW(p) · GW(p)

Ha, that would be a reado!

But seriously, I should have read that again. I got it in my head that you had done this while I spent time planning my response and forgot to verify.

comment by Orfell · 2010-09-14T07:41:10.131Z · LW(p) · GW(p)

I have a strong urge to signal my difference from the Less Wrong crowd. Should I be worried that all my positions may be just meta^2 contrarianism?

comment by multifoliaterose · 2010-09-13T22:03:24.671Z · LW(p) · GW(p)

people get deep personal satisfaction from arguing the positions even when their arguments are unlikely to change policy

I very much wish that intellectual debate was more effectiveness-oriented in general. I myself try to refrain from arguing about things that don't actually matter or that I can't hope to change (not always successfully).

comment by Catnip · 2015-03-05T19:19:38.642Z · LW(p) · GW(p)

I have a style question. Are there less grating ways to write gender neutral texts?

I, to my great surprise, was irritated to no end by "ey" and "eir". I always stumbled when reading them. I dislike them and think "he/she" or "they" may be more natural and cause less stumbling when reading the article.

So far, I am against all the invented gender-neutral pronouns. Most of them sound strange ("ey" and "eir" look like a typo or a phonetic imitation of a deep Southern accent, "xe" and "xir" use an "x" sound and are simply painful to pronounce).

As of now, I am willing to sacrifice gender neutrality in texts in favor of readability.

Replies from: g_pepper, MarkusRamikin
comment by g_pepper · 2015-03-05T20:00:21.228Z · LW(p) · GW(p)

Technically, "he" is perfectly acceptable for gender neutral texts. Merriam-Webster states that "he" can be "used in a generic sense or when the sex of the person is unspecified".

However, to avoid the appearance of non-neutral text, I usually use "he/she", "his/her", etc. "They" or "their" can be used, but these are not really appropriate when referring to a singular antecedent, so I quite often use "his/her" rather than "their". Another technique that you see frequently and that I sometimes use is to use "he" sometimes and "she" other times. As long as these more or less balance out in your text, you should be OK from a neutrality standpoint.

Any of these alternatives is preferable IMO to "ey" and "eir".

Replies from: None
comment by [deleted] · 2015-03-05T21:01:08.649Z · LW(p) · GW(p)

The Eir of Slytherin has opened the Chamber of Socrates...

comment by MarkusRamikin · 2015-03-05T19:36:40.524Z · LW(p) · GW(p)

If you dislike zes, xes and eys and find them horrible little abominations that have no place among good and decent words, and suspect they were meant to trick us into unknowingly saying things that count as worship of Cthulhu...

...oh, wait, that's me. Let's try again.

If you dislike zes, xes and eys, then using "they" seems to me the best solution if you care about being gender-neutral.

comment by PhilGoetz · 2014-12-31T17:43:15.981Z · LW(p) · GW(p)

That suggests people around that level of intelligence have reached the point where they no longer feel it necessary to differentiate themselves from the sort of people who aren't smart enough to understand that there might be side benefits to death.

This is an interesting hypothesis, but applying it to LessWrong requires that the LW community has a consensus on how people rank by intelligence, that that consensus be correct, and that people believe it is correct. My impression is that everybody thinks they're the smartest person in the room, and judges everyone else's intelligence by how much they agree. I don't believe there is any accurate LW consensus on the intelligence of its members. Person X will always rate people of similar intelligence to perself as having the highest intelligence.

Replies from: Vaniver
comment by Vaniver · 2014-12-31T19:03:07.887Z · LW(p) · GW(p)

My impression is that everybody thinks they're the smartest person in the room, and judges everyone else's intelligence by how much they agree.

I think this is generalizing from one example; I've certainly met people who didn't think they were the smartest person in the room, either because they're below median intelligence and reasonably expect that most people are smarter than them, or because, even though they're above median, they've met enough people visibly smarter than them. (I've been in rooms where I wasn't the smartest person.)

I suspect that people may not be very good at ranking, and are mostly able to put people in buckets of "probably smarter than me," "about as smart as me," and "probably less smart than me" (that is, I think the 'levels below mine' blur together similarly to how the 'levels above mine' do).

I also suspect that a lot of very clever people think that they're the best at their particular brand of intelligence, but then it's just a question of self-awareness as to whether or not they see the reason they're picking that particular measure. I can recall, as a high schooler, telling someone at one point "I'm the smartest person at my high school" and then having to immediately revise that statement to clarify 'smartest' in a way that excluded a friend of mine who definitely had more subject matter expertise in several fields and probably had higher g but had (I thought, at least) a narrower intellectual focus.

comment by gwern · 2014-10-26T22:34:03.027Z · LW(p) · GW(p)

Ebola has offered a recent nice example of the triad. Mainstream: "be afraid, be very afraid"; contrarian: "don't be so gullible, why, hardly any more people have died from Ebola than have died from flu/traffic accidents/smoking/etc"; meta-contrarian: "what is to be feared is a super-lethal disease escaping containment & killing many more millions than the normal flu or traffic death toll".

Replies from: army1987, Confusion
comment by A1987dM (army1987) · 2014-10-27T16:35:25.580Z · LW(p) · GW(p)

meta-contrarian: "what is to be feared is a super-lethal disease escaping containment & killing many more millions than the normal flu or traffic death toll".

Meh. Mankind survived the mad cow, the SARS, the bird flu and the swine flu hardly scathed; why should it be different this time around?

Replies from: gwern
comment by gwern · 2014-10-27T21:34:03.058Z · LW(p) · GW(p)

  1. human deaths are not irrelevant; a million deaths != no deaths.
  2. pandemics are existential threats, which can drive species extinct; I trust you understand why 'mad cow, the SARS, the bird flu and the swine flu' are not counter-arguments to this point.

Replies from: army1987
comment by A1987dM (army1987) · 2014-10-28T08:20:05.914Z · LW(p) · GW(p)

I trust you understand why 'mad cow, the SARS, the bird flu and the swine flu' are not counter-arguments to this point.

No, I don't.

EDIT: Anthropics?

comment by Confusion · 2018-05-20T09:01:08.445Z · LW(p) · GW(p)

In that triad the meta-contrarian is broadening the scope of the discussion. They address what actually matters, but that doesn't change that the contrarian is correct (well, a better contrarian would point out that the number of deaths due to Ebola is far less than in any of those examples, and that Ebola doesn't seem a likely candidate to evolve into something causing an epidemic) and that the meta-contrarian has basically changed the subject.

comment by Carinthium · 2010-11-10T22:33:54.647Z · LW(p) · GW(p)

BTW, I'm not actually that intelligent (IQ about 92 or 96 if I remember right) but pretending to adopt a meta-contrarian position might be a useful social tactic for me. Any advice from those who know the area on how to use it?

Replies from: HonoreDB, epursimuove
comment by HonoreDB · 2011-01-08T03:20:04.616Z · LW(p) · GW(p)

Advocate for the obvious position using the language and catchphrases of its opponents. I remember once saying, "Well, have we ever tried blindly throwing lots of money at the educational system?" Everyone agreed that this was a wise and sophisticated thing to say, even though I was by far the least knowledgeable person in the room on the subject and was just advocating the default strategy for improving public schools. Other examples:

"Greed is good."

"The chief virtue of a $professional is $vice."

"I'm a tax-and-spend liberal, and I think there should be much more government regulation. For example, the sad truth is that the realities of medical care require the existence of death panels, and I'd rather have them run by government bureaucrats than corporate accountants."

Replies from: glenra
comment by glenra · 2011-12-24T16:10:56.347Z · LW(p) · GW(p)

I remember once saying, "Well, have we ever tried blindly throwing lots of money at the educational system?"

Kansas City was one of the more notable examples of having tried that; it didn't work out well: http://www.cato.org/pubs/pas/pa-298.html

comment by epursimuove · 2013-11-30T22:39:21.368Z · LW(p) · GW(p)

I'm not actually that intelligent (IQ about 92 or 96 if I remember right)

This seems quite unlikely given your reasonably high-quality posting history. Is this number from a professionally administered test? Do you have a condition like dyslexia or dyscalculia that impairs specific abilities but not others?

Replies from: Carinthium
comment by Carinthium · 2013-12-01T02:03:04.190Z · LW(p) · GW(p)

I have Asperger's Syndrome, which affects things like this. It probably has something to do with it.

comment by [deleted] · 2010-09-14T15:06:17.888Z · LW(p) · GW(p)

I wonder if this means we should place more weight on opinions that don't easily compress onto this contrarianism axis, since they're less likely to be rooted in signalling/group affiliations, and more likely to have a non-trivial amount of thought put into them.

Replies from: AlexanderRM
comment by AlexanderRM · 2015-03-25T05:53:06.008Z · LW(p) · GW(p)

Another thing to take away from this is that we should be wary of any system that categorizes opinions based on sociology rather than direct measures of their actual truth. Contrarians and meta-contrarians give similar explanations for why they occupy their levels: each points out the flaws of the level below.

comment by Matt_Simpson · 2010-09-14T21:50:24.866Z · LW(p) · GW(p)

I think about the counter-signaling game a bit differently. Consider some question that has a binary answer - e.g. a yes/no question. Natural prejudices or upbringing might cause most people to pick, say, yes. Then someone thinks about the question and for reason r1 switches to no. Someone else who agrees with r1 then comes up with reason r2, and switches back to yes. Then r3 causes a switch back to no, ad infinitum.

Even though the conclusion at each point in the hierarchy is indistinguishable from a conclusion somewhere else in the hierarchy, the reason someone holds their conclusion still separates them from people on other levels with the same conclusion. So the reasoning is the signal.

For example, consider the God question. The hierarchy might go something like this:

  1. Believe in God by default
  2. Don't believe because of the character of the typical believer
  3. Believe because character of believers is irrelevant to God's existence
  4. Don't believe because can't find a reason to believe (burden of proof on believers)
  5. Believe because of design argument
  6. Don't believe because of flaw in design argument
  7. Believe because of evidence
  8. Don't believe because of Occam's razor

Someone on the 8th level of the hierarchy doesn't need to worry about being confused with someone on the 4th level, since the 4th level doesn't properly understand Occam's razor and couldn't use it as a reason for not believing.
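
As a minimal sketch of this structure (my own illustration, not anything from the comment itself; the level labels paraphrase the list above), the parity point is easy to make explicit in a few lines of Python: the conclusion depends only on whether the level is odd or even, so levels two apart are indistinguishable by conclusion alone, and only the reason carries the signal.

```python
# Conclusions alternate with the parity of the level; the reason,
# not the answer, is what distinguishes one level from another.
reasons = [
    "default belief",                        # 1. believe
    "character of the typical believer",     # 2. don't believe
    "character is irrelevant to existence",  # 3. believe
    "burden of proof on believers",          # 4. don't believe
    "design argument",                       # 5. believe
    "flaw in the design argument",           # 6. don't believe
    "evidence",                              # 7. believe
    "Occam's razor",                         # 8. don't believe
]

def conclusion(level: int) -> str:
    """Odd (1-indexed) levels believe; even levels don't."""
    return "believe" if level % 2 == 1 else "don't believe"

for level, reason in enumerate(reasons, start=1):
    print(f"{level}. {conclusion(level)} -- reason: {reason}")
```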

Thinking about intellectual signalling like this, I definitely take pleasure in being higher in the hierarchy - i.e. being a counter-counter-counter-...-counter-signaler. And I also find it disturbing when someone has a reason that I hadn't considered yet. They're farther up than me!

Replies from: ChristianKl, nick012000
comment by ChristianKl · 2010-09-17T11:03:36.481Z · LW(p) · GW(p)

Nassim Taleb makes an argument that he believes in God by default, and he is widely seen as a rational person. I don't think it makes sense to see his position as lower in the hierarchy than people who believe based on the design argument.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-09-17T12:41:16.279Z · LW(p) · GW(p)

His belief by default is based on some sort of argument, not unthinking acceptance of whatever his parents told him. In other words, his "default belief" is not the same as my hierarchy's "default belief."

comment by nick012000 · 2010-09-29T10:01:31.853Z · LW(p) · GW(p)

So, where would "Believe because of generalised Pascal's Wager" be on your hierarchy? ;)

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-09-29T21:22:36.041Z · LW(p) · GW(p)

I'm not sure what the "generalized" is doing, but the normal Pascal's wager would probably be right before or right after the design argument.

comment by ikrase · 2013-07-07T02:45:08.102Z · LW(p) · GW(p)

6) Worth a footnote: I think in a lot of issues, the original uneducated position has disappeared, or been relegated to a few rednecks in some remote corner of the world, and so meta-contrarians simply look like contrarians. I think it's important to keep the terminology, because most contrarians retain a psychology of feeling like they are being contrarian, even after they are the new norm. But my only evidence for this is introspection, so it might be false.

Deserves MORE than a footnote.

comment by A1987dM (army1987) · 2012-02-28T23:41:32.932Z · LW(p) · GW(p)

conservative / liberal / libertarian

Liberal and libertarian don't mean the same thing in Europe as in America; keep that in mind when writing for international audiences. (Very roughly speaking, a European liberal is a moderate version of an American libertarian, and an American liberal is a moderate version of a European libertarian.)

Replies from: steven0461
comment by steven0461 · 2012-02-29T00:25:38.027Z · LW(p) · GW(p)

liberal [doesn't] mean the same thing in Europe as in America

Do European and American liberals advocate different policies, or is it just that the political spectrum in both places is different so the same policies appear at different relative positions?

libertarian [doesn't] mean the same thing in Europe as in America

As far as I can tell, while this was true in the 19th century, Europe has almost completely adopted the American use of the word. Here are some examples (is there a way to get markdown to work with links that end in parentheses?):

Replies from: army1987, thomblake, dbaupp
comment by A1987dM (army1987) · 2012-02-29T10:49:09.229Z · LW(p) · GW(p)
liberal [doesn't] mean the same thing in Europe as in America

Do European and American liberals advocate different policies, or is it just that the political spectrum in both places is different so the same policies appear at different relative positions?

If I understand correctly, in America liberal is essentially a synonym of ‘(moderate) left-winger’, and hence the antonym of conservative or ‘(moderate) right-winger’ with respect to social values, though American liberals are often in favour of greater economic regulation (e.g. the US Democratic Party); in Europe, by contrast, liberals are those who favour greater economic freedom, though they are often conservative with respect to social values (e.g. the Italian centre-right). Among mainstream parties, it appears to me that both in Europe and in America the political spectrum concentrates along a line with positive slope in the Political Compass, i.e. those who value free capitalism also value traditional social values, and those who value economic equality also value social freedom (with liberal referring to different directions along that line in Europe and in America); but Europe has (or should I say “used to have”? I was not aware of the parties you mentioned) fewer extremists east of that line, and America has fewer extremists west of it, AFAICT.

(is there a way to get markdown to work with links that end in parentheses?)

http://en.wikipedia.org/wiki/Libertarian_Party_%28Netherlands%29 (28 and 29 being the hex codes for ( and ) respectively).

Replies from: wallowinmaya, MugaSofer
comment by David Althaus (wallowinmaya) · 2012-02-29T10:54:06.195Z · LW(p) · GW(p)

FWIW I'm from Germany and tend to agree with the above comment.

comment by MugaSofer · 2013-03-06T11:59:28.025Z · LW(p) · GW(p)

(is there a way to get markdown to work with links that end in parentheses?)

http://en.wikipedia.org/wiki/Libertarian_Party_%28Netherlands%29 (28 and 29 being the hex codes for ( and ) respectively).

Would have upvoted just for this.

Replies from: wedrifid
comment by wedrifid · 2013-03-06T15:59:54.252Z · LW(p) · GW(p)

Alternatively, note that the escape character in markdown is "\". Putting that before the (first) closing parenthesis works fine.
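
As a minimal sketch of both workarounds (the helper name `markdown_safe_url` is hypothetical, just for illustration), in Python:

```python
def markdown_safe_url(url: str) -> str:
    """Percent-encode parentheses so the URL can sit inside a
    [text](url) markdown link without closing the link early."""
    return url.replace("(", "%28").replace(")", "%29")

print(markdown_safe_url(
    "http://en.wikipedia.org/wiki/Libertarian_Party_(Netherlands)"))
# -> http://en.wikipedia.org/wiki/Libertarian_Party_%28Netherlands%29

# The backslash alternative is applied in the markdown source itself:
# [link](http://en.wikipedia.org/wiki/Libertarian_Party_\(Netherlands\))
```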

comment by thomblake · 2012-02-29T15:53:15.385Z · LW(p) · GW(p)

liberal [doesn't] mean the same thing in Europe as in America

Do European and American liberals advocate different policies, or is it just that the political spectrum in both places is different so the same policies appear at different relative positions?

In short, 'liberal' in the US is merely the opposite of 'conservative', matching the usage of "He was liberal with his praise"; 'liberal' in Europe for the most part retains the meaning specified by 'classical liberal' in the US - "in favor of individual liberty".

Replies from: steven0461
comment by steven0461 · 2012-02-29T19:16:04.101Z · LW(p) · GW(p)

It looks to me like it means something more specific than just the opposite of "conservative". For example, this article has a header "opposition to socialism". I'm aware that US liberals are less conservative than the US spectrum and that European liberals are more in favor of individual liberty than the European spectrum, but before concluding they're different, you'd first need to rule out the hypothesis that it's because the US spectrum is more conservative and the European spectrum is less in favor of individual liberty.

ETA: I don't think this is the whole explanation, but I think it's a large part of the explanation.

Replies from: thomblake
comment by thomblake · 2012-03-01T14:05:42.281Z · LW(p) · GW(p)

The thing is, "less conservative" doesn't actually mean anything in the US. "Conservative" and "liberal" are just pointers to the Republican and Democratic parties, respectively, which in turn are semi-permanent coalitions of people with vastly different (and often incompatible) ideologies, that end up being used for color politics. There isn't really a spectrum, but you can pretend there is - if you have 390 Green beliefs and 100 Blue beliefs, then you're clearly a Moderate Green (aquamarine?).

Whereas in most of Europe, parties actually represent ideologies to some extent, and so ideological terms don't get corrupted so much in favor of talking about the platform of a party. This is often because temporary coalitions happen between political parties, instead of within them.

England is a notable exception to this - it has more-or-less two parties, and they pretend to fall on a "political spectrum" like the US parties; thus, they even tend to echo US meaningless political rhetoric.

Replies from: army1987
comment by A1987dM (army1987) · 2012-03-01T15:36:46.783Z · LW(p) · GW(p)

Whereas in most of Europe, parties actually represent ideologies to some extent, and so ideological terms don't get corrupted so much in favor of talking about the platform of a party. This is often because temporary coalitions happen between political parties, instead of within them.

At least in Italy, “It's a complete mess” would be a more accurate (though less precise) description than that.

Replies from: thomblake
comment by thomblake · 2012-03-01T16:16:23.216Z · LW(p) · GW(p)

Agreed.

comment by dbaupp · 2012-03-01T13:43:56.259Z · LW(p) · GW(p)

Do European and American liberals advocate different policies, or is it just that the political spectrum in both places is different so the same policies appear at different relative positions?

FWIW, in Australia, there are two main political parties, Liberal and Labor. The Liberals are reasonably close to the Republicans (from what I can glean of US politics), and "liberals" (US meaning) seem to align with Labor or one of the other parties.

is there a way to get markdown to work with links that end in parentheses

A backslash in front of the offending punctuation should fix it.

comment by ata · 2010-09-14T00:41:31.896Z · LW(p) · GW(p)

Thus Eliezer's title for this mentality, "Pretending To Be Wise".

Have we broadened that term to refer to... well, lowercase pretending-to-be-wise in general? In the original post, he used it specifically to refer to those who try to signal wisdom by neutrality. (Though I did notice he used it in the broader sense in HPMoR. Is it thus officially redefined?)

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2010-09-14T19:14:31.975Z · LW(p) · GW(p)

Yeah, I was thinking of the HPMoR usage. It's a good phrase, and it would be a shame not to use it.

comment by Guillaume Charrier (guillaume-charrier) · 2023-03-03T14:34:00.005Z · LW(p) · GW(p)

Death universally seems bad to pretty much everyone on first analysis, and what it seems, it is.

How can you know? Have you ever tried living a thousand years? Has anybody? If you had a choice between death and infinite life, where infinite does mean infinite, so that your one-billion-year birthday is only the sweet beginning of it, would you find this an easy choice to make? I think that's a big part of the point of people who argue that no - death is not necessarily a bad thing.

To be clear, and because this is not about signalling: I'm not saying I would immediately choose death. I'm just saying: it would be an extraordinarily difficult choice to make.

comment by Yustynn Panicker (yustynn-panicker) · 2021-11-20T07:44:13.044Z · LW(p) · GW(p)

Funnily enough, triads have become a meme format I've seen around recently (https://knowyourmeme.com/memes/iq-bell-curve-midwit).

comment by Lavender (Kevin92) · 2016-01-21T23:09:00.568Z · LW(p) · GW(p)

This triad was missed:

"Muslims are terrorists!" / "Islam is a religion of peace." / "Religion is problematic in general but Islam is the worst and I can back that claim up with statistics I read on Sam Harris' blog."

Replies from: ChristianKl, username2
comment by ChristianKl · 2017-07-04T14:41:59.879Z · LW(p) · GW(p)

Gwern's TERRORISM IS NOT ABOUT TERROR seems to me like a better candidate for the third.

comment by username2 · 2016-04-11T23:01:04.757Z · LW(p) · GW(p)

For the third slot I'd say "religious squabbles are the wrong problem to be thinking about".

comment by Jacob Falkovich (Jacobian) · 2014-12-28T03:56:25.148Z · LW(p) · GW(p)

It's always a bit of a shock when you're the contrarian and you discover someone meta-contrarianizing you on the outside lane. For example, here's an interesting triad I just recently became aware of:

Base: monogamy is assumed without discussion; cheating is the end of a relationship, unless maybe you confess and swear never to do it again.

Contrarian: an open/poly relationship is agreed upon after discussion; it's not cheating if there's no lying.

Meta-con: non-exclusivity is assumed, no discussion. Cheating is whatever, just don't tell me about it.

I held the first position since I was a teenager, the second since my early twenties. The third one I have recently heard from a couple of young ladies in New York, where polyamory is quite popular. While it's hard for me to see rationally why the third option would be better (don't ask don't tell vs. open agreement), I find the meta-contrarianism of it extremely seductive... Yvain, you may have just saved my next relationship.

comment by David_J_Balan · 2010-09-15T02:31:58.796Z · LW(p) · GW(p)

There is a neat paper on this by Feltovich, Harbaugh, and To called "Too Cool for School? Signaling and Countersignaling."

http://www.jstor.org/pss/3087478

Replies from: toro
comment by toro · 2011-09-03T05:48:43.224Z · LW(p) · GW(p)

A little more from Harbaugh's home page http://www.bus.indiana.edu/riharbau#CS. Includes the Economist puff piece and an unedited version with some fun-but-unconvincing examples.

Fun fact: Harbaugh also made a searchable Chinese dictionary. (zhongwen.com)

comment by Mercy · 2010-09-14T12:50:58.732Z · LW(p) · GW(p)

I'm a little confused, what purpose does this distinction serve? That people like to define their opinions as a rebellion against received opinion isn't novel. What you seem to be saying is: defining yourself against an opinion which is seen as contrarian sends a reliably different social signal to defining yourself against an opinion which is mainstream -- is that a fair assessment? Because this only works if there is a singular, visible mainstream, which is obviously available in fashion but rare in the realm of ideas.

Moreover, if order-of-contrariness doesn't convey information, I can't see any situation in which it would be helpful to indicate a position's order, where it wouldn't be just as easy and far more informative to point out the specific chain of its controversy.

In any case I take some issue with a bunch of your examples.

Firstly, on feminism: the obvious mainstream controversy/meta-controversy dynamic for misogyny is between second- and third-wave feminism in academia, and between "all sex is rape" and "pole dancing is empowering/Madonna is a feminist icon" in the media. Picking an obscure internet phenomenon closer to the starting point is blatant cherry-picking.

Similarly, the Bad Samaritans/New Development argument has a lot more currency than the "aid is the problem" one, but again that's further from both positions. For that matter the same applies to libertarianism and its real Laius, socialism.

The number of global warming skeptics who jumped straight from "it's not happening" to "well we didn't do it" to "well we can't do anything about it without doing more harm than good" should also, combined with the overlap in arguments between self-identified MRAs and younger misogynists of the "straight white Christian men are the most oppressed minority" variety, give us a bit of pause. If there's any use to identifying meta-contrarian positions, it has to be in distinguishing between genuine attempts to correct falsehoods made in overeager argument with the old mainstream, and sophisticated apologetics for previously exploded positions.

On second thought, convincing as I find the Stern report, enough economists argued against reducing carbon emissions on cost-benefit grounds from the beginning that the meta position deserves honest consideration. I'd like to propose instead deism as the canonical example of bad-faith apologia in meta-contrarianist drag, and third-wave feminism as the honest position. Is this suitably uncontroversial?

Replies from: glenra, Vladimir_M, Yvain, cousin_it
comment by glenra · 2010-09-21T18:11:48.146Z · LW(p) · GW(p)

The number of global warming skeptics who jumped straight from "it's not happening" to "well we didn't do it" to "well we can't do anything about it without doing more harm than good" should also...give us a bit of pause.

Actually, that move is perfectly consistent with real skepticism applied to a complex assertion.

To see why, let's consider a different argument. Suppose a True Believer says we should punish gays or disallow gay marriage "because God hates homosexuality". You and I are skeptical that this assertion is rationally defensible so we attack it at what seems like the obvious first link in the logical chain. We say "I doubt that god exists. Prove to me that god exists, and then maybe we'll consider your argument." At this point you can divide the positions into:

"god hates X"/god doesn't exist

Now let us suppose TB actually does it. He does prove that god exists. Does this mean that we skeptics immediately have to accept his entire chain of reasoning? Of course not! We jump to the next weak link. To establish the original claim, one would need to prove god exists and is benevolent and wrote the bible and meant those passages in the way TB interprets as applied to our current situation. Anything less, and the original assertion remains Not Proven.

If any link in the chain fails, we don't have to accept the compound assertion "God hates X, therefore we should do Y". We can reasonably express skepticism towards any link that hasn't been proven until the whole chain is sound. Right?

Now returning to global warming, the larger claim that is implied by saying things like "global warming is real" is "greenhouse gases are warming the globe; this process will cause net-bad outcomes if we do nothing and net-less-bad outcomes (including all costs and opportunity costs) if we do X, therefore we should do X". The skeptical position is that not all the links in that chain of reasoning are strong and the warmists need to solidify a few weak links. I don't see how disagreeing over which link in the logical chain is weakest or focusing on the next weak link when one formerly-weak link is strengthened constitutes "sophisticated apologetics". I would have rather called it "rationalism".

Replies from: Mercy
comment by Mercy · 2010-09-23T10:11:50.149Z · LW(p) · GW(p)

This is a great point that's making me revise my position on some right-wing commentators. Still, I'm struggling to think of any actual examples of this behavior in action: we don't actually tell religious people who believe wrong things "well, god ain't real, deal with it". We point out how their assertions are incompatible with their own teachings, with the legal system, with scientific findings, etc. We don't keep all the flaws we see in their position back in reserve.

Moreover, most of the serious commentators on the skeptical side of the issue argued only one of the points in question, whether it was the statistics showing warming or the economics implied by it or (cue rim-shot) sunspots; it's only journalists and politicians who skipped from one to the other, which is where I got the impression they'd only looked at the issue long enough to find a contrarian position.

Replies from: glenra
comment by glenra · 2011-12-24T15:43:16.590Z · LW(p) · GW(p)

I'm struggling to think of any actual examples of this behavior in action

If you've ever said or thought "Okay, just for the sake of argument, I'll assume your point X is correct..." you were holding a position back in reserve.

One typical example is arguing with a religious nut that what he's saying is incompatible with the teachings in his own holy book. Suppose he wins this argument (unlikely, I know, but bear with me...) and demonstrates that you were mistaken and no, his holy book really does teach that we should burn scientists as witches. Do you immediately conclude that yes, we should burn scientists as witches? No, because you don't actually hold in high esteem the teachings in his holy book.

comment by Vladimir_M · 2010-09-14T19:13:12.371Z · LW(p) · GW(p)

Mercy:

Because this only works if there is a singular, visible mainstream, which is obviously available in fashion but rare in the realm of ideas.

However, it seems to me that such a mainstream does exist. Compared to the overall range of ideas that have been held throughout the history of humanity, and even the overall range of ideas that I believe people could hold without being crazy or monstrous, the range acceptable in today's mainstream discourse looks awfully narrow to me. It also seems very narrow by historical standards -- for example, when I look at the 19th century books I've read, I see an immensely greater diversity of ideas than one can see from the modern authors that occupy a comparable mainstream range. (This of course doesn't apply to hard sciences, in which the accumulation of knowledge has a monotonic upward trend.)

Of course, like every human society, ours is also shaken by passionate controversies. However, most of those that I observe in practice are between currents that are overall very similar from a broader perspective.

Replies from: Mercy
comment by Mercy · 2010-09-15T00:22:21.937Z · LW(p) · GW(p)

Well I can see that in certain areas, but it depends on where you look. The range of held opinions on the construction of gender, criminal punishment and both the nature and the contents of history is much broader than one hundred years ago. The range of opinions on the morality of war is far narrower.

In any case, I meant mainstream in the sense that top 40 is mainstream, not in the sense that music is mainstream. Perhaps orthodoxy would be a better word? In fashion there is usually a single current orthodoxy about how people should dress, so it's easy to identify these circles of heterodoxy and reactionism. Other issues show multiple competing orthodoxies, each of which appears contrary to the other.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-09-15T01:33:32.884Z · LW(p) · GW(p)

Mercy:

The range of held opinions on the construction of gender, criminal punishment and both the nature and the contents of history is much broader than one hundred years ago.

Frankly, I disagree with that statement so deeply that I'm at a loss how to even begin my response to it. Either we're using radically different measures of breadth, or one (or both?) of us has had a grossly inadequate and unrepresentative exposure to the thought of each of these epochs.

Yes, certain ideas that were in the minority back then have been greatly popularized and elaborated in the meantime, and one could arguably even find an occasional original perspective developed since then. However, it seems evident to me that by any reasonable measure, this effect has been completely overshadowed by the sheer range of perspectives that have been ostracized from the respectable mainstream during the same period, or even vanished altogether.

In fashion there is usually a single current orthodoxy about how people should dress, so it's easy to identify these circles of heterodoxy and reactionism. Other issues show multiple competing orthodoxies, each of which appears contrary to the other.

But in the matters of opinion, there is also a clearly defined -- and, as I've argued, nowadays quite narrow -- range of orthodoxy, and it's common knowledge which opinions will be perceived as contrarian and controversial (if they push the envelope) or extremist and altogether disreputable (if they reach completely outside of it). I honestly don't see on what basis you could possibly argue that the orthodoxy of fashion is nowadays stricter and tighter than the orthodoxy of opinion.

Replies from: CronoDAS
comment by CronoDAS · 2010-09-15T02:49:07.072Z · LW(p) · GW(p)

Mercy:

The range of held opinions on the construction of gender, criminal punishment and both the nature and the contents of history is much broader than one hundred years ago.

Frankly, I disagree with that statement so deeply that I'm at a loss how to even begin my response to it. Either we're using radically different measures of breadth, or one (or both?) of us has had a grossly inadequate and unrepresentative exposure to the thought of each of these epochs.

Two hundred years ago, then?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-09-15T06:09:49.252Z · LW(p) · GW(p)

Two hundred years ago, the institutions were very different, and there was much less total intellectual output than a century ago, so it's much harder to do a fair comparison because it's less clear what counts as mainstream and significant.

However, the claim is still flat false at least when it comes to criminal punishment. In fact, in the history of the Western world, the period of roughly two hundred years ago was probably the very pinnacle of the diversity of views on legal punishment. On the one extreme, one could still find prominent advocates of brutal torturous execution methods like the breaking wheel (which were occasionally used in some parts of Europe well into the 19th century), and on the other, out-and-out death penalty abolitionists. (For example, the Grand Duchy of Tuscany abolished the death penalty altogether in 1786, and it was abolished almost completely in Russia around the mid-18th century.) One could also find all sorts of in-between views on all sides, of course. Admittedly, one would be hard-pressed to find someone advocating a prison system of the sort that exists nowadays, but that would have been economically impossible back in those far poorer times (modern prisons cost tens of thousands of dollars per prisoner-year, not even counting the cost of building them).

Depending on what exactly is meant by "the nature and the contents of history," one could certainly point out many interesting perspectives that could be found 200 years ago but no longer today. That, however, is a very complex question. As for gender, well, I'd better not go into that topic. I'll just point out that people have been writing about these matters since the dawn of history, and it's very naive (though sadly common nowadays) to believe that only our modern age has managed to achieve accurate insight and non-evil attitudes about them.

Replies from: wedrifid, Mercy
comment by wedrifid · 2010-09-15T08:03:56.160Z · LW(p) · GW(p)

As for gender, well, I'd better not go into that topic. I'll just point out that people have been writing about these matters since the dawn of history, and it's very naive (though sadly common nowadays) to believe that only our modern age has managed to achieve accurate insight and non-evil attitudes about them.

Dawn of history? Now I'm imagining uncovering writing on the wall of caves: "Why women make better hunters" and expressing indignation at under-representation of females in cave paintings of battles.

Replies from: Vladimir_M, None
comment by Vladimir_M · 2010-09-15T16:14:00.181Z · LW(p) · GW(p)

What Constant said. I meant "history" in the narrow technical sense of the word, i.e. the period since the invention of writing.

comment by [deleted] · 2010-09-15T08:57:31.717Z · LW(p) · GW(p)

You're mixing up history with prehistory.

Replies from: wedrifid
comment by wedrifid · 2010-09-16T02:15:59.224Z · LW(p) · GW(p)

No I'm not. The counterfactual referred to writing, writing which incidentally happened to be a commentary on the quality of the historical record keeping. (It is not my position that the counterfactual is particularly likely - if anything the reverse.)

comment by Mercy · 2010-09-15T11:21:07.137Z · LW(p) · GW(p)

People still argue those things nowadays though. Any remotely salacious criminal story has hacks crawling out of the woodwork to gloat about how the perpetrators will be raped, and the current Attorney General has deliberately delayed the introduction of mechanisms to clamp down on the practice. For a long time one of the most popular proposals out of Britain's "let the public suggest policies" initiative was to send paedophiles to Iraq as human mine detectors.

And you're missing the major reason for the increase in the variety of criminal punishments, which is the increase in the number of non-violent crimes. I don't think I'll run too much risk of embarrassing myself if I suggest that mephedrone clinics weren't considered an alternative to jail time 100 years ago.

As to gender, I was under the impression that radically post- and anti-gender views like those expressed by Julie Bindel and Donna Haraway were novel; if there are 19th century authors with similar viewpoints I'd be happy to hear of them. Again this is an issue where I don't see any dead viewpoints, so even small increases in radicalness increase the general width of ideas held.

It strikes me though, from the prison issue, that our differences are mostly over what qualifies a belief as respectable. There are many beliefs that are no longer taken seriously by liberal academics; if that's what you mean by mainstream, then I agree the 19th century showed a much broader range of opinion than ours.

Getting back to my original point, just about everything in the OP is within the range of orthodoxy of public opinion, and everything except "Obama is a Muslim" within the academic one, and yet they can be modeled as contrary to one another.

Replies from: HumanFlesh
comment by HumanFlesh · 2010-09-15T12:15:22.403Z · LW(p) · GW(p)

Mephedrone clinics? Do you mean methadone clinics?

comment by Scott Alexander (Yvain) · 2010-09-14T19:38:28.156Z · LW(p) · GW(p)

I'm a little confused, what purpose does this distinction serve? That people like to define their opinions as a rebellion against received opinion isn't novel.

You're right, the examples were pretty cherry-picked.

My point was to show that, although we tend to celebrate our failure to be lured into holding contrarian positions for the sake of contrarianism, this can itself be a trap that we need to watch out for. I think the idea of meta-contrarian-ness is novel in a way the idea of contrarian-ness is not.

comment by cousin_it · 2010-09-14T14:23:02.127Z · LW(p) · GW(p)

Why do you want to define "genuine meta-contrarianness" based on correctness/merit? It will cause endless flamewars. Yvain's recipe, on the other hand, is relatively uncontroversial.

Replies from: Mercy
comment by Mercy · 2010-09-14T18:21:36.828Z · LW(p) · GW(p)

As far as I can see, it's uncontroversial because it doesn't add any information in the first place, compared to just including the norm in question when describing something as contrarian, which takes a similar number of words, less effort and is less subjective.

But I'm not suggesting double-contrarian opinions must be better than unreconstructed ones, rather that if they are distinguishable they should have different bottom lines: they shouldn't just be better arguments for the same thing. We see this in the race example: modern genetics recognises very different ethnic distributions to those of classical racialist science, or modern derivations thereof.

Replies from: cousin_it
comment by cousin_it · 2010-09-14T18:30:44.964Z · LW(p) · GW(p)

I think the post was a guideline to help you catch yourself when you write the bottom line of your position for signaling reasons (contrarian or meta-contrarian). If you never experience that problem, more power to you. I do have it and the post was helpful to me.

Replies from: Mercy
comment by Mercy · 2010-09-14T18:51:56.134Z · LW(p) · GW(p)

Hah, I'm sure I do. I guess the point then is that just because your position is counter-revolutionary doesn't mean you haven't adopted it out of rebelliousness. Um, assuming that revolutionary zeal as a potential source of bottom lines was taken for granted. I think I knew that already, if only through hatred of South Park-style antagonistic third-way-ism, and so have spent these last few responses training on straw.

comment by HughRistik · 2010-09-13T23:47:41.122Z · LW(p) · GW(p)

Are there also meta-meta-contrarians?

Replies from: Yvain, multifoliaterose, steven0461, Will_Newsome
comment by Scott Alexander (Yvain) · 2010-09-14T19:12:49.003Z · LW(p) · GW(p)

Maybe it's context dependent. If I am hanging around a lot of contrarians, I usually end up looking for a meta-contrarian position. If I'm hanging around a lot of meta-contrarians who I think aren't as smart as me, and those meta-contrarians are being really smug and annoying, I become meta-meta-contrarian. I fondly remember a period of my life when I went to my college's Objectivist club every week to argue vehemently against everything they said. I think that qualifies as meta-meta-contrarian if anything does.

comment by multifoliaterose · 2010-09-13T23:51:28.757Z · LW(p) · GW(p)

The game theory goes as deep as people are inclined to take it. In practice, I'm not sure.

Replies from: whpearson
comment by whpearson · 2010-09-14T00:11:47.819Z · LW(p) · GW(p)

I'm somewhat of one for peak oil.

What oil problem? > peak oil "we are doomed" > synthetic fuel created using nukes/humanity will find a way > likely to be a bumpy ride as alternatives take a while to ramp up in scale

comment by steven0461 · 2010-09-13T23:51:37.034Z · LW(p) · GW(p)

I was wondering the same thing. Scott Aaronson and Cosma Shalizi come to mind.

Replies from: Liron
comment by Liron · 2010-09-14T00:21:30.171Z · LW(p) · GW(p)

Since he doesn't present sophisticated meta-meta-arguments, to me it just seems like Scott's beliefs are harder to shift from contrarian to meta-contrarian.

Replies from: steven0461
comment by steven0461 · 2010-09-14T00:24:59.598Z · LW(p) · GW(p)

That sounds right, though maybe as you go more meta it just gets harder to distinguish between any level and the level two levels down.

comment by Will_Newsome · 2010-09-14T00:18:00.721Z · LW(p) · GW(p)

I don't mean to toot my own meta (especially as metaness isn't directly correlated with truth), but me with respect to cryonics. Carl Shulman? Michael Vassar? Most people who think about things? In general, people who think about any given topic more than the average LW poster are likely to be meta-LW-contrarian on that topic, for better or worse.

comment by TropicalFruit · 2022-05-19T05:00:12.142Z · LW(p) · GW(p)

Here's a good one:

Inflation is good because it gives everyone money / inflation is bad, deflation is good / small inflation may be necessary to offset human loss aversion

comment by EditedToAdd · 2018-04-15T15:22:04.149Z · LW(p) · GW(p)

A triad I just thought of today which seems definitely true:

Doesn't care to make small mistakes that might make him/her look somewhat silly / Cares to avoid every level of mistake to acquire the cleanest reputation possible / Doesn't care to make small mistakes that might make him/her look somewhat silly.

comment by akrates · 2017-07-03T19:36:07.226Z · LW(p) · GW(p)

A follow-up thought: This pattern seems to also work for life decisions, and not only for positions in debates or fashion choices. For example: A few years ago, I almost didn't take the offer to do a PhD at an Ivy League school, as opposed to a less highly ranked school, and to live in a mainstream popular city, as opposed to the middle of nowhere, because of my contrarianism. And then my meta-contrarianism kicked in, and I took the offer. I'm happy with the choice, but I do every once in a while have to remind myself of the fact that I consciously decided against my contrarianism, and sometimes I still wonder whether I should have gone with my contrarianism as opposed to my meta-contrarianism. (I.e. I sometimes still wonder whether I'm just buying into a naive narrative about 'good schools' and 'good cities' here.)

PS: This is my way of saying that I really really like Yvain's post. And I just realized it's already 7 years old.

comment by Entraya · 2014-01-03T10:56:10.481Z · LW(p) · GW(p)

I think the whole thing about dwelling on the negatives of our society is because there's a deeper level to the concerns. Like a sort of collective lack of something - a lack of Romantic relations to nature and society and such things - but without knowing where such things can be found. Just a basic yearning that shows up in the extremes of our modern society, which manifests in media and over-romanticized movies about how 'spiritually connected' Indians were, or the peace of Buddhists, or the good old ways; you know what I mean. The movie Avatar is basically a big blob of such wishes, and I'd be lying if I claimed that living as a blue cat person in a hippie heaven doesn't sound nice as fuck. There's a big drive underneath it that goes beyond countersignaling, and the amount of that is what separates the countersignaling hippies from those simply yearning to the extreme.

comment by Asymmetric · 2012-01-31T04:45:34.324Z · LW(p) · GW(p)

This is exactly how history is studied.

Historiography is the study of how historical opinions have changed over time. It begins with the Orthodox viewpoint, which is the first generally accepted account of the events. It is generally very biased because it comes about directly after the event has occurred, when feelings still run strong.

This Orthodox viewpoint is contrasted by several Revisionist viewpoints, which tend to make wildly different conclusions based upon new evidence in order to sell books (historical scandals are quite good for that). Sometimes a Revisionist viewpoint can become the new Orthodoxy if it has become entrenched in the public consciousness long enough.

Then there's Post-Revisionism, which, after the rancor has died down, aims to dispassionately weigh the evidence brought to the table by both the Revisionist viewpoints and the Orthodoxy (different Post-Revisionist conclusions arise from differing opinions on how reliable certain pieces of evidence are). While the Orthodoxy and especially the Revisionists tend to make strong statements about the controversy, Post-Revisionists rarely make statements that concede nothing to other viewpoints, and thus their arguments are "weaker", though Post-Revisionist opinions are seen generally as the least biased of the three.

The problem with the Post-Revisionist viewpoints is that, even though they don't arise from emotional attachment (or rejection of the same), they tend to have access to less evidence in total -- I mean, just look at all those Egyptologists. Or, really, anyone who wants to know about an ancient civilization.

comment by christina · 2011-10-05T05:29:41.146Z · LW(p) · GW(p)

I know this is an old post, but I wanted to ask a couple questions.

Can you clarify if this meta-contrarian hypothesis of human psychology makes predictions that distinguish it from other explanations for holding an idea to be true or communicating it to be true? I ask since from reading some of the comments, the classification of these triads seems like a fluid thing, and I can't think of anything offhand that might be used to constrain them. If you want to use your hypothesis merely to talk about the reasons for why confidence is assigned, do you think the ideas you've presented here can make more accurate predictions on that than those in an example journal article on psychology, such as this one by Kahneman and Tversky?

Also, I think it would be more helpful to depend only on examining the logic behind, and the evidence for, one's beliefs (ignoring how confident one feels about them) to determine whether they are right. You state:

You can't evaluate the truth of a statement by its position in a signaling game; otherwise you could use human psychology to figure out if global warming is real!

I strongly agree with this statement. Which is why I also want to know how the triads you propose help people to examine the flaws in their beliefs better than other psychological theories or hypotheses. For example, I might say that one should examine any belief closely, even if one feels a high degree of confidence in it, because level of confidence felt does not predict level of truth. This is a hypothesis about how confidence felt for a belief correlates to its truth (and if you want to be meta about it, it's a belief that I currently believe to be true).

In summary, I would like to know 1.) how you use the hypothesis you've given to make predictions and 2.) how this can help people identify false high confidence beliefs better than other possible hypotheses (such as the one I gave). And if anyone besides Yvain can answer these questions, I would welcome your input as well.

comment by UnclGhost · 2011-09-05T08:07:35.577Z · LW(p) · GW(p)

Great post. I've had a similar idea for a while but didn't realize just how far it could be generalized.

I especially noticed this idea while reading C.S. Lewis' The Screwtape Letters, which seems to posit the hierarchy as being something like "Belief in Christianity because of social pressures / Disbelief in Christianity because who needs social pressures / Belief in Christianity because of comprehension of its 'true meaning' (or something)".

I guess when there are potentially a lot of layers of meta-contrarianism like in Matt_Simpson's example, that can easily lead to strawman arguments when you try to argue against a higher-level even (or odd) number as if it was a lower-level even (or odd) number.

comment by owevr · 2010-09-14T15:47:47.424Z · LW(p) · GW(p)

Great article. However, why do you call them "meta-contrarian", instead of "anti-contrarian"? I would not call something "meta-" unless it adds additional dimensions to the given context. For example, "meta-theory" is not about disputing particular theories but something totally different.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2010-09-14T19:49:03.985Z · LW(p) · GW(p)

I interpret meta- to mean "one level above"; thus for example Douglas Hofstadter's "meta-agnostic", someone who is agnostic about agnosticism, and your own mention of "meta-theory", a theory about a theory.

I use "meta-contrary" because it's a position deliberately taken to be contrary to a position deliberately taken to be contrary.

comment by CronoDAS · 2010-09-14T02:34:39.186Z · LW(p) · GW(p)

I advocate majoritarianism on most topics related to science.

Including nutrition.

I'll take Gary Taubes seriously when the NIH does.

Replies from: Kutta, gwern
comment by Kutta · 2010-09-14T05:50:37.522Z · LW(p) · GW(p)

All I can say is that the results of actual studies, and science generally, are what make it rationally possible to discern whether someone is pulling a Semmelweis or is simply a quack. You can definitely do better than the outside view if you're willing to expend at least some personal effort to investigate. Especially if a quick meta-glance at medicine (you can think of Hanson here, among others) suggests that governmental medical institutes and guidelines are a lot less trustworthy than what is usual in non-medical domains.

Replies from: CronoDAS
comment by CronoDAS · 2010-09-14T06:24:05.689Z · LW(p) · GW(p)

Yeah, they also tend to be inconsistent over time. Consider: butter or margarine? The mainstream view isn't very solid, but the non-mainstream views don't seem like they're any better either. (If they were better, then why aren't they mainstream yet?)

Replies from: Relsqui
comment by Relsqui · 2010-09-14T10:10:03.531Z · LW(p) · GW(p)

I think the butter thing, like a lot of very specific dietary concerns, is hard to settle popularly because the answer may not be the same for everyone. Carbohydrate intake is another good example of that phenomenon (I hesitate to even call it a "problem"--the problem is the alleged need for a universal answer). A lot of people who live relatively sedentary lifestyles take in a lot more carbs than they use, and might reasonably be advised to cut back. That does not make it good advice for, say, a bike commuter who's actually getting a reasonable amount of cardiovascular exercise.

comment by gwern · 2014-10-25T18:55:22.492Z · LW(p) · GW(p)

Mainstream: 'correlation=causation' (almost all of nutrition research); contrarian: 'correlation!=causation' (Taubes); meta-contrarian: 'ah, but really, correlation~=causation!'

Replies from: gwern
comment by gwern · 2017-06-23T15:22:21.824Z · LW(p) · GW(p)

Computer chess: 'AIs will never master tasks like chess because they lack a soul / the creative spark / understanding of analogies' (laymen, Hofstadter, etc.); 'AIs don't need any of that to master tasks like chess, just computing power and well-tuned search' (most AI researchers); 'but a human-computer combination will always be the best at task X because the human is more flexible and better at meta-cognition!' (Kasparov, Tyler Cowen).

Replies from: Vaniver, Decius
comment by Vaniver · 2017-07-01T01:15:49.065Z · LW(p) · GW(p)

3 has been empirically disproven at this point, I believe?

Replies from: arundelo
comment by arundelo · 2017-07-04T01:12:34.510Z · LW(p) · GW(p)

gwern on "centaurs" (humans playing chess with computer assistance):

Even by 2007, it was hard for anyone to improve, and after 2013 or so, the very best centaurs were reduced to basically just opening book preparation (itself an extremely difficult skill involving compiling millions of games and carefully tuning against the weakness of possible opponent engines), to the point where official matches have mostly stopped (making it hard to identify the exact point at which centaur ceased to be a thing at all).

comment by Decius · 2017-06-29T04:04:08.811Z · LW(p) · GW(p)

There will always be tasks at which better (Meta-)*Cognition is superior to the available amounts of computing power and tuning search protocols.

It becomes irrelevant if either humans aren't better than easily-created AI at that level of meta, or AIs go enough levels up to be a failure mode.

comment by Ethos Castelli (ethos-castelli) · 2022-05-27T21:21:29.357Z · LW(p) · GW(p)

I'm new here and I have a question. If a highly intelligent person holds/displays meta-contrarian views in a group without deliberately trying to announce them, are they still signalling?

Also isn't First World/Third World an outdated classification?

Replies from: Benito
comment by Ben Pace (Benito) · 2022-05-27T21:29:39.763Z · LW(p) · GW(p)

Also isn't First World/Third World an outdated classification?

Post is 12 years old.

Replies from: ethos-castelli
comment by Ethos Castelli (ethos-castelli) · 2022-05-31T11:18:02.648Z · LW(p) · GW(p)

AHH, I am quite new to the site, didn't see that.

comment by kosmokrator · 2022-01-04T09:31:11.109Z · LW(p) · GW(p)

IMHO a good way to spot Meta-Contrarianism is when proponents of Popperian/Deutsch epistemology don't recognize that 'no final say' also applies to their own - even epistemic - beliefs.

comment by Juli654 · 2021-02-11T14:23:02.470Z · LW(p) · GW(p)

At university I have met two types of professors. The first type will explain a new topic in a very academic and strictly professional way. The second type will explain the topic in plain language, in a very easy-to-understand way. If I look at it from the perspective of countersignaling, I would describe the first type as contrarian and the second type as meta-contrarian. My observation is also that the contrarian is usually a younger professor in his late thirties or forties, and the meta-contrarian a professor in his sixties and up.

comment by Kinrany · 2020-01-25T04:31:16.622Z · LW(p) · GW(p)

The third "related to" link is a bit broken: points to a Google redirect instead of the article [LW · GW] itself.

comment by eigen · 2019-06-25T13:17:16.126Z · LW(p) · GW(p)

What about the following triad?

life's good / nihilistic approach to life / I want to become a god on earth.

comment by stcredzero · 2013-03-04T18:42:49.083Z · LW(p) · GW(p)

A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.

What does it mean when people hold difficult-to-understand moral opinions?

comment by timtyler · 2012-08-16T23:45:09.574Z · LW(p) · GW(p)

But as has been pointed out, along with the gigantic cost, death does have a few small benefits. It lowers overpopulation, it allows the new generation to develop free from interference by their elders, it provides motivation to get things done quickly.

Right - and let's not forget that it takes out of circulation a load of persistent parasites which have evolved over hundreds of generations to exploit your genome, which might otherwise find and attack your relatives and descendants.

comment by HughRistik · 2010-11-17T07:12:41.801Z · LW(p) · GW(p)

This post inspired me to write an article on Feminism, criticism of feminism, and contrarianism over at FeministCritics.org.

Replies from: fburnaby, NancyLebovitz, NancyLebovitz
comment by fburnaby · 2013-03-06T02:55:29.498Z · LW(p) · GW(p)

I identified very strongly with your article. I feel exactly the same way, and suspect the same things are going on in my brain when I hear really bad feminist arguments. They're somehow more annoying than really bad (even worse!) gender-regressive arguments.

This has led me to question whether I should indulge myself in making my contrarian, actually-gender-progressive arguments against what I perceive as mainstream opinion (feminism). Feminism really isn't nearly as mainstream as it feels to me. I'm just privileged as a member of the intellectual progressive elite - I got to go to good schools, I'm a professional, I select progressive friends and grew up with somewhat progressive parents. Yes, it was a revelation when I realized how many problems there are with mainstream feminism, but I'm also a product of a pretty rare selection bias in a society that's actually still sexist. I actually buy the feminist narrative that there is still a lot of (level 1) sexism in our society, even though I tend to only see the problems with (level 2) mainstream feminism.

But there is a problem here for a consequentialist. No matter how clearly I put my criticisms, they're only understood as "some reactionary rationalization". People don't grasp the nuance and count one more head on the wrong side. It seems like it will lead to better consequences if I spend a majority of time "me too"ing mainstream feminism and biting my tongue about most of the issues in it. Or at least building more explicit feminist cred before pointing out some of the problems.

So this leads me to a question for you: why do you think that, in the face of your realization about why you criticize what you criticize, continuing to do it is the right thing to do?

Replies from: MugaSofer
comment by MugaSofer · 2013-03-06T13:52:26.970Z · LW(p) · GW(p)

So this leads me to a question for you: why do you think that, in the face of your realization about why you criticize what you criticize, continuing to do it is the right thing to do?

At risk of sounding tautological, that depends on whether it's the right thing to do.

If you have identified a systematic bias, try to remove it, then reevaluate your choices. You may still make the same ones; you cannot deduce reality from your bias. But you cannot know that if you're still biased.

comment by NancyLebovitz · 2011-11-18T12:13:29.692Z · LW(p) · GW(p)

I agree with roshni-- it would be better if you made your criticisms as you see them rather than as levels of a signalling game.

From my point of view, the PUA believers have the advantage at LW. Being gently told that no, it's wonderful; that the non-wonderful bits (the worst of which I'd never heard of until you brought them up, something I'm never sure you quite believed) don't matter when so much of it is different; and that being in the brainfog business is best for everyone even though there's no careful way for you to check on the effects on people you're taking charge of for your own good -- it all just leaves me feeling rather hopeless about that part of LW.

A specific example: I think you're one of the people who says that some men in PUA start out misogynistic, but become less so after they've had some success with attracting women. I wonder how they treat the women they're with before they've recovered from misogyny. Those women don't seem to be there in your calculus.

Replies from: HughRistik, NancyLebovitz
comment by HughRistik · 2011-11-26T09:39:32.963Z · LW(p) · GW(p)

Nancy, I'm a bit confused by your comment.

From my point of view, the PUA believers have the advantage at LW

What does "PUA believer" mean? Out of the folks who discuss pickup positively on LessWrong, I doubt any of them "believe" in it uncritically. However, they may feel motivated to defend pickup from inaccurate characterizations.

I do not see people who want to discuss pickup in a not-completely-negative way on LW as having an obvious advantage. The debate is not symmetrical. Anyone who can be painted as a defender of pickup is vulnerable to all sorts of stigma. Yet the worst they can say in their defense is to call the attackers close-minded or uneducated about pickup.

Being gently told that no, it's wonderful; that the non-wonderful bits (the worst of which I'd never heard of until you brought them up, something I'm never sure you quite believed) don't matter when so much of it is different

Yes, different parts of pickup are different. No, the good parts don't necessarily justify the bad parts, but the presence of good parts means that pickup shouldn't be unequivocally dismissed.

being in the brainfog business is best for everyone even though there's no careful way for you to check on the effects on people you're taking charge of for your own good

There are lots of assumptions here to unpack, but I would rather hold off until I understand your views better.

just leaves me feeling rather hopeless about that part of LW.

Me too, but for different reasons.

A specific example: I think you're one of the people who says that some men in PUA start out misogynistic, but become less so after they've had some success with attracting women. I wonder how they treat the women they're with before they've recovered from misogyny. Those women don't seem to be there in your calculus.

I'm hurt that you don't think I've run the most basic consequentialist analyses on these sorts of questions. I've never stated my full moral calculus on pickup, so I don't know how you can say that it has gaps. That would be a complex subject, contingent on a lot of empirical and moral-philosophical questions that I don't know the answer to.

Luckily, since I'm not defending pickup in general, I don't have to know how to perform the moral calculus evaluating pickup in general. But I can assure you that I've thought about it. Nobody has asked me the right questions to learn my thoughts on the subject (well, some people have elsewhere... just not here).

In these discussions, sometimes I feel like some people consider pickup to be evil until proven otherwise, based on their initial impression. And that anyone who speaks positively about pickup in any way (or refutes any criticism) is a "defender" (or as you put it, "believer")... unless they write a long explication of all the problems with pickup that convinces the critics that it's not all bad, and that these believers are not completely horrible people.

Dealing with a biased, inaccurate, and polarized assessment of pickup doesn't exactly put me (and other people discussing pickup in a not-completely-negative way) in the right mood to talk about the practical and ethical problems we have with pickup. Just because we don't nail 95 theses to the door criticizing pickup before discussing it doesn't mean that we don't have problems with it, or that we haven't considered the consequences for women.

I suspect that our feelings about pickup are a lot more ambivalent and complex than you realize, but the discussion has become so polarized that people seem to feel like they are forced to pick "sides," and people who actually have very ambivalent feelings about pickup get thrust into the role of defending it.

I'm tired of defending pickup. I want to have a turn criticizing it! But I can't take my turn yet, because so much of my energy discussing pickup is getting consumed by correcting all the biased and wrong stuff that is written about it. If I wrote critical stuff about pickup, then biased people would just use it selectively as part of their hatchet job, rather than promoting a complete understanding of the subject.

How can we reduce this polarization?

Replies from: daenerys, NancyLebovitz
comment by daenerys · 2011-11-26T11:25:09.633Z · LW(p) · GW(p)

How can we reduce this polarization?

Maybe by moderates coming out of the closet, so to speak?

Hi, my name is Daenerys, and I have ambivalent views about PUA. My initial reaction was "Ew! Bad!" but after reading the debates here, talking with a friend, and learning more elsewhere, my views towards it have softened. I still do not think that all of it is 100% ok, though. It is a complicated issue with many facets.

Mainly I wish it wouldn't hijack non-PUA discussions. I am seriously close to just starting a PUA discussion to keep all this stuff in one place, but I guess I feel if anyone should do it, it should be the mods.

PUA Moderates of the World, Unite!

Replies from: TheOtherDave, wedrifid, HughRistik
comment by TheOtherDave · 2011-11-26T17:38:37.637Z · LW(p) · GW(p)

Speaking as an indifferent moderate, I suspect that well over 90% of the value extractable from discussions of applications of evidence-based reasoning to dating is extractable with significantly less effort from discussions of applications of evidence-based reasoning to job interviews, used car purchases, getting along with parents and children and neighbors and classmates and coworkers, and other social negotiations.

That said, I also suspect that the far greater fascination the dating-related threads have for this site than the other stuff has more to do with various people's interests in dating than with their interest in evidence-based reasoning, so I expect we will continue to have the dating-related threads.

comment by wedrifid · 2011-11-26T16:07:59.410Z · LW(p) · GW(p)

but I guess I feel if anyone should do it, it should be the mods.

Moderators moving into a role of actively constructing official topics like that would be somewhat awkward. Moderation being damn near invisible for the most part is a feature.

comment by HughRistik · 2011-11-28T01:52:49.046Z · LW(p) · GW(p)

Hi daenerys! Welcome to the PUA Moderates club.

comment by NancyLebovitz · 2011-11-30T08:37:08.903Z · LW(p) · GW(p)

This is very much a first attempt at answering these matters.

I'm tired of defending pickup. I want to have a turn criticizing it! But I can't take my turn yet, because so much of my energy discussing pickup is getting consumed by correcting all the biased and wrong stuff that is written about it. If I wrote critical stuff about pickup, then biased people would just use it selectively as part of their hatchet job, rather than promoting a complete understanding of the subject.

How can we reduce this polarization?

I think more honesty on both sides (and you've made a good start) will help.

Part of what's been going on is that your advocacy has left me feeling as though my fears about PUA were being completely dismissed. On the other hand, when you've occasionally mentioned some doubts about aspects of PUA, I've felt better, but generally not posted anything about it.

I may have said something in favor when the idea of "atypical women" (more straightforward than the average and tending to be geeky) was floated. I'm pretty sure I didn't when someone (probably you) said something about some PUA techniques being unfair (certainly not the word used, but I don't have a better substitute handy) to women who aren't very self-assured, even though that's the sort of thing I'm concerned about.

Thanks for posting more about what's going on at your end.

As for stigma, I actually think it's funny that both of us feel sufficiently like underdogs that we're defensive. From my point of view, posting against PUA here leads to stigma not just for being close-minded and opposed to rational efforts to improve one's life (rather heavier stigmas here than in most places), but also for unkindness to men who would otherwise be suffering because they don't know how to attract women.

I don't know if it was unfair of me to assume that you hadn't performed a moral calculus-- from my point of view, the interests of women were being pretty much dismissed, or being assumed (by much lower standards of proof) to be adequately served by what was more convenient for men. Part of what squicks me about PUA is that it seems as though there's very careful checking about its effects (at least in the short term) on men, but, in the nature of things, much less information about its effects on women.

Replies from: HughRistik, None
comment by HughRistik · 2011-12-07T10:56:10.516Z · LW(p) · GW(p)

Part of what's been going on is that your advocacy has left me feeling as though my fears about PUA were being completely dismissed. ... I don't know if it was unfair of me to assume that you hadn't performed a moral calculus--

On LW in general I've spilled gallons of ink engaging in moral analyses of pickup, and of potential objections to pickup techniques. In my PUA FAQ, I made a whole section on ethics. In general, I have trouble reconciling your above perceptions with my participation in pickup discussions on LW.

But my memory of those discussions isn't perfect, so it's possible that I've been lax in replying to you personally. If you raised an issue that I didn't satisfactorily respond to, that's probably because I missed it, or left the thread, or had already talked about it elsewhere on LW, not because I didn't think it was important.

On the other hand, when you've occasionally mentioned some doubts about aspects of PUA, I've felt better, but generally not posted anything about it.

I'm glad that you noticed, even if you didn't comment much. Perhaps I'll talk more about those doubts when people engage me more about them.

I'm pretty sure I didn't when someone (probably you) said something about some PUA techniques being unfair (certainly not the word used, but I don't have a better substitute handy) to women who aren't very self-assured, even though that's the sort of thing I'm concerned about.

Yes, I believe that pickup can be harsh towards women who aren't very self-assured, and who don't have good boundaries. Yet that fact has to be taken in context.

Particular sexual norms and sexual cultures (e.g. high status, extraverted, and/or gender-traditional cultures) are harsh towards people of both sexes who aren't very self-assured, and who don't have good boundaries. Pickup is merely one example.

I have a shortlist of particular behaviors and mindsets that I find especially objectionable about pickup. Yet when trying to assess PUAs, who is the control group? Who are we comparing them to? Over the years, my ethical opinion of PUAs (on average) has fallen, but my ethical opinion of non-PUAs has been falling perhaps even faster. Criticizing PUAs for doing what everyone else is doing turns PUAs into scapegoats, and lets the rest of the culture off the hook.

As for stigma, I actually think it's funny that both of us feel sufficiently like underdogs that we're defensive. From my point of view, posting against PUA here leads to stigma not just for being close-minded and opposed to rational efforts to improve one's life (rather heavier stigmas here than in most places), but also for unkindness to men who would otherwise be suffering because they don't know how to attract women.

Thanks for filling me in on some of the stigmas on your end... I hadn't thought of the "unkind to men" one. Still, do you think those stigmas are symmetrical in impact to charges of misogyny and not caring about women?

I don't know if it was unfair of me to assume that you hadn't performed a moral calculus-- from my point of view, the interests of women were being pretty much dismissed, or being assumed (by much lower standards of proof) to be adequately served by what was more convenient for men.

I am skeptical that you have sufficient data about people's views of pickup on LW to be able to make those judgments. I don't think people's views have had a chance to unfold yet. Or maybe your perception of past discussions is different, or we are both talking about different discussions, or your priors are just very different from mine.

Ultimately, I do consider it premature to suspect that I, or anyone else posting about pickup on LW, is so morally illiterate that they haven't performed a moral calculus of some sort about pickup. If we were off LW, that might be a different story.

They can correct me if I'm wrong, but I find it unlikely that people interested in pickup on LW are so ethically naive that they support pickup out of some form of egoism, or have a utility function that categorically places men's preferences above women's.

It's much more likely that they consider pickup (or more, a subset of pickup that appeals to them) consistent with their own moral theories and intuitions. Likewise, I don't agree that men discussing pickup on LW are mainly just checking its effects on men, but not on women. Perhaps I'm biased by my own views, but it seems more likely that they have thought about the effects on women. LW is not privy to their thought process, because nobody has asked the right questions. Actually, it's quite possible that they don't use, or even forgot about, some of the very things that outsiders might find objectionable about pickup.

Likewise, while I have a lot of problems with feminism, I would expect that a feminist on LW would have come to feminism through a cognitively sophisticated route (unless they proved otherwise), and that there are enough good things in feminism for a rationalist to believe that there is some value in it. I'm sure that feminists on LW would find it off-putting to have to articulate their moral calculus about how their activism treats men as a precondition to being considered reasonable. That doesn't mean that I expect to fully agree with the moral calculus of LW feminists, but it does mean that I would assume a basic level of moral sophistication on their part.

Part of what squicks me about PUA is that it seems as though there's very careful checking about its effects (at least in the short term) on men, but, in the nature of things, much less information about its effects on women.

You talk about checking the effects of pickup as if it's some sort of novel drug, but I don't see it that way. Most pickup behaviors are isomorphic to what people are already doing.

So it's not necessarily that we are being lax about checking; I think a lot of this stuff is already checked. Pickup techniques are not as unique and special as PUA marketers or PUA critics make them sound, so they deserve the same level of consideration that anyone should give their dating behavior, but they aren't so powerful or novel that they require some special moral scrutiny... at least, not separate from a larger moral debate about consent and sexual ethics that should examine the culture in general.

It is frustrating that pickup practitioners are getting held to a much higher moral standard than anyone else in the population, when they are simply doing a more systematized version of what large segments of the population are already doing.

I'm all for engaging in moral calculus about dating behavior. I do it all the time with mine, and I don't agree with all of the conclusions of the calculus of some people who practice pickup. But outside of (some) feminists and people who practice BDSM, who exactly does a rigorous moral calculus about the effects of their dating and sexual behavior? Most people don't calculate their dating ethics, they operate on cached ideas.

While it's understandable that critics of pickup focus on the most worrying aspects, that focus may not leave pickup practitioners on LW feeling like they are being treated as complex human beings who at least might have coherent ethical views supporting the subset of pickup that they practice.

Replies from: army1987
comment by A1987dM (army1987) · 2014-03-01T14:03:46.383Z · LW(p) · GW(p)

Over the years, my ethical opinion of PUAs (on average) has fallen, but my ethical opinion of non-PUAs has been falling perhaps even faster.

That's one of the best sentences I've read today, especially given what the title of this website is.

comment by [deleted] · 2011-11-30T21:52:24.259Z · LW(p) · GW(p)

I think I agree with this.

I think more honesty on both sides (and you've made a good start) will help.

We are already supposed to be honest here most of the time. I think something needs to be changed to facilitate such a debate, if we wish to have it.

I just think that while there are hopeful signs that we will chew through this with our usual set of tools and norms, those hopeful signs have been around for years, and the situation doesn't seem to be improving.

Honestly I think our only hope of addressing this is having a far more robust debating style, far more limited in scope than we are used to, since tangents often peter out without follow-up or any kind of synthesis, or even a clear idea of what is and isn't agreed upon in these debates.

Replies from: TheOtherDave, NancyLebovitz, lessdazed
comment by TheOtherDave · 2011-11-30T22:47:22.873Z · LW(p) · GW(p)

My $0.02:

It might help to state clearly what "addressing this" would actually comprise... that is, how could you tell if a discussion had done so successfully?

It might also help if everyone involved in that discussion (should such a discussion occur) agreed to some or all of the following guidelines:

  • I will, when I reject or challenge a conclusion, state clearly why I'm doing so. E.g.: is it incoherent? Is it dangerous? Is it hurtful? Is it ambiguous? Is it unsupported? Does it conflict with my experience? Etc.

  • I will "taboo" terms where I suspect people in the conversation have significantly different understandings of those terms (for example, "pickup"), and will instead unpack my understanding.

  • I will acknowledge out loud when a line of reasoning supports a conclusion I disagree with. This does not mean I agree with the conclusion.

  • I will, insofar as I can, interpret all comments without reference to my prior beliefs about what the individual speaker (as opposed to a generic person) probably meant. Where I can't do that, and my prior beliefs about the speaker are relevantly different from my beliefs about a generic person, I will explicitly summarize those beliefs before articulating conclusions based on them.

comment by NancyLebovitz · 2011-11-30T23:55:15.338Z · LW(p) · GW(p)

Honestly I think our only hope of addressing this is having a far more robust debating style, far more limited in scope than we are used to, since tangents often peter out without follow-up or any kind of synthesis, or even a clear idea of what is and isn't agreed upon in these debates.

I don't know what you mean by that-- could you expand on the details or supply an example of a place that has the sort of style you have in mind?

My instincts are to go for something less robust. I know that part of what drives my handling of the subject is a good bit of fear, and I suspect there was something of the sort going on for HughRistik.

I'm not sure what would need to change at LW to make people more comfortable with talking about their less respectable emotions.

I'm contemplating using a pseudonym, but that might not be useful-- a number of people have told me that I write the way I talk.

You've probably got a point about synthesis. It might help if people wrote summaries of where various debates stand. I bet that such summaries would get upvoted.

Replies from: None, lessdazed
comment by [deleted] · 2011-12-01T07:52:16.745Z · LW(p) · GW(p)

I'm not sure what would need to change at LW to make people more comfortable with talking about their less respectable emotions.

I doubt talking about the emotions, specifically about individuals' emotions, or even how each "side" (ugh, tribalism) may feel about the matter, will improve the situation. If anything I suspect it will result in status games around signalling good, tactically useful emotions and people resenting others for their emotions.

You've probably got a point about synthesis. It might help if people wrote summaries of where various debates stand. I bet that such summaries would get upvoted.

Perhaps this should be a start.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-12-01T13:08:28.764Z · LW(p) · GW(p)

I doubt talking about the emotions, specifically about individuals emotions, or even how each "side" (ugh tribalism) will improve the situation. If anything I suspect it will result in status games around that and people resenting others for their emotions.

I think the last clause of the first sentence is missing some words.

Emotions are part of what's going on, and it's at least plausible that respect for truth includes talking about them.

Discussion which includes talk about emotions can blow up, but it doesn't have to. I suggest that there are specific premises that make talk about emotion go bad-- the idea that emotions don't change, that some people's emotions should trump other people's emotions, and that some emotions should trump other emotions. This list is probably not complete.

The challenge would be to allow territorial emotions to be mentioned without letting them take charge.

I think the crucial thing is to maintain an attitude of "What's going on here?" rather than "This is an emergency-- the other person must be changed or silenced".

Replies from: None
comment by [deleted] · 2011-12-01T13:31:50.852Z · LW(p) · GW(p)

I think the last clause of the first sentence is missing some words.

Correct, I was writing at a late hour. I've fixed the missing bits now.

Emotions are part of what's going on, and it's at least plausible that respect for truth includes talking about them.

Discussion which includes talk about emotions can blow up, but it doesn't have to. I suggest that there are specific premises that make talk about emotion go bad-- the idea that emotions don't change, that some people's emotions should trump other people's emotions, and that some emotions should trump other emotions. This list is probably not complete.

The challenge would be to allow territorial emotions to be mentioned without letting them take charge.

I think the crucial thing is to maintain an attitude of "What's going on here?" rather than "This is an emergency-- the other person must be changed or silenced".

This has shifted my opinion more in favour of such a debate; I remain sceptical, however. First, identifying what exactly the preconditions for such a debate are (completing that list, in other words), and second, the sheer logistics of making it happen that way, seem to me daunting challenges.

Replies from: NancyLebovitz, NancyLebovitz
comment by NancyLebovitz · 2011-12-01T13:50:33.887Z · LW(p) · GW(p)

More for the list, based on your point about groups: It's important to label speculations about the ill effects of actions based on stated emotions as speculations, and likewise for speculations about the emotions of people who aren't in the discussion.

Part of what makes all this hard is that people have to make guesses (on rather little evidence, really) about the trustworthiness of other people. If the assumption of good will is gone, it's hard to get it back.

If someone gives a signal which seems to indicate that they shouldn't be trusted, all hell can break loose very quickly. And at that point, a lesswrongian cure might be to identify the stakes, which I think are pretty low for the blog. The issues might be different for people who are actually working on FAI.

comment by NancyLebovitz · 2011-12-01T14:07:54.357Z · LW(p) · GW(p)

As for whether this kind of thing can be managed at LW, my answer is maybe tending towards yes. I think the social pressure which can be applied to get people to choose a far view and/or curiosity about the present is pretty strong, but I don't know if it's strong enough.

The paradox is that people who insist on naive territorial/status fights have to be changed or silenced.

comment by lessdazed · 2011-12-01T02:49:51.108Z · LW(p) · GW(p)

I'm contemplating using a pseudonym, but that might not be useful-- a number of people have told me that I write the way I talk.

We could have a pidgin language pseudonym thread.

comment by lessdazed · 2011-11-30T23:47:21.763Z · LW(p) · GW(p)

the situation doesn't seem to be improving.

What exactly do you mean? If the situation is getting no worse, notice the population is expanding.

Replies from: None
comment by [deleted] · 2011-12-01T08:03:10.848Z · LW(p) · GW(p)

What exactly do you mean?

It is not improving.

If the situation is getting no worse,

This is up for debate. Vladimir_M and others have argued that precisely the fact that blow-ups are rarer means more uninterrupted happy death spirals are occurring, and that we are in the process of evaporative cooling of group beliefs on the subject.

I think they are right.

notice the population is expanding.

LessWrong actually needs either better standards of rationality or better mechanisms for sorting through the ever-growing number of responses as it grows, in order to keep the signal-to-noise ratio at something worth our time. Also, I'm confused as to why a larger population of LWers would translate into this being something LWers can more easily make progress on.

comment by NancyLebovitz · 2011-11-18T12:45:49.517Z · LW(p) · GW(p)

As for contrarianism, I think of myself as a second-order curmudgeon. When people talk about how things are getting worse, I push for specific examples rather than just a claim that things are bad. People rarely have anything specific in mind.

comment by NancyLebovitz · 2010-11-17T11:18:23.727Z · LW(p) · GW(p)

You might be interested in this-- it's by a male feminist who's working on how to have feminism which is genuinely friendly to heterosexual men.

Another young guy, one of my best students, told me that he felt as if he’d been set up for failure, as if Jensen and I were positing abstinence from pornography as the sine qua non of being a decent male. “If I masturbate to porn can I still be a good man” was the question I got from more than one anguished participant in the class. And if several of the students were willing to divulge such private pain to me, I can only assume that still others felt the same way but kept silent.

I suspect the problem goes deeper than the specifics of feminism, though those are worth addressing. A lot of people interpret moral advice in self-damaging ways, and I'm not sure what's going on there. It seems like a taught vulnerability.

Replies from: MileyCyrus, HughRistik, wedrifid
comment by MileyCyrus · 2011-11-18T07:00:18.677Z · LW(p) · GW(p)

You might be interested in [Hugo Schwyzer's blog]-- it's by a male feminist who's working on how to have feminism which is genuinely friendly to heterosexual men.

If by "make feminism genuinely friendly to men" you mean "defend paternity fraud" and compare the "men's right movement to the KKK"...

Replies from: NancyLebovitz, waveman
comment by NancyLebovitz · 2011-11-18T12:19:22.516Z · LW(p) · GW(p)

For what it's worth, I never got around to reading much of Schwyzer's blog. These days, I read No Seriously, What About Teh Menz?, and they're none too fond of Schwyzer either. I hate more than I can say that there was a felt need to give it a jokey title.

Replies from: MileyCyrus
comment by MileyCyrus · 2011-11-18T14:55:42.880Z · LW(p) · GW(p)

I like NSWATM too. I'm glad it's becoming more popular.

comment by waveman · 2014-03-01T12:40:03.189Z · LW(p) · GW(p)

Read the update here. Truth is stranger than fiction.

http://en.wikipedia.org/wiki/Hugo_Schwyzer

comment by HughRistik · 2010-11-18T07:48:53.704Z · LW(p) · GW(p)

Thanks, Nancy. I do find Hugo's blog interesting, and I post there sometimes.

comment by wedrifid · 2010-11-17T11:22:46.247Z · LW(p) · GW(p)

I suspect the problem goes deeper than the specifics of feminism, though those are worth addressing. A lot of people interpret moral advice in self-damaging ways, and I'm not sure what's going on there. It seems like a taught vulnerability.

You are saying, I take it, that the guy in question was mistaken in believing the advice prohibited the use of pornography? It isn't quite clear to me whether you were saying that he correctly understood the pornography-related implications but ought not to have considered it self-damaging. I have of course seen both, as well as those (not you) who suggest that following the ideals is actually beneficial to the individual as well, almost by definition.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-11-17T11:54:22.244Z · LW(p) · GW(p)

Yes, your first suggestion.

comment by jgoosse · 2010-09-17T22:01:47.346Z · LW(p) · GW(p)

To be contrarian, I think you're only portraying a subset of possible outcomes. We might say the following fits:

Kantian Deontological Ethics (all men are obliged to) > Positivist Ethics (ethics don't exist as anything more than preference) > Modern Liberal Ethics (ethics exist as preference, but preferences are important survival tools that can lead us to objective ethics).

But the truth is I don't see a necessary triad in any of this, because there is no original position. In my example, we would find that Kantian Dialectical Ethics consumed prior theories or objects, and in fact I think we could say that about most of your examples. Popper might argue that a dialectic is merely consuming the prior creation... the process could continue indefinitely until one approaches a complete model (assuming some discipline is in the mix).

Another thought: in reality, arguments occur in multiple dimensions (real decisions often evaluate economic, political, health, legal, and safety outcomes). Other dimensions can throw off the pattern of contrarianism when there are trade-offs that need to be made. In that sense the contrarian model presented is rather simple.

All of that being said, I'm a little meta-meta-contrarian too... I like the analysis you've presented because the analogy seems to work as a simple cartoon explanation for hipsters. =)

Replies from: blacktrance, Raw_Power
comment by blacktrance · 2014-01-23T23:04:30.080Z · LW(p) · GW(p)

For ethics, I think it's more like (Divine Command/intuitionism)/(subjectivism/nihilism)/(other systems of objective morality).

comment by Raw_Power · 2011-07-07T17:34:58.146Z · LW(p) · GW(p)

Could it be that the entire history of philosophy and its "thesis, antithesis, synthesis" recurring structure is an instance of this? Not to mention other liberal arts, and the development of the cycles of fashion.

comment by spencerth · 2010-09-14T22:03:45.259Z · LW(p) · GW(p)

The bigger issue to me is the value system that makes this phenomenon exist in the first place. It essentially requires people to care more about signaling than seeking truth. Of course this makes sense for many (perhaps most) people since signaling can get you all sorts of other things you want, whereas finding the truth could happen in a vacuum/near vacuum (you could find out some fundamental truth and then die immediately, forget about it, tell it to people and have no one believe you, etc.)

It bothers me that extremely narrow self-interest (as indicated by "fun to argue") is so much more important to so many than truth seeking. Would it be so /wrong/ to seek truth, and THEN signal once you think you've found it (even if you're actually incorrect), rather than just taking up a contrary position for its own inherent "argumentative pleasure" value?

It seems intellectually lazy. Perhaps that's part of its appeal.

comment by thomblake · 2010-09-14T15:13:40.343Z · LW(p) · GW(p)

from my limited understanding a hipster is a person who deliberately uses unpopular, obsolete, or obscure styles and preferences in an attempt to be "cooler" than the mainstream.

Not to argue over definitions, but your use of "hipster" seems overly narrow. As I understand it, it refers to those who deliberately appropriate styles used by old or other subcultures out of concern for aesthetics rather than signaling (or, if you prefer, complex signaling rather than mere group-membership). Obviously some of those folks are doing it to try to 'be cooler', but it's hardly a necessary condition.

There is certainly a notable sub-culture of "hipsters" who are known for being pretentious about aspects of their particular style. This should come as no surprise to anyone who's studied any other subculture.

comment by Liron · 2010-09-14T00:11:51.710Z · LW(p) · GW(p)

The contrarian treadmill for medicine is more like "conventional / alternative / conventional again / Robin Hanson"

comment by multifoliaterose · 2010-09-13T23:08:11.808Z · LW(p) · GW(p)

You can't evaluate the truth of a statement by its position in a signaling game; otherwise you could use human psychology to figure out if global warming is real!

Agree to some extent but not fully. In particular, I think that the well-documented phenomenon of illusory superiority gives rational grounds for skepticism when somebody claims that his or her abilities are greater than he or she is able to effectively signal. In situations where there's great disparity between claimed ability and signaled ability, very high levels of skepticism are warranted.

comment by TropicalFruit · 2022-05-19T04:54:49.128Z · LW(p) · GW(p)

One that really irks me is compound subjects that include a pronoun:

Always "Me and X" /  Always "X and I" / "X and I" for an subject and "X and me" for an object 

The second one is being smart enough to know that "Me and X went to the store" is improper because you're using "Me" as a subject, but not smart enough to know how to fix it, so you just replace "Me and X" with "X and I" one for one in every sentence.

What makes my blood run hot (and gives me that "like to argue it" high, I think) is that the people in the middle are trying to signal that they're "above the peons" by using "he and I" like a sophisticate, when in reality they are just using different improper grammar, and doing so just as often as the peons they want to be above. Even worse, the other midwit contrarians actually accept this signal of sophistication.

I don't think this should bother me so much, but it does.

comment by Tim Liptrot (rockthecasbah) · 2020-06-19T23:10:02.073Z · LW(p) · GW(p)

Stupid question for the guys here, but how long is it optimal to counter-signal to a woman? I.e., how long do you pretend not to be interested in her, whether she is interested in you or not. Based on my non-trivial romantic experience, I have three theories.

1. Wait until she makes unusually long eye contact with you. It should be pretty noticeable, like ~5 seconds or longer, such that it would otherwise be unusual. Use Bayes' theorem (see the sketch below). THEN WAIT ANOTHER WEEK to stop countersignaling.

2. Three weeks. IDK it just seems to work that way.

3. You do not stop countersignaling until you have been dating for several months. Just gradually decrease the amount of countersignaling by always signaling slightly less commitment to the relationship than your partner. After that, you may be free to stop countersignaling.

I suspect that option 3 is the optimal strategy, but it is taxing/emotionally draining. Any suggestions?
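To make theory 1 concrete, here is a minimal sketch of the Bayesian update it gestures at. Every number in it is an invented assumption for illustration (the base rate of interest and the likelihoods of long eye contact), not data from anywhere:

```python
# Hypothetical Bayesian update for theory 1: treat unusually long eye
# contact as evidence of interest. All probabilities are made up for
# illustration; they are not measured base rates.

prior_interested = 0.10                # assumed P(interested)
p_contact_given_interested = 0.60      # assumed P(long eye contact | interested)
p_contact_given_not = 0.05             # assumed P(long eye contact | not interested)

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
p_contact = (p_contact_given_interested * prior_interested
             + p_contact_given_not * (1 - prior_interested))
posterior = p_contact_given_interested * prior_interested / p_contact

print(f"P(interested | long eye contact) = {posterior:.2f}")  # ~0.57
```

Even a fairly strong signal leaves the posterior near a coin flip under these assumed numbers, which may be why the theory suggests waiting for further evidence before acting.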

Follow-up: it is not that hard to independently assess someone's quality as a partner. You could assign someone a percentile for intelligence, kindness, attractiveness, and emotional stability after two weeks of knowing them. Like "this person is kinder than 40% of people but less kind than 50%". So why do people rely so heavily on the weird countersignaling heuristic?

comment by Motasaurus · 2019-06-11T05:24:17.133Z · LW(p) · GW(p)

I can think of clear examples where a particular ideological foundation allows for death to be good, without requiring a contrarian or meta-contrarian position. One thought along such lines is whether religion would fall into the contrarian or the meta-contrarian view.

If you ask most 5-year-olds, they believe in the metaphysical.

So could a triad be religious/atheist/religious? Or is there an extra level, where the first kind of atheist is the fedora-tipping teenager on Reddit and there is then a meta-meta atheist above him - or would the meta-meta position perhaps be agnostic?

Is religion too complex for such a simplification?

comment by Caspar Oesterheld (Caspar42) · 2017-11-11T08:56:34.630Z · LW(p) · GW(p)

Great post, obviously.

You argue that signaling often leads to a distribution of intellectual positions following this pattern: in favor of X with simple arguments / in favor of Y with complex arguments / in favor of something like X with simple arguments

I think it’s worth noting that the pattern of positions often looks different. For example, there is: in favor of X with simple arguments / in favor of Y with complex arguments / in favor of something like X with surprising and even more sophisticated and hard-to-understand arguments

In fact, I think many of your examples follow the latter pattern. For example, the market efficiency arguments in favor of libertarianism seem harder-to-understand and more sophisticated than most arguments for liberalism. Maybe it fits your pattern better if libertarianism is justified purely on the basis of expert opinion.

Similarly, the justification for the “meta-contrarian” position in "don't care about Africa / give aid to Africa / don't give aid to Africa" is more sophisticated than the reasons for the contrarian or naive positions.

But as has been pointed out, along with the gigantic cost, death does have a few small benefits. It lowers overpopulation, it allows the new generation to develop free from interference by their elders, it provides motivation to get things done quickly.

I’m not sure whether overpopulation is a good example. I think in many circles that point would signal naivety, and people would respond with something deep-sounding about how life is sacred. (The same is true for “it’s good if old people die because that saves money and allows the government to build more schools”.) Here, too, I would argue that your pattern doesn’t quite describe the set of commonly held positions, as it omits the naive pro-death position.

comment by Calorion · 2017-07-08T17:28:07.145Z · LW(p) · GW(p)

The Patri Friedman links are dead, and blocked from archive.org. Anyone have access to another archive, so I can see what he's talking about? There has got to be a better way to link. Has no one come up with a distributed archive of linked material yet?

Replies from: arundelo
comment by arundelo · 2017-07-09T03:01:31.611Z · LW(p) · GW(p)

archive.is has both things from Patri's LiveJournal:

(Unlike archive.org, archive.is does not, IIRC, respect robots.txt.)

Gwern Branwen has a page on link rot and URL archiving.
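As a small aside on the link-rot point: if you just want to check programmatically whether a dead link has an archived copy, archive.org exposes a public availability API. Here is a minimal sketch using only Python's standard library; the example URL is a placeholder of mine, not one of the actual dead links from the thread:

```python
# Minimal sketch: query archive.org's public availability API for the
# closest Wayback Machine snapshot of a URL. The example URL below is a
# placeholder, not one of the dead Patri Friedman links.
import json
import urllib.parse
import urllib.request

def find_archived_copy(url):
    """Return the URL of the closest Wayback snapshot, or None."""
    api = ("https://archive.org/wayback/available?url="
           + urllib.parse.quote(url, safe=""))
    with urllib.request.urlopen(api) as resp:
        data = json.load(resp)
    snapshot = data.get("archived_snapshots", {}).get("closest")
    if snapshot and snapshot.get("available"):
        return snapshot["url"]
    return None

print(find_archived_copy("http://example.com/some-dead-link"))
```

(archive.is has no comparably simple public JSON API that I know of, which is why the sketch sticks to archive.org.)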

Replies from: arundelo
comment by arundelo · 2017-07-09T18:24:27.606Z · LW(p) · GW(p)

Why does archive.is not obey robots.txt?

Because it is not a free-walking crawler, it saves only one page acting as a direct agent of the human user.

--archive.is faq

A few months ago we stopped referring to robots.txt files on U.S. government and military web sites [...] As we have moved towards broader access it has not caused problems, which we take as a good sign. We are now looking to do this more broadly.

--archive.org blog, 2017-04-17

comment by Daniel_Burfoot · 2010-09-14T18:51:31.293Z · LW(p) · GW(p)

libertarians are always more hostile toward liberals, even though they have just about as many points of real disagreement with the conservatives.

I think it's because libertarians care a lot more about the points on which they disagree with the liberals. Issues like gay marriage and abortion don't seem to matter as much as economic rights.

Replies from: kodos96
comment by kodos96 · 2010-09-14T18:58:31.120Z · LW(p) · GW(p)

I don't think this is the case for most libertarians, especially the younger, internet-based, Ron Paul-oriented kind of libertarians - many of them are primarily motivated by the social issues... and yet they still seem to prefer arguing with liberals rather than conservatives. I think it has more to do with the fact that they view liberals as smart people who believe stupid things, while they view conservatives as just stupid troglodytes not worth wasting time on.

comment by Mercy · 2010-09-14T12:50:31.149Z · LW(p) · GW(p)

I'm a little confused, what purpose does this distinction serve? That people like to define their opinions as a rebellion against received opinion isn't novel. What you seem to be saying is: defining yourself against an opinion which is seen as contrarian sends a reliably different social signal to defining yourself against an opinion which is mainstream, is that a fair assessment? Because this only works if there is a singular, visible mainstream, which is obviously available in fashion but rare in the realm of ideas.

Moreover, if order-of-contrariness doesn't convey information, I can't see any situation in which it would be helpful to indicate a position's order where it wouldn't be just as easy, and far more informative, to point out the specific chain of its controversy.

In any case I take some issue with a bunch of your examples.

Firstly, on feminism, the obvious mainstream controversy/meta-controversy dynamic for misogyny is between second- and third-wave feminism in academia, and between "all sex is rape" and "pole dancing is empowering/Madonna is a feminist icon" in the media. Picking an obscure internet phenomenon closer to the starting point is blatant cherry-picking.

Similarly, the Bad Samaritans/New Development argument has a lot more currency than the "aid is the problem" one, but again, that's further from both positions. For that matter, the same applies to libertarianism and its real Laius, socialism.

The number of global warming skeptics who jumped straight from "it's not happening" to "well, we didn't do it" to "well, we can't do anything about it without doing more harm than good" should give us a bit of pause, as should the overlap in arguments between self-identified MRAs and younger misogynists of the "straight white Christian men are the most oppressed minority" variety. If there's any use to identifying meta-contrarian positions, it has to be in distinguishing between genuine attempts to correct falsehoods made in overeager argument with the old mainstream, and sophisticated apologetics for previously exploded positions.

On second thought, convincing as I find the Stern report, enough economists argued against reducing carbon emissions on cost-benefit grounds from the beginning that the meta position deserves honest consideration. I'd like to propose instead deism as the canonical example of bad-faith apologia in meta-contrarianist drag, and third-wave feminism as the honest position. Is this suitably uncontroversial?

comment by madair · 2010-09-14T04:12:55.012Z · LW(p) · GW(p)

atheist/libertarian/technophile/sf-fan/early-adopter/programmer = very large self selected group

college professor = very small highly filtered group

I believe this exemplifies a major weakness of this article, and I'd actually like to hear a better analogy, since I'm somewhat skeptical about a populist takedown of a poorly defined subculture - one which I presume varies greatly from the author's tribe(s). More vexing, but also harder to describe, is its vaguely moralistic tone, but perhaps that's just the absence of evidence talking.

(Would like to be shown to be missing the point. I hope I'm not being trollish.)

[Edit:] That said, I think that regular old contrarianism is a major issue reducing quality of life in our highly connected culture, but I think that's more universal: "those trolls" are most of us at one time or another. (Or maybe it's just me ;-)

Replies from: madair
comment by madair · 2010-09-14T04:15:08.112Z · LW(p) · GW(p)

very small highly filtered group

i.e., a group that needs extra controls for group-think in any study.

comment by MartinB · 2010-09-13T23:58:10.719Z · LW(p) · GW(p)

Word!

I went through a few of these on my way through idea-space, and then it took a while to recognize the pattern.

comment by MugaSofer · 2013-03-06T11:54:38.228Z · LW(p) · GW(p)

I wonder if conspiracy theories could be a "middle-band" position? Any fool can see the WTC was destroyed by a plane...

comment by Multiheaded · 2014-03-03T09:08:30.121Z · LW(p) · GW(p)
#normcore
comment by Jimdrix_Hendri · 2019-09-21T12:06:23.373Z · LW(p) · GW(p)

1. The average IQ of visitors to this site is 145 squared? Impressive!

2. Are you trying to be subtly meta-contrarian with your idiosyncratic orthography, or are you just really glad to see me?

comment by fractalman · 2013-06-07T10:12:02.583Z · LW(p) · GW(p)

so no one tries to signal intelligence by saying that 1+1 equals 3

oh, you are so asking for it, no matter how old this topic is...

There IS a sense in which 1+1=3. It is not particularly deep, or philosophical, or even particularly useful mathematically, except possibly to demonstrate a simple result of playing around with unusual axioms.

See, when one man and one woman....

snickers

comment by TraderJoe · 2012-03-05T19:49:15.961Z · LW(p) · GW(p)

[comment deleted]

comment by Plasticlizard · 2011-10-26T10:41:06.869Z · LW(p) · GW(p)

This is by far the most intelligent circle jerk I have witnessed on the Internet so far. What gloriously intricate jibber jabber everybody is making! You guys put Hacker News to shame.

comment by taw · 2010-09-26T23:42:17.169Z · LW(p) · GW(p)

I cannot distinguish the levels of libertarians and conservatives - they look like an undifferentiated low-status mob at the bottom of the signaling hierarchy, with liberals being in the standard smart-people cluster.

Most of your other examples of meta-contrarianism I might be sympathetic to, or feel are a misguided attempt at being too smart for one's own good - but not libertarians. They're on the KKK, 9/11-truther, climate-change-denier, and Obama-is-a-Muslim level of respectability - and in fact these groups largely overlap.

If you want to see some actual quality meta-contrarianism about the economy, Scott Sumner's revival of monetarism, or perhaps folk versions of Hyman Minsky, are just that - arguing against the smart contrarian cluster from a position of epistemic and signaling superiority.

Quality meta-contrarianism about climate change is mostly geo-engineering and nuclear power - the "sure, IPCC is right, warming is real, but it doesn't matter as your solutions suck" position.

Replies from: mattnewport
comment by mattnewport · 2010-09-26T23:47:48.369Z · LW(p) · GW(p)

If we're going to start breaking the no-politics rule, can we at least avoid egregious trolling?
