How can we get more and better LW contrarians?

post by Wei Dai (Wei_Dai) · 2012-04-18T22:01:12.772Z · LW · GW · Legacy · 335 comments

I'm worried that LW doesn't have enough good contrarians and skeptics, people who disagree with us or like to find fault in every idea they see, but do so in a way that is often right and can change our minds when they are. I fear that when contrarians/skeptics join us but aren't "good enough", we tend to drive them away instead of improving them.

For example, I know a couple of people who occasionally had interesting ideas that were contrary to the local LW consensus, but were (or appeared to be) too confident in their ideas, both good and bad. Both people ended up being repeatedly downvoted and left our community a few months after they arrived. This must have happened more often than I have noticed (partly evidenced by the large number of comments/posts now marked as written by [deleted], sometimes with whole threads written entirely by deleted accounts). I feel that this is a waste that we should try to prevent (or at least think about how we might). So here are some ideas:

I guess these ideas sounded better in my head than written down, but maybe they'll inspire other people to think of better ones. And it might help a bit just to keep this issue in the back of one's mind and occasionally think strategically about how to improve the person you're arguing against, instead of only trying to win the particular argument at hand or downvoting them into leaving.
P.S., after writing most of the above, I saw this post:
OTOH, I don’t think group think is a big problem. Criticism by folks like Will Newsome, Vladimir Slepnev and especially Wei Dai is often upvoted. (I upvote almost every comment of Dai or Newsome if I don’t forget it. Dai makes always very good points and Newsome is often wrong but also hilariously funny or just brilliant and right.) Of course, folks like this Dymytry guy are often downvoted, but IMO with good reason.
To be clear, I don't think "group think" is the problem. In other words, it's not that we're refusing to accept valid criticisms; it's more that our group dynamics (and other factors) cause there to be fewer good contrarians in our community than is optimal. Of course what is optimal might be open to debate, but from my perspective, it can't be right that my own criticisms are valued so highly (especially since I've been moving closer to the SingInst "inner circle" and my critical tendencies have been decreasing). In the spirit of making oneself redundant, I'd feel much better if my occasional voice of dissent were just considered one amongst many.

335 comments

Comments sorted by top scores.

comment by prase · 2012-04-18T23:03:58.964Z · LW(p) · GW(p)

I have significantly decreased my participation in LW discussions recently, partly for reasons unrelated to whatever is going on here, but I have a few issues with the present state of this site, and perhaps they are relevant:

  • LW seems to be slowly becoming self-obsessed. "How do we get better contrarians?" "What should our debate policies be?" "Should discussing politics be banned on LW?" "Is LW a phyg?" "Shouldn't LW become more of a phyg?" Damn. I am not interested in endless meta-debates about community building. Meta-debates could be fine, but only if they were rare; otherwise I feel the site is losing its purpose. Object-level topics should form an overwhelming majority both in the main section and in the discussion.
  • Too narrow a set of topics. Somewhat ironically, the explicitly forbidden topic of politics is debated quite frequently, but many potentially interesting areas of inquiry are left out completely. You post a question about calculus in the discussion section and get downvoted, since it is "off topic" - ask on MathOverflow. A question about biology? Downvoted, if it is not an ev-psych speculation. Physics? Downvoted, even if it is of the most popular QM-interpretational sort. A puzzle? Downvoted. But there is only so much one can say about AI and ethics and Bayesian epistemology and self-improvement on a level accessible to a general internet audience. When I discovered Overcoming Bias (half of which later evolved into LW), it was overflowing with revolutionary and inspiring (from my point of view) ideas. Now I feel saturated, as the majority of new articles seem devoid of new insights (again, from my point of view).

If you are afraid that LW could devolve into a dogmatic, narrow community without enough contrarians to maintain a high level of epistemic hygiene, don't try to spawn new contrarians by methods of social engineering. Instead, try to encourage debate on a diverse set of topics, mainly those which haven't been addressed by 246 LW articles already. If there is no consensus, people will disagree naturally.

Replies from: Wei_Dai, orthonormal, thomblake, John_Maxwell_IV, John_Maxwell_IV
comment by Wei Dai (Wei_Dai) · 2012-04-19T08:55:46.518Z · LW(p) · GW(p)

I'm not trying to spawn new contrarians for the sake of having more contrarians, nor do I want to encourage debate for the sake of having more disagreements. What I care about is (me personally, as well as this community as a whole) having correct beliefs on the topics that I think are most important, namely the core rationality and Singularity-related topics, and I think having more contrarians who disagree about these core topics would help with that. Your suggestion doesn't seem to help with my goals, or at least it's not obvious to me how it would.

(BTW, I note that you've personally made 2 meta/community posts out of 7, whereas I've only made about 3 out of 58 (plus or minus a few counting errors). So maybe you can give me a pass on this one? :)

Replies from: prase, Viliam_Bur
comment by prase · 2012-04-19T17:09:40.818Z · LW(p) · GW(p)

I note that you've personally made 2 meta/community posts out of 7, whereas I've only made about 3 out of 58

I plead guilty and promise to avoid making meta posts in the future. (Edit: I don't object specifically to your meta-posts but to the overall relative number of meta discussions lately.)

Nevertheless, I doubt calling for more contrarians is helpful with respect to your purposes. The question of how to increase the number of contrarians is naturally answered by proposals to create a more contrarian-friendly environment, which, if implemented, would attract a disproportionately high number of people who like being contrarians, whatever the local orthodoxy. My suggestion is instead to try to attract a more diverse set of people, even those who are not interested in the topics you consider important. You would profit indirectly, since some of them would eventually get engaged in your favourite discussions and bring fresh ideas. Incidentally, they would also somewhat lower the level of discourse, but I am afraid that is an inevitable side effect of any anti-cult policy.

comment by Viliam_Bur · 2012-04-19T13:34:46.617Z · LW(p) · GW(p)

Do you also think that having more contrarians who disagree that "2+2=4" would increase our likelihood of having correct beliefs? I mean, if they are wrong, we will see the weakness in their arguments and refuse to update, so there is no harm; but if they are right and we are wrong, it could be very helpful.

More generally, what is your algorithm for deciding for which values of X we need more contrarians who disagree with X?

Replies from: TimS
comment by TimS · 2012-04-19T14:13:40.149Z · LW(p) · GW(p)

If people come to LessWrong thinking "2+2 != 4" or "computer manufacturing isn't science", is saying "You're stupid" really raising the sanity line in any way? In short, we should distinguish between punishing disagreement and punishing obstinate behavior/contrarianism.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-20T03:42:10.735Z · LW(p) · GW(p)

"computer manufacturing isn't science"

Well, computer manufacturing isn't science, it's engineering.

Replies from: TimS
comment by TimS · 2012-04-20T19:30:43.031Z · LW(p) · GW(p)

If someone says, "I believe in computers and GPS, but not quantum mechanics or science" then they are deeply confused.

Replies from: None
comment by [deleted] · 2012-04-24T00:48:32.833Z · LW(p) · GW(p)

Has there been a glut of those on LessWrong?

Replies from: TimS
comment by TimS · 2012-04-24T02:38:41.044Z · LW(p) · GW(p)

This. It's obviously very possible that this was a troll, but that's not my read.

Edit: There were one or two others talking a lot without contributing much that seemed to be the impetus for this discussion post. Wei Dai's post seems to be a reaction to that post.

comment by orthonormal · 2012-04-19T04:00:32.812Z · LW(p) · GW(p)

LW seems to be slowly becoming self-obsessed.

It waxes and wanes. Try looking at all articles labeled "meta"; there were 10(!) in April of 2009 that fit your description of meta-debates (arguing about the karma system, the proper use of the wiki, the first survey, and an Eliezer post about getting less meta).

Granted, that was near the beginning of Less Wrong... but then there was another burst with 5 such articles in April 2010 as well. (I don't know what it is about springtime...) Starting the Discussion area in September 2010 seems to have siphoned most of it off of Main; there have been 3-5 meta-ish posts per month since then (except for April 2011, in which there were 9... seriously, what the hell is going on here?)

Replies from: JenniferRM
comment by JenniferRM · 2012-04-19T05:34:23.818Z · LW(p) · GW(p)

Maybe April Fools day gets people's juices going?

comment by thomblake · 2012-04-19T00:42:00.870Z · LW(p) · GW(p)

LW seems to be slowly becoming self-obsessed.

I don't see how you could possibly be observing that trend. The earliest active comment threads on Less Wrong were voting / karma debates. Going meta is not only what we love best, it's what we're best at, and that's always been so.

You post a question about calculus in the discussion section and get downvoted, since it is "off topic" - ask on MathOverflow. A question about biology? Downvoted, if it is not an ev-psych speculation. Physics? Downvoted, even if it is of the most popular QM-interpretational sort. A puzzle? Downvoted.

Whut?

Links or it didn't happen.

Replies from: JenniferRM, orthonormal, Viliam_Bur
comment by JenniferRM · 2012-04-19T05:26:58.447Z · LW(p) · GW(p)

LW seems to be slowly becoming self-obsessed.

I don't see how you could possibly be observing that trend. The earliest active comment threads on Less Wrong were voting / karma debates. Going meta is not only what we love best, it's what we're best at, and that's always been so.

Yes, but the real question is why we love going meta. What is it about going meta that makes it worthwhile to us? Some have postulated that people here are actually addicted to going meta because it is easier to go meta than to actually do stuff, and yet despite the lack of real effort, you can tell yourself that going meta adds significant value because it helps change some insight or process once but seems to deliver recurring payoffs every time the insight or process is used again in the future...

...but I have a sneaking suspicion that this theory was just a pat answer that was offered as a status move, because going meta on going meta puts one in a position of objective examination of mere object level meta-ness. To understand something well helps one control the thing understood, and the understanding may have required power over the thing to learn the lessons in the first place. Clearly, therefore, going meta on a process would pattern match to being superior to the process or the people who perform it, which might push one's buttons if, for example, one were a narcissist.

I dare not speculate on the true meaning and function of going meta on going meta on going meta, but if I were forced to guess, I think it might have something to do with a sort of ironic humor over the appearance of mechanical repetitiveness as one iterates a generic "going meta" operation that some might naively have supposed to be the essence of human mental flexibility. Mental flexibility from a mechanical gimmick? Never!

Truly, we should all collectively pity the person who goes meta on going meta on going meta on going meta, because their ironically humorous detachment is such a shallow trick, and yet it is likely to leave them alienated from the world, and potentially bitter at its callous lack of self-aware appreciation for that person's jokes.

Replies from: Will_Newsome, None
comment by Will_Newsome · 2012-04-19T05:51:19.236Z · LW(p) · GW(p)

Related question: If the concept of meta is drawn from a distribution, or is an instance of a higher-level abstraction, what concept is best characterized by that distribution itself / that higher-level abstraction itself? If we seek whence cometh "seek whence", is the answer just "seek whence"? (Related: Schmidhuber's discussion about how Goedel machines collapse all the levels of meta-optimization into a single level. (Related: Eliezer's Loebian critique of Goedel machines.))

Replies from: JenniferRM
comment by JenniferRM · 2012-04-19T17:44:17.784Z · LW(p) · GW(p)

I laughed this morning when I read this, and thought "Yay! Theism!" which sort of demands being shortened to yaytheism... which sounds so much like atheism that the handful of examples I could find mostly occur in the context of atheism.

It would be funny to use the word "yaytheism" for what could be tabooed as "anthropomorphizing meta-aware computational idealism", because it frequently seems that humor is associated with the relevant thoughts :-)

But going anthropomorphic seems to me like playing with fire. Specifically: I suspect it helps with some emotional reactions and pedagogical limitations, but it seems able to cause non-productive emotional reactions and tenacious confusions as a side effect. For example, I think most people are better off thinking about "natural selection" (mechanistic) over either "Azathoth, the blind idiot god" (anthropomorphic with negative valence) or "Gaia" (anthropomorphic with positive valence).

Edited To Add: You can loop this back to the question about contrarians, if you notice how much friction occurs around the tone of discussion of mind-shaped-stuff. You need to talk about mind-shaped-things when talking about cogsci/AI/singularity topics, but it's a "mindfield" of lurking faux pas and tribal triggers.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-23T07:18:52.291Z · LW(p) · GW(p)

The following was hastily written, apologies for errors.

But going anthropomorphic seems to me like playing with fire. Specifically: I suspect it helps with some emotional reactions and pedagogical limitations, but it seems able to cause non-productive emotional reactions and tenacious confusions as a side effect. For example, I think most people are better off thinking about "natural selection" (mechanistic) over either "Azathoth, the blind idiot god" (anthropomorphic with negative valence) or "Gaia" (anthropomorphic with positive valence).

(I would go further, and suggest not even thinking about "natural selection" in the abstract, but about specific ecological contingencies and selection pressures, and especially the sorts of "pattern attractors" from complex systems. If I think about "evolution" I get this idea of a mysterious propelling force, rather than of how the optimization pressure comes from the actual environment. Alternatively, Vassar has previously emphasized thinking of evolution as a mere statistical tendency, not an optimizer as such, or something like that.)

I think one thing to keep in mind is that there is a reverse case of the anthropomorphic error, which is the pantheistic/Gnostic error, and that Catholic theologians were often striving hard to carefully distinguish their conception of God from mystical or superstitious conceptions, or conceptions that assigned God no direct role in the physical universe. But yeah, at some point this emphasis seems to have hurt the Church, 'cuz I see a lot of atheists thinking that Christians think that God is basically Zeus, i.e. a sky father that is sometimes a slave to human passions, rather than a Being that takes game theoretic actions which are causally isomorphic to the outputs of certain emotions to the extent that those emotions were evolutionarily selected for (i.e. given to men by God) for rational game theoretic reasons. The Church was traditionally good at toeing this line and appealing to people of very different intelligences, having a more anthropomorphic God for the commoners and a more philosophical God for the monks and priests, but I guess somewhere along the way this balance was lost. I'm tempted to blame the Devil working on the side of the Reformation and the Enlightenment, but I suppose realistically some blame must fall on the temporal Church.

Alternatively, maybe you do accept Neoplatonist or Catharian thinking where we have infinitely meta-aware computational agents as abstractions without any direct physical effect that isn't screened off by the Demiurge (or cosmological natural selection or what have you). In that case I tentatively disagree, but my thoughts aren't organized well enough for me to concisely explain why.

comment by [deleted] · 2012-04-24T00:49:10.139Z · LW(p) · GW(p)

Damn. You just got metametameta.

comment by orthonormal · 2012-04-19T02:45:03.422Z · LW(p) · GW(p)

Links or it didn't happen.

I thought of this Mitchell Porter post on MWI and this puzzle post by Thomas. As it happens, I downvoted both (though after a while, I dropped the downvote from the latter) and would defend those downvotes, but I can see how prase gets the impression that we only upvote articles on a narrow subset of topics.

Replies from: thomblake, wedrifid
comment by thomblake · 2012-04-19T14:08:39.914Z · LW(p) · GW(p)

Yeah, both of those are low-quality.

Replies from: prase
comment by prase · 2012-04-19T17:51:16.705Z · LW(p) · GW(p)

As for physics, I was thinking more about this post, whose negative karma I have already commented on. In the meantime I had forgotten that the post managed to return to zero.

"Low-quality" is too general a justification to recognise the detailed reasons of downvotes. Among the more concrete criticisms I recall many "this is off-topic, hence my voting down" reactions. My memories may be subject to bias, of course, and I don't want to spend time making a more reliable statistics. What I am feeling more certain about is, however, that there are many people who wish to keep all debates relevant to rationality, which effectively denotes an accidental set of topics, roughly {AI, charity donations, meta-ethics, evolution psychology, self-improvement, cognitive biases, Bayesian probability}. No doubt those topics are interesting, even for me. But not so much to keep me engaged after three (or how much exactly) years of LW's existence. And since I disagree with many standard LW memes, I suppose there may be other potential "contrarians" (perhaps more willing to voice their disagreements than I am) becoming slowly disinterested for reasons similar to mine.

Replies from: steven0461
comment by steven0461 · 2012-04-20T02:54:24.345Z · LW(p) · GW(p)

As for physics, I was thinking more about this post, whose negative karma I have already commented on. In the meantime I had forgotten that the post managed to return to zero.

Yes, it's sitting at +1 here and sitting at +2 at physics stackexchange. This supports the opposite of your view, suggesting that physics questions are almost as on-topic here as they are at physics stackexchange -- which is surely too on-topic.

comment by wedrifid · 2012-04-19T02:57:19.627Z · LW(p) · GW(p)

I thought of this Mitchell Porter post on MWI and this puzzle post by Thomas. As it happens, I downvoted both (though after a while, I dropped the downvote from the latter) and would defend those downvotes, but I can see how prase gets the impression that we only upvote articles on a narrow subset of topics.

Wow. The first one is only at -2? That's troubling. Ahh, nevermind.

comment by Viliam_Bur · 2012-04-19T09:09:28.411Z · LW(p) · GW(p)

Going meta is not only what we love best, it's what we're best at, and that's always been so.

Do we love going meta? Yes, we do.

Are we good at it? Sometimes yes, sometimes no; it also depends on the individual. But going meta is good for signalling intelligence, so we do it even when it's just a waste of time.

Has it always been so? Yes; the impracticality and procrastination of many intelligent people are widely known.

Replies from: h-H
comment by h-H · 2012-06-24T21:21:14.167Z · LW(p) · GW(p)

The akrasia you refer to is actually a feature, not a bug. Just picture the opposite: intelligent people rushing to conclusions and caring more about getting stuff done than about forsaking the urge to go with first answers and actually thinking.

My point is, we decry procrastination so much, but the fact is that it is good that we procrastinate; if we didn't have this tendency we would be doers, not thinkers. Not that I'm disparaging either, but you can't rush math, or more generally deep, insightful thought; that way lies politics and insanity.

In a nutshell, perhaps we care so much about thinking things through - or alternatively get a rush from the intellectual crack - that we don't really want to act, or at least don't want to act on incomplete knowledge. Hence the widespread procrastination, which, given the alternative, is a very good thing.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-24T22:22:04.065Z · LW(p) · GW(p)

It seems to follow from this model that if we measure the tendency towards procrastination in two groups, one of which is selected for their demonstrable capability for math, or more generally for deep, insightful thought, and the other of which is not, we should find that the former group procrastinates more than the latter group.

Yes?

Replies from: h-H
comment by h-H · 2012-06-24T22:35:37.971Z · LW(p) · GW(p)

Yes & I'd modify that slightly to "the former group needs to more actively combat procrastination".

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-24T22:40:59.806Z · LW(p) · GW(p)

Upvoted for not backing away from a concrete prediction.
I would be very surprised by that result.

Replies from: h-H
comment by h-H · 2012-06-24T22:50:02.113Z · LW(p) · GW(p)

Upvoted for good reasons for upvoting :)

For data, we could run a LW poll as a start and see. And out of curiosity, why would you be surprised?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-24T23:50:39.614Z · LW(p) · GW(p)

Hm. You seem to have edited the comment after I responded to it, in such a way that makes me want to take back my response. How would we tell whether the former group needs to more actively combat procrastination?

I would be surprised because it's significantly at odds with my experience of the relationship between procrastination and insight.

Replies from: h-H
comment by h-H · 2012-06-25T15:14:32.798Z · LW(p) · GW(p)

I have a habit of editing a comment for a bit after replying; I actually didn't see your response until after editing. I don't see how this changes your response in this instance, though?

I added that caveat since the former group might have members who originally suffered more from procrastination, as per the model, but eventually learned to deal with it; this might skew results if not taken into account.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-25T15:27:54.972Z · LW(p) · GW(p)

It changes my response because while I kind of understand how to operationalize "group A procrastinates more than group B", I don't quite understand how to operationalize "group A needs to more actively combat procrastination than group B." Since what I was approving of was precisely the concreteness of the prediction, swapping it out for something I understand less concretely left me less approving.

comment by John_Maxwell (John_Maxwell_IV) · 2012-04-18T23:26:33.691Z · LW(p) · GW(p)

LW seems to be slowly becoming self-obsessed.

This is a good point. Maybe future meta-discussions could happen on the talk pages for wiki articles, about specific changes to those articles, especially the about page and the FAQ? These actually represent how LW culture is being codified for new users, but unfortunately none of the recent debates seem to have resulted in substantial modifications to them.

It's too bad that automatic wiki editing privileges don't come with a certain level of karma; that would remove a trivial inconvenience and eliminate wiki spam.

Replies from: matt
comment by matt · 2012-04-25T20:31:03.329Z · LW(p) · GW(p)

It's too bad that automatic wiki editing privileges don't come with a certain level of karma

Hmmm... you know that wouldn't be too hard to arrange. Keeping the passwords in sync after a change to one account would be much more work, but might be ignorable.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-28T20:57:44.439Z · LW(p) · GW(p)

Ideally it seems like you would get your wiki authentication cookie automatically after logging into Less Wrong, so you could log in once and use both. I don't know if that changes things regarding passwords.
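
(As a minimal sketch of the karma-gate half of this idea, assuming a made-up User record and threshold; nothing below reflects the actual LW or wiki code:)

    from dataclasses import dataclass

    KARMA_THRESHOLD = 100  # assumed cutoff; the site could pick any value

    @dataclass
    class User:
        name: str
        karma: int

    def can_edit_wiki(user: User) -> bool:
        """Automatic wiki edit rights once a user's karma passes the threshold."""
        return user.karma >= KARMA_THRESHOLD

    # Example: an established user gets edit rights, a brand-new account does not.
    print(can_edit_wiki(User("alice", 150)))  # True
    print(can_edit_wiki(User("newbie", 3)))   # False

The single-sign-on half would then amount to the wiki trusting a session issued by LW after login, instead of trying to keep two password stores in sync.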

comment by John_Maxwell (John_Maxwell_IV) · 2012-04-18T23:20:51.128Z · LW(p) · GW(p)

You post a question about calculus in the discussion section and get downvoted, since it is "off topic" - ask on MathOverflow. A question about biology? Downvoted, if it is not an ev-psych speculation. Physics? Downvoted, even if it is of the most popular QM-interpretational sort. A puzzle? Downvoted.

Do you have examples of this sort of stuff so I can go vote it up?

Replies from: prase
comment by prase · 2012-04-19T18:45:00.611Z · LW(p) · GW(p)

For example, there are many posts tagged "physics", most of which hover around zero. A moderately interesting puzzle now stands at -7.

comment by cousin_it · 2012-04-18T23:16:35.736Z · LW(p) · GW(p)

Having more contrarians would be bad for the signal-to-noise ratio on LW, which is already not as high as I'd like it to be. Can we obtain contrarian ideas more cheaply? For example, one could ask Carl Shulman for a list of promising counterarguments to X, rated by strength, and start digging from there. I'd be pretty interested to hear his responses for X = utilitarianism, the Singularity, FAI, or UDT.

Replies from: CarlShulman, Vladimir_Nesov, lukeprog
comment by CarlShulman · 2012-05-08T09:35:04.176Z · LW(p) · GW(p)

I made a post on a personal blog on one of the more significant points against utilitarianism in my view. It's very rough, but I could cross-post it to Discussion if people wanted.

Replies from: cousin_it
comment by cousin_it · 2012-05-08T10:40:46.928Z · LW(p) · GW(p)

I really like how you frame the choice between altruism and selfishness as a range of different "original positions" an agent may assume. Thanks a lot, and please do more of this kind of work!

comment by Vladimir_Nesov · 2012-04-19T06:20:22.240Z · LW(p) · GW(p)

To generalize, this suggests re-purposing existing LWers to the role of contrarians, rather than looking for new people.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T06:53:54.286Z · LW(p) · GW(p)

Or designing a mechanism or environment that makes it easier for existent LW contrarians to express their ideas.

(My personal experience is that trying to defend a contrarian position on LW results in a lot of personal cheap shots, unnecessarily-aggressively-phrased counter-affirmations, or needless re-affirmations of the LW consensus. (E.g., I remember one LWer said he was trying to "tar and feather [me] with low-status associations". He was probably exaggerating, but still.) This stresses me out a lot and causes me to make errors in presentation and communication, and needlessly causes me to become adversarial. Now when discussing contrarian topics I start out adversarial in anticipation of personal cheap shots et cetera. Most of the onus is on me, but still, I think higher general standards or some sideways change in the epistemic environment could make constructive contrarianism a less stressful role for LWers to take up.)

Replies from: siodine
comment by siodine · 2012-04-19T22:00:54.211Z · LW(p) · GW(p)

Require X amount of karma to pay Y amount for an anonymous comment?

Require X amount of karma to pay for Y amount of karma added to your post so that it's more likely to be seen, or to counteract downvotes?
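
(A toy sketch of these two karma-payment mechanics; the stand-in values for X and Y, the names, and the error handling below are all invented for illustration, not actual site code or proposed numbers:)

    from dataclasses import dataclass

    @dataclass
    class User:
        karma: int

    @dataclass
    class Comment:
        score: int = 0

    ANON_MIN_KARMA, ANON_COST = 500, 20  # stand-ins for X and Y
    BOOST_RATE = 5                       # karma spent per point of score

    def post_anonymously(user: User) -> None:
        """Spend karma for the right to post one comment anonymously."""
        if user.karma < ANON_MIN_KARMA:
            raise PermissionError("not enough karma to post anonymously")
        user.karma -= ANON_COST

    def boost_comment(user: User, comment: Comment, points: int) -> None:
        """Spend karma to raise a comment's score, e.g. to offset downvotes."""
        cost = points * BOOST_RATE
        if user.karma < cost:
            raise PermissionError("not enough karma to buy this boost")
        user.karma -= cost
        comment.score += points

    # Example: a 600-karma user pays 20 karma to comment anonymously.
    u = User(karma=600)
    post_anonymously(u)
    print(u.karma)  # 580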

comment by lukeprog · 2012-04-18T23:27:59.254Z · LW(p) · GW(p)

Yes, a list of Carl's best arguments against standard positions is going to be of vastly higher quality than anything we would be likely to get from the best contrarians we can find.

Replies from: Will_Newsome, philh
comment by Will_Newsome · 2012-04-19T04:22:30.845Z · LW(p) · GW(p)

(FWIW Vassar, Carl, and Rayhawk (in ascending order of apparent neuroticism) are traditionally most associated with constructing steel men. (Or as I think Vassar put it, "steel men, adamantium men, magnetic monopolium men", respectively.))

comment by philh · 2012-04-19T00:53:48.318Z · LW(p) · GW(p)

If it's less signal but also less noise, it might be better overall. (And if we can't work out how to get more contrarians, this might be a useful suggestion anyway.)

Sarcasm is hard to respond to, because I don't know what your actual position is other than "not-that".

Replies from: thomblake
comment by thomblake · 2012-04-19T00:55:14.116Z · LW(p) · GW(p)

Sarcasm is hard to respond to, because I don't know what your actual position is other than "not-that".

I seriously doubt that was sarcasm.

Replies from: philh
comment by philh · 2012-04-19T01:10:46.328Z · LW(p) · GW(p)

Mm, on second reading I think you're right. "Vastly higher quality than anything we would be likely to get from the best contrarians we can find" comes across to me as having too many superlatives to be meant seriously. But "not-sarcastic" fits my model of lukeprog better.

(I was also influenced by it being at -1 when I replied. There's probably a lesson in contrarianism to be taken from that...)

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2012-04-19T21:26:02.763Z · LW(p) · GW(p)

"Vastly higher quality than anything we would be likely to get from the best contrarians we can find" comes across to me as having too many superlatives to be meant seriously.

Keep in mind that we're talking about Carl Shulman. If you know the guy it's pretty obvious that Lukeprog was dead serious.

comment by pragmatist · 2012-04-19T09:25:29.051Z · LW(p) · GW(p)

I disagree with quite a lot of the LW consensus, but I haven't really expressed my criticisms in the few comments I've made. I differ substantially from the Sequences' line on metaethics, reductionism, materialism, epistemology, and even the concept of truth. My views on these things are similar in many respects to those of Hilary Putnam and even Richard Rorty. Those of you familiar with the work of these gentlemen will know how far off the reservation this places me. For those of you who are not familiar with this stuff, I guess it wouldn't be a stretch to describe me as a postmodernist.

I initially avoided voicing my disagreements because I suspect that my collection of beliefs is not only regarded as false by this community, but also as a fairly reliable indicator of woolly thinking and a lack of technical ability. I didn't want to get branded right off the bat as someone not worth engaging with. The thought was that I should first establish some degree of credibility within the community by restricting myself to topics where the inferential distance between the average LWer and me is small. I think wannabe contrarians entering into any intellectual community should be encouraged to expend some initial effort on credibility-building by talking about stuff on which they by and large agree with the community. I haven't been following LessWrong for that long, but I gather that there was a time when Will Newsome's comments were a lot more.... orthodox. I'm guessing that fact has a lot to do with the way his criticisms are received now.

Another big reason I avoid talking about my disagreements is that they are sufficiently fundamental that I expect a large amount of pushback. I know I find it very hard to disengage from argument, and I suspect that's also true of a significant proportion of the posters here, so I'm worried that the discussion will be a horrible time suck. I really can't afford that right now. Perhaps at some time in the future, when I have a little more time, I'll write a discussion post detailing some of my objections.

Replies from: wedrifid
comment by wedrifid · 2012-04-19T09:42:27.607Z · LW(p) · GW(p)

I haven't been following LessWrong for that long, but I gather that there was a time when Will Newsome's comments were a lot more.... orthodox. I'm guessing that fact has a lot to do with the way his criticisms are received now.

He can still be found on the SingInst about us page.

Another big reason I avoid talking about my disagreements is that they are sufficiently fundamental that I expect a large amount of pushback. I know I find it very hard to disengage from argument, and I suspect that's also true of a significant proportion of the posters here, so I'm worried that the discussion will be a horrible time suck. I really can't afford that right now. Perhaps at some time in the future, when I have a little more time, I'll write a discussion post detailing some of my objections.

You do your name justice.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T10:28:51.033Z · LW(p) · GW(p)

He can still be found on the SingInst about us page.

(In case it's not obvious the description is not at all currently accurate. I am currently in the process of doing nothing. At some point I firmly decided that doing things is evil, so I try not to do things anymore, at least as a stopgap solution till I better understand the relevant motivational dynamics and moral philosophy. I still talk to people sometimes though, obviously, but to some extent I feel guilty about that too.)

Replies from: TheOtherDave, wedrifid, michaelsullivan
comment by TheOtherDave · 2012-04-19T15:40:37.770Z · LW(p) · GW(p)

Would it help you behave more morally by your lights if nobody replied to you?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-20T04:12:47.014Z · LW(p) · GW(p)

Good question. I don't think so.

comment by wedrifid · 2012-04-19T13:31:33.220Z · LW(p) · GW(p)

At some point I firmly decided that doing things is evil, so I try not to do things anymore

I still act as a Christian in much of my social life, so in a certain (not epistemically literal) sense hearing this from 'another believer' strikes me as sacrilege. The Parable of the Talents has a clear point to make on this subject! You are defying His will and teachings.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T13:51:26.362Z · LW(p) · GW(p)

If only it were so easy to tell righteous exploration from liberal folly. But anyway, it's just a stopgap solution. Likely preparation for a sojourn in the desert, and after that, God knows.

Replies from: wedrifid
comment by wedrifid · 2012-04-19T13:58:28.457Z · LW(p) · GW(p)

Likely preparation for a sojourn in the desert, and after that, God knows.

40 days and 40 nights?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T14:18:21.836Z · LW(p) · GW(p)

I don't yet understand the (Kabbalistic?) significance of the number 40. Haven't looked into it. Maybe if I figured it out then I'd find 40 days, 40 nights uniquely appealing.

Replies from: wedrifid, None, Hul-Gil
comment by wedrifid · 2012-04-19T14:22:13.507Z · LW(p) · GW(p)

I don't yet understand the (Kabbalistic?) significance of the number 40. Haven't looked into it. Maybe if I figured it out then I'd find 40 days, 40 nights uniquely appealing.

Worked for Elijah, Moses and Jesus. (I'd recommend eating food though - or at least drinking Gatorade.)

comment by [deleted] · 2012-04-24T00:45:24.330Z · LW(p) · GW(p)

Many languages, especially in antiquity, have colloquial ways of phrasing "forever" or "a long time" with a superficially-specific count. In Japanese, "ten thousand years" can be used to indicate an indefinitely long period; in Ancient Hebrew, "40 days and 40 nights" does that job.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-24T11:57:10.274Z · LW(p) · GW(p)

But is there any known reason for picking 40 specifically? I wouldn't expect the Jews to choose their numbers arbitrarily.

Replies from: None
comment by [deleted] · 2012-04-24T17:41:32.076Z · LW(p) · GW(p)

Given the number of such numerically-precise-but-pragmatically-vague sayings in many languages, and the apparent failure of them to converge beyond shared cultural contact (Classical Arabic has the same use pattern for "40", as do many Middle Eastern languages from antiquity, though I'll admit that my linguistic knowledge doesn't do more than touch on this region superficially, other'n a few years of Modern Hebrew), I don't think "arbitrary" quite captures it -- they simply adopted a use pattern that was widespread in the time and place where they were.

comment by Hul-Gil · 2012-04-19T18:33:14.344Z · LW(p) · GW(p)

What do you think about Kabbalah?

40 is sometimes used, in the Torah, to indicate a general large quantity - according to Google. It also has associations with purification and/or wisdom, according to my interpretation of the various places it appears in the Bible as a whole. (There are a lot of them.)

comment by michaelsullivan · 2012-04-20T19:22:44.172Z · LW(p) · GW(p)

After a long hiatus from deep involvement in comment threads here -- I actually can't tell if this is serious, or a brilliant mockery of Eliezer's decisions around creating AGI [*]

comment by orthonormal · 2012-04-18T23:51:49.581Z · LW(p) · GW(p)

There's one tactic that's worked well to get LW posts on neglected topics: having a competition for the best post on a subject. A $100 prize resulted in some excellent posts on efficient charity, and the Quantified Health Prize (substantially more money) led to some good analyses of the data on dietary supplementation.

What about having a contest for the best contrarian post on topic X? Personally, I'd chip in a few bucks for a good contrarian post on intelligence explosion, the mathematical universe, the expected value of x-rationality, and other topics.

(I had this idea after reading this comment, and now that I think of it I'm reminded of ciphergoth's survey of anti-cryonics writing as well.)

Replies from: Onelier, jsalvatier
comment by Onelier · 2012-04-19T05:06:19.335Z · LW(p) · GW(p)

Stream of consciousness. Judge me that ye may be judged. If you judge it by first-level Less Wrong standards, it should be downvoted (vague unjustified assertions, thoughtlessly rude), but maybe the information is useful. I look first for the heavily downvoted posts and enjoy the responses to them best.

I found the discussion on dietary supplementation interesting, in your link and elsewhere. As I recall, the tendency was for the responses (not entrants, but people's comments around town) to be both crazy and stupid (with many exceptions, e.g., Yvain, Xacharaiah). I recall another thread on the topic where the correct comment ("careful!") was downvoted and its obvious explanation ("evolution works!"), offered afterward, was upvoted. Since I detected no secondary reasons for this, it was interesting in implying Less Wrongians did not see the obvious. Low certainties attached, since I know I know nothing about this place. I'm deliberately being vague.

In general, Less Wrongians strike me as a group of people of impaired instrumental rationality who are working to overcome it. Give or take, most of you seem to be smarter than average but also less trustworthy, less able to exhibit strong commitments, etc. Probably this has been written somewhere hereabouts, but a lot of irrationalities are hard-to-overcome local optima; have you really gone far enough onto the other side? Incidentally, that could be a definition for x-rationality (if never actually achieved): actually epistemically rational enough that it's instrumentally useful. Probably a brutally hard threshold to achieve, and it seems untrue of here, as I believe I've seen threads comment.

I was curious about the background of the people offering lessons at the rationality bootcamp, and saw some blog entry by one of them against, oh, being conservative in outlook (re: risk aversion). It was incredibly stupid; I mean, almost exclusively circular reasoning. You obviously deviate from the norm in your risk aversion. You're not obviously more successful than the norm (or are you? perhaps I'm mistaken). Maybe it's just a tough row to hoe, but that's the real task.

Personal comment: I realize Dmitry has been criticized a bit elsewhere and the voting trend doesn't support generalization to the community at large, but my conversation with him illustrates what I generally believe about this place. I knew more than he did. I said enough that he should realize this. He didn't realize it and shoehorned his response into a boring framework. I had specific advice to give, which I didn't get to, and realized I was reluctant to give (most Less Wrong stuff seems weak to me).

A whole lot of Less Wrong seems to be going for less detail, less knowledge, more use of frameworks of universal applicability and little precision. The sequences seem similar to me: Boring where I can judge meaning, meaningless where I can't. And always too long. I've read about four paragraphs of them in total. The quality of conversation here is high for a blog, of course, but low for a good academic setting. Some of the mild sneering at academics around here sounds ridiculous (an AI researcher believes in God). AI's a weak field. All round, papers don't quite capture any field and are often way way behind what people roughly feel.

Real question: Do you want me here?

I like you guys. I agree with you philosophically. I have nothing much to offer unless I put some effort into it (e.g., actually read what people write, etc). No confusion: You should be downvoting posts like this in general. You might want to make an exception 'cause it's worth hearing a particular rambling mindset once. My effort is better spent elsewhere (I can't imagine you'd disagree). I can't see anything that can be offered to me. I feel like I was more rational at age 7 than you are now (I wrote a pro and con list for castrating myself for the longevity and potential continuity of personality gains; e.g., maintaining the me of 7). A million other things. I'm working on real problems in other areas now.

Replies from: TimS, Viliam_Bur, Luke_A_Somers
comment by TimS · 2012-04-19T14:20:48.083Z · LW(p) · GW(p)

A whole lot of Less Wrong seems to be going for less detail, less knowledge, more use of frameworks of universal applicability and little precision. The sequences seem similar to me: Boring where I can judge meaning, meaningless where I can't. And always too long. I've read about four paragraphs of them in total. The quality of conversation here is high for a blog, of course, but low for a good academic setting. Some of the mild sneering at academics around here sounds ridiculous (an AI researcher believes in God). AI's a weak field. All round, papers don't quite capture any field and are often way way behind what people roughly feel.

This. A thousand times this. As a lawyer, I find LessWrong pattern matches with people outside a complicated field who are convinced that those in the field are idiots because they think "the field is not that complicated."

That said, "Boring where I can judge meaning, meaningless where I can't" is an unfair criticism. Lots of really excellent ideas seem boring if you have already internalized the core ideas.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-20T02:46:42.693Z · LW(p) · GW(p)

Reminds me of part of a comment on Moldbug's blog, by Nick Szabo:

[legal reasoning]

It's a disciplined and competitive (dialectic, in the true original sense of that term) use of analogies, precedents, and emergent rules, far more sophisticated than normal use of analogy and metaphor. I learned it my first year of law school and it's a radically different kind of thinking I had never encountered before in school. The Bayesian bloggers seem to be completely oblivious to it, and to the tremendous value of tradition generally. That makes them, from my POV, culturally illiterate and incompetent to opine on law or politics. Yes, legal training also made me stuck up. :-)

If you can't afford law school, you can learn most of what you need to know from Legal Method and Writing by Charles R. Calleros and a first year law school common law casebook (Torts, Property, or Contracts).

The extremely short description of legal or scholastic reasoning is to think of a proposition or dispute as Schrodinger's Cat, both true and false at the same time, or each party at fault or not at the same time, or the appropriate dichotomy. Then gather all the moral or legal disputes that are similar to this one. Argue by analogy for each side both from the facts of those prior disputes and from the informal rules ("holdings") implied by the decisions resolving those disputes. This kind of reasoning allows a lawyer to anticipate an opponent's as well as their own argument in a case, and allows a judge to appreciate both sides of an argument, the latter also crucial, but often absent, in reasoning about politics, morals, and the more complex areas of science, which in absence of this kind of discipline is dominated by confirmation bias and lack of understanding of other points of view.

Law also has a sophisticated set of qualitative probabilities I've blogged on, which imply not just degrees of truth but various aspects of gathering evidence, burdens of proof, and so on. The scientific method derived in large part from the Continental law of evidence, with which Galileo, Leibniz, etc. were intimately familiar having studied law. But legal reasoning, or scholastic reasoning as it used to be known, is still capable of covering a far wider swath of the human experience than scientific reasoning which is really just a special case and applies well only to hard evidence or the hard sciences.

I've been studying the history of common law lately due to Nick's influence, after which I'm gonna read the book he recommended. I notice that his description of legal reasoning is very similar to how I use my chess subskills for rationality.

Replies from: TimS
comment by TimS · 2012-04-20T19:51:59.279Z · LW(p) · GW(p)

The extremely short description of legal or scholastic reasoning is to think of a proposition or dispute as Schrodinger's Cat, both true and false at the same time, or each party at fault or not at the same time, or the appropriate dichotomy. Then gather all the moral or legal disputes that are similar to this one. Argue by analogy for each side both from the facts of those prior disputes and from the informal rules ("holdings") implied by the decisions resolving those disputes. This kind of reasoning allows a lawyer to anticipate an opponent's as well as their own argument in a case, and allows a judge to appreciate both sides of an argument, the latter also crucial, but often absent, in reasoning about politics, morals, and the more complex areas of science, which in absence of this kind of discipline is dominated by confirmation bias and lack of understanding of other points of view.

This is a moderately reasonable model of litigation, but it isn't complete. For example, Thurgood Marshall litigated separate-but-equal in the law school context specifically because every judge has a gut feeling of how to compare law schools, which just isn't true of other educational institutions. In law school, I heard the apocryphal story that the lawyer for the State of Texas argued that the new segregated law school was just as good as UT Law School, and Justice Clark - a graduate of UT - passed a note to a colleague that read "Bullshit." That's clever lawyering and has nothing to do with arguing from precedent.

Further, not all law is litigation. The legislature is empowered to make new laws that have no relationship to old laws. In short, there's a fair amount more to the practice of law than reasoning by analogy, even if reasoning by analogy is an important skill for a lawyer.

comment by Viliam_Bur · 2012-04-19T09:41:03.181Z · LW(p) · GW(p)

I like your style of writing. Though: too many ideas, difficult to rate and respond to.

Karma always has a random component. The karma of one comment is not significant; the karma of 10 comments shows a trend. I once received negative karma for a comment showing an obvious error in the reasoning of others, but it only happened once in maybe a hundred comments, so I don't make a drama of it. But yeah, it might be painful if that happened to someone's first comment on LW.

Instrumental rationality is a known problem of intelligent people. My worst experience was Mensa: huge signalling, almost nothing ever done; and if something is done, it's almost always done by the same two or three people, who could just as well have done it on their own. Compared with that, people at LW are relatively high in instrumental rationality -- they have a working website, they write good articles, they do research, they organize meetups and seminars. But yes, we could do a lot better. Instead of going meta, people could focus and write about the things they care about. Not doing this in a web discussion is probably a symptom of not doing it in real life.

Yes, being convinced of one's own rationality can lead to overconfidence. I don't know a cure. Perhaps repeated exposure to disagreement from other rational people will eventually move one to update. Another reason for people to focus on what they are good at -- providing more evidence for their rationalist friends.

Re: last three paragraphs -- the choice to stay or leave is on you. Don't participate in the discussions you consider worthless; write something about the real things you work on. (And perhaps I should do the same.) But this is not a new idea -- we have regular "what are you working on" threads here.

Replies from: twolier
comment by twolier · 2012-04-20T04:46:33.225Z · LW(p) · GW(p)

Same dude here, despite the name. Hypothetical: Should a prof at, say, Harvard working on the genetics of longevity post and spend time here?

Discussing his own work would identify him and probably not be very productive. Let's further say he's pre-tenure. Top places have a very different tenure success rate than even very good places, so it's an iffy point in his career.

Does Less Wrong have anything to offer him? And doesn't he serve Less Wrong best by staying away and working? (or even "playing" elsewhere)

My central criticism of this place may well be that some of you won't see that there really is no question what the right answer is.

Incidentally, I perfectly agree with your comment, TimS, but the point is that I internalized those ideas independent of LessWrong. ViliamBur, you misunderstood my karma point. I was merely acknowledging that my comment's being upvoted and Dmitry's downvoted means I can't use it to indict the community at large (and instead was offering it as an illustration of my mindset). Luke: yup. But I did skim through the papers from the institute. Not very good. I suspect I can mostly infer the sequences from very basic background knowledge in game theory, philosophy, physics, neuroscience, psych, etc., and from reading current comment threads. I don't see anything too fancy implied by the secondary sources (I enjoy reading the back-and-forth more).

Uh, what else. I enjoy HPMOR. What I like about it, however, is bad about me: Basically what Robin feared in his comment on OvercomingBias. I should (and will) go. It goes without saying that you wish me well. I just felt like saying hello because I like you. And if you can make it so I can talk to you profitably, I'd like that. Not your fault and I'm sorry to have said it, but I thought you should know.

Replies from: orthonormal, Viliam_Bur
comment by orthonormal · 2012-04-20T18:30:49.547Z · LW(p) · GW(p)

You should reply to different commenters individually, since then it will send them each notifications that you're replying. Few readers check all branches of the thread that they replied to.

comment by Viliam_Bur · 2012-04-20T06:43:21.531Z · LW(p) · GW(p)

Hypothetical: Should a prof at, say, Harvard working on the genetics of longevity post and spend time here? [...] Does Less Wrong have anything to offer him?

He could discuss the less critical parts of his work. If there is a meetup near his home, he could go there and try to find someone to cooperate with. Or if he is an expert at genetics but less expert at math, he could ask someone to help him with statistics.

Also, he could just spend his free time here, if he prefers the company of rational people and has trouble finding it outside of his work.

And doesn't he serve Less Wrong best by staying away and working?

That question is relevant for all of us, experts or not. Even for me there are many things I should be doing rather than procrastinating on LW. However, I know myself -- I spend a lot of time online, so given that, at least I can choose a site that gives me intelligent discussions.

If you spend your time better, keep doing what works for you. Maybe visiting LW once a month and reading the articles in the "Main" part would be a reasonable compromise, if you want to participate. (I don't know if there is an RSS feed for "Main".)

Replies from: asr
comment by asr · 2012-04-20T07:00:28.786Z · LW(p) · GW(p)

He could discuss the less critical parts of his work. If there is a meetup near his home, he could go there and try to find someone to cooperate with. Or if he is an expert at genetics but less expert at math, he could ask someone to help him with statistics.

Suppose you were a professional researcher looking for statistical help. Would you (A) go to a LessWrong meetup, (B), give a talk at the Statistics department of your hypothetical university, or (C) ask your colleagues which statisticians or statistically-literate graduate students they have collaborated with recently?

I'm sure the LessWrong community believes in statistics, which is good. But I don't believe the average member of this crowd is any better at the humdrum practicalities of statistical hypothesis testing than your average working scientist. I would guess LessWrong skews younger and less expert.

Also, he could just spend here his free time, if he prefers company of rational people and has problem finding it outside of his work.

You will not have a hard time finding smart rational people on the Harvard campus! Or, for that matter, near any major university.

I'm with twolier -- LessWrong is fun, but I don't see it being all that professionally valuable for people in most technical fields.

comment by Luke_A_Somers · 2012-04-19T16:58:25.413Z · LW(p) · GW(p)

I've read about four paragraphs of them in total.

??? Seriously?

comment by jsalvatier · 2012-04-20T23:48:04.239Z · LW(p) · GW(p)

I like this idea and am even willing to put money towards it, but some other similar experiments (of mine; maybe others would be better at this) didn't turn out so well (this one got no entries; spaced repetition turned out okay, but it only got one good submission). Let me know if you're interested in putting effort into this (it wouldn't be hard to convince me to also do so, but I probably need someone else to help).

comment by orthonormal · 2012-04-18T23:32:03.371Z · LW(p) · GW(p)

One relevant dynamic is the following: if an idea is considered "absurd" to the mainstream, there will be very few people who take the idea seriously yet disagree with it. Social pressure forces polarization: if you're going to disagree with it, you might as well agree with all your normal friends that the idea is kooky.

Thus it's especially hard to find good contrarians for a forum that takes several "absurd" positions.

comment by daenerys · 2012-04-18T22:58:19.855Z · LW(p) · GW(p)

Upvote if you generally no longer post or discuss opinions that disagree with LW consensus.

Feel free to leave a comment on your experiences and reasons for this.

(If you would like to downvote this poll, please downvote the karma balance below instead, so that we can still get an accurate idea of the number of people who have this reaction.)

Replies from: pedanterrific, Multiheaded, Larks, NancyLebovitz, daenerys
comment by pedanterrific · 2012-04-19T05:01:19.620Z · LW(p) · GW(p)

with LW census

(consensus)

And what do you mean "no longer"? Is the idea "upvote if your contrarianism has been downvoted out of you", or what?

Replies from: daenerys
comment by daenerys · 2012-04-19T13:31:48.423Z · LW(p) · GW(p)

silly typos. fixed, thanks!

comment by Multiheaded · 2012-04-19T04:38:27.900Z · LW(p) · GW(p)

I'm curious, do you? If you do, why?

comment by Larks · 2012-04-19T04:58:50.337Z · LW(p) · GW(p)

This poll is poorly designed; karma balances often get downvoted less than the vote options get upvoted, so this will tend to over-estimate how many people no longer dissent.

For example, when I loaded this page, this comment was at 5 and the karma balance was at -3

Replies from: daenerys, Random832
comment by daenerys · 2012-04-19T14:37:36.367Z · LW(p) · GW(p)

karma balances often get downvoted less than the vote options get upvoted, so this will tend to over-estimate how many people no longer dissent.

To me, when a karma balance is downvoted less than poll options are upvoted, it means that people think running the poll deserves some karma. This does not overestimate the number of people who have reacted to voting patterns, since that number does not come from the karma balance. If someone (who has NOT reacted to voting patterns) wants to give karma for running the poll, they would upvote the karma balance, not the voting comment.

Also, the purpose of the poll is to see whether a relatively high or relatively low amount of people have reacted to the voting patterns this way. Exact numbers are not needed.

comment by Random832 · 2012-04-19T13:00:35.294Z · LW(p) · GW(p)

I have a proposal for a new structure for poll options:

The top-level post is just a statement of the idea, and voting has nothing to do with the poll. This can be omitted if the poll is an article.

A reply to this post is a "positive karma balance" - it should get no downvotes, and its score should be equal to the number of participants in the poll.

There are two replies to the "positive karma balance" post; you downvote one of them to select that option in the poll.

This way voting either way in the poll has the same cost (one downvote), the enclosing post will have a high score (keeping it from being lost), and the only way to "corrupt" the poll results without leaving a trace [downvote the count post and upvote one of the option posts] simply cancels someone's vote without allowing you to make your own.
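A minimal sketch of the tallying this implies, with made-up names and numbers (this is illustrative, not an actual LW feature):

```python
# Hypothetical tally for the proposed poll structure: each participant
# upvotes the "positive karma balance" post and downvotes exactly one
# option post, so an option's tally is minus its (non-positive) karma.

def poll_results(balance_score, option_scores):
    """balance_score: karma of the balance post (= number of participants).
    option_scores: dict mapping option name to its raw karma."""
    tallies = {name: -score for name, score in option_scores.items()}
    # If nobody tampered, option downvotes sum to the participant count.
    consistent = sum(tallies.values()) == balance_score
    return tallies, consistent

print(poll_results(42, {"yes": -25, "no": -17}))
# ({'yes': 25, 'no': 17}, True)
```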

Replies from: HonoreDB
comment by NancyLebovitz · 2012-04-19T06:27:04.605Z · LW(p) · GW(p)

Nitpick-- that should be the LW consensus, not LW census.

comment by daenerys · 2012-04-18T22:58:51.254Z · LW(p) · GW(p)

Karma balance

(or downvote this, if you don't like the idea of this poll)

comment by steven0461 · 2012-04-18T22:42:49.824Z · LW(p) · GW(p)

If we have less contrarianism than is optimal, it seems like the root of the problem is that people often vote for agreement rather than for expected added value. I would start looking there for a solution.

Also, the site would be able to absorb more contrarians if their bad contributions didn't cause as much damage. It would help if we exercised better judgment in deciding when a criticism is worth engaging with and when we should just stop feeding the trolls.

Replies from: David_Gerard, John_Maxwell_IV
comment by David_Gerard · 2012-04-18T22:45:43.247Z · LW(p) · GW(p)

Change the mouseovers on the thumbs-up/thumbs-down icons from "Vote up"/"Vote down" to "More like this"/"Less like this". I've suggested this before and it got upvotes, I suggest now it might be time to implement it.

Replies from: Will_Newsome, Unnamed, John_Maxwell_IV, thomblake, steven0461, Multiheaded, A4FB53AC
comment by Will_Newsome · 2012-04-19T08:36:44.306Z · LW(p) · GW(p)

Stupid alternative: Instead of up/down, have blue/green. Let chaos reign as people arbitrarily assign meaning.

Replies from: pedanterrific, Nornagest, Multiheaded, faul_sname
comment by pedanterrific · 2012-04-19T11:46:07.026Z · LW(p) · GW(p)

Classic Will_Newsome. Greenvoted.

Replies from: David_Gerard
comment by David_Gerard · 2012-04-19T14:21:47.066Z · LW(p) · GW(p)

BLUE!!

... well, it said blue when I clicked on it ...

comment by Nornagest · 2012-04-19T08:53:19.722Z · LW(p) · GW(p)

Predicted outcome: within a couple of weeks, blue/green will have understood but undocumented positive/negative associations. Votes will be noisier, though, thanks mostly to confused newcomers and the occasional contrarian pursuing an idiosyncratic interpretation. Complaints about downvotes, and color politics jokes, will both become more common.

p = 0.7 contingent on implementation for core claim, .5-.6 range for corollaries.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T09:23:10.965Z · LW(p) · GW(p)

0.7 strikes me as low.

Proposed chaotic refinement: Blue/green, but switch them every 18 to 30 hours (randomly sampled, uniform distribution).

(ETA: Upon reflection days or weeks would be better, to increase chaos/noise ratio. Would also work better with prominent "top contributors for last 30 days" lists for both blue and green, and more adulation/condemnation based on those lists.)

Replies from: shokwave
comment by shokwave · 2012-04-20T02:21:24.679Z · LW(p) · GW(p)

Other refinements: each person is randomly and permanently assigned one of two mappings: either they see blue/green as-is, OR they see blue/green but it's actually green/blue behind the scenes. This makes any explicit discussion of blue/green more difficult.

Or: Each person actually has grue and bleen buttons. At some time t, they are suddenly voting for the other colours. An extended form of this looks similar to your ETA.

comment by Multiheaded · 2012-05-14T10:20:23.219Z · LW(p) · GW(p)

Let chaos reign as people arbitrarily assign meaning.

And you call yourself an anti-liberal traditionalist? :)

Replies from: Will_Newsome
comment by Will_Newsome · 2012-05-14T10:38:44.980Z · LW(p) · GW(p)

Am I an anti-liberal traditionalist? Humans are so silly. I have an idea. If you want to hit the right-wingers with something out of left field, try Rigorous Intuition, especially those posts over on the right under the heading "The Military-Occult Complex, ritual abuse/mind control, and 'High Weirdness'". I guarantee a few WTFs.

Replies from: Multiheaded, Eugine_Nier
comment by Multiheaded · 2012-05-14T10:57:55.506Z · LW(p) · GW(p)

Heh, thanks. Probably won't work on the local right-wing technocrats, however, as they are simply not interested in issues like the workings of the Bush regime or the military-industrial complex. I'm curious enough to take a look, though.

Edit: heh, that blog quotes Dick's novels - already a good sign to me.

comment by Eugine_Nier · 2012-05-15T04:23:15.625Z · LW(p) · GW(p)

I'm curious why you picked this conspiracy theorist in particular.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-05-15T06:42:22.545Z · LW(p) · GW(p)

Availability heuristic; I haven't read many conspiracy theorists. He struck me as more careful and more cogent than the few others I'd read; like, he bothers to explicitly bracket certain ideas as having a good chance of being wrong, and he emphasizes giving up on a thread if it doesn't seem to be fruitful. He's generally pragmatic. He also has a healthy skepticism about the motives and natures of claimed demonic/alien entities, not in the sense of categorically doubting that they're supernatural/alien/"weird", but in the sense of not assuming that, just because they say they want to help humanity and so on, that is strong evidence of actual benevolence: "I find it a fascinating frustration that many of those convinced of a massive government cover-up fall over themselves to accept the words of non-human entities." — this post on Fatima. Being pseudo-Catholic and schizotypal I naturally worry about demons—in fact that's part of why I'm pseudo-Catholic and not, say, pseudo-Tibetan-Buddhist. So Jeff Wells scores a lot of points with me for his caution on that front.

Do you have recommendations for other conspiracy theorists, or conspiracy theorist debunkers? 'Cuz honestly I think Jeff Wells makes a compelling, coherent case for High Weirdness, which is worth keeping in mind as a live hypothesis, though I don't think we'll have the collaborative argumentation tools necessary to rationally assess the hypothesis for at least another five years.

Replies from: Jayson_Virissimo, Eugine_Nier
comment by Jayson_Virissimo · 2012-05-15T10:13:46.897Z · LW(p) · GW(p)

I visited Fatima in 2007 with my family. It was...spooky...and in a way that the Vatican was not (that is to say, not in the same way as any old, massive, historically-important thing is). On the other hand, my Portuguese isn't very good, so I may not have understood as much as I thought.

comment by Eugine_Nier · 2012-05-15T07:49:20.037Z · LW(p) · GW(p)

I clicked around a little on his site. Most of his conspiracy theories appear to be political and he's clearly been mind-killed by politics.

As for evaluating "conspiracy theories", I recommend you start by reading this blog post by Eric Raymond, also this comment by Konkvistador if you haven't already seen it.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-05-15T08:34:23.256Z · LW(p) · GW(p)

Sounds like you might not have read enough to see where his strengths and weaknesses are. Politics is his weakness and I mostly ignore that stuff, but I'm more interested in his paranormal stuff including the military-occult stuff, where he seems to have less of an ax to grind and sometimes presents a bunch of interesting source material without trying too hard to spin a story out of it. E.g. I like his report on Fatima, linked in my previous comment; what do you think of that one? (Though I suppose I should have told Multiheaded that Wells' political stuff is bad and that his High Weirdness stuff is way better. Oh well.)

In my previous comment I for some reason conflated High Weirdness with conspiracy theory; in reality I suspect they're not that connected. I'm more interested in High Weirdness than conspiracy, so any critiques of High Weirdness would be useful. I'm really unimpressed with standard "skeptic" arguments. Re conspiracy theories, Konkvistador and Raymond make the obvious points, I suppose there might be nothing more insightful to be said about the matter at that level of generality.

Replies from: Multiheaded
comment by Multiheaded · 2012-05-15T11:11:42.639Z · LW(p) · GW(p)

Though I suppose I should have told Multiheaded that Wells' political stuff is bad and that his High Weirdness stuff is way better.

Nah, don't worry. I understood from the start that politically that blog is something like the rants of a hippie Bircher. That is, with rather clouded judgment and some nonsense priors in the first place, but curious when it directs attention to odd facts that don't fit the mainstream narrative. [1] Like the village idiot whose ravings contain clues to plot secrets in some computer RPGs.

(when I said "the Bush regime", I didn't mean all the standard left-of-center complaints about how he was evil, stupid and killed puppies - although I agree with the last two - but the genuinely irrational-looking stuff like the connections with fringe groups and the CIA's rumoured odd activities)

P.S. Wow, that guy's T-shirts are quite awfully designed.

P.P.S. And still it's clearly worth reading, at least in matters which are somewhat above mere conspiracies and politics:

"If you draw the timelines," said futurologist Ian Pearson, "realistically by 2050 we would expect to be able to download your mind into a machine, so when you die it's not a major career problem." Pearson is sometimes credited with the invention of that fouler of distinction between home and office, text messaging. And given how all the futurist fantasies of increased leisure time have panned out, no one should take comfort in the prospect that death itself need not encumber job performance. Even though pensionable age and benefits continue to be rolled back vindictively, there was always at least the promise of the peace of the grave.

When it's Hanson talking about the glorious future of Ems, the self-styled "rationalists" - I'm not talking about the LW majority, but the thinking patterns characteristic of some of the Overcoming Bias old guard - smile and nod. When it's a somewhat disturbed and not overly logical guy warning sincerely about the looming Hell on Earth - factually, the same thing - they groan with annoyance at the pathetic Luddites and their mental disease known as "humanity".

Obvious devil-worshipping "rationalist" cults like Objectivism are only the tip of the iceberg here; we're talking about some rather shocking spiritual and cultural erosion, handwaved as "non-neurotypicality" or "contrarianism" when it is at all acknowledged. (I'm not saying that there's something horribly wrong with non-neurotypicality or contrarianism per se, as they are, but there's nothing wrong with patriotism per se either, and you know who else was patriotic? [Godwin's law])

By God, Will, I feel like I understand your concerns so much better now!

P.S. I know, I know, it's kinda hypocritical of me to criticize a community member as morally corrupt after telling another guy to cut that shit out, but I can't help it, I'm really spooked by this kind of people.

[1] Sorry, I missed this footnote when writing the comment, and now I forgot what it was. Silly me :(

Replies from: Multiheaded
comment by Multiheaded · 2012-05-15T14:14:36.427Z · LW(p) · GW(p)

Also, damn, it's a bit of a jolt to encounter someone who thinks of the world's course in the same Gnostic terms that I often entertain. I too have been associating the spectre of anti-religious, anti-ideological, technocratic tyranny that's haunting us with the supposed iron "logic", runaway reductionism and blind hubris of the Archons, as relayed by the ancients and by latter-day SF visionaries like Dick.

(All aboard! We're off for -10 rating in 3... 2... 1...)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-05-16T05:58:35.894Z · LW(p) · GW(p)

(All aboard! We're off for -10 rating in 3... 2... 1...)

Given how deeply this comment is buried in an old thread I'd be surprised if 10 people even read it.

Replies from: Multiheaded
comment by Multiheaded · 2012-05-16T11:54:10.003Z · LW(p) · GW(p)

Oh, don't worry, dude, you can simply make nine or so new accounts to make up for it. ;)

comment by faul_sname · 2012-04-20T02:06:12.880Z · LW(p) · GW(p)

Sort by greenest.

comment by Unnamed · 2012-04-19T00:16:58.026Z · LW(p) · GW(p)

I think of it as "Pay more attention to this" / "Pay less attention to this." Communicating primarily to other readers rather than to posters.

comment by John_Maxwell (John_Maxwell_IV) · 2012-04-18T23:02:19.730Z · LW(p) · GW(p)

I think this would discourage me from writing contrary stuff. Right now if I get voted down, I explain it to myself as me having an unpopular but possibly correct opinion. Hearing that people want "less like this" seems harsh somehow.

Replies from: Larks, TimS
comment by Larks · 2012-04-19T05:02:43.587Z · LW(p) · GW(p)

This is the pro-airbrushing argument; airbrushing in magazines decreases body neurosis because it gives girls plausible deniability for why they don't look like models.

I say this not to pass judgement either way on your argument.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-04-19T22:13:29.515Z · LW(p) · GW(p)

Does airbrushing actually work to decrease body neurosis? My impression is that it doesn't. However, mannequins seem to cause less damage, possibly because they're less realistic looking.

comment by TimS · 2012-04-19T19:01:21.935Z · LW(p) · GW(p)

Hearing that people want "less like this" seems harsh somehow.

Isn't that the point? A stimulus that is insufficiently strong to change behavior is pointless to use for behavior modification.

comment by thomblake · 2012-04-19T14:32:08.929Z · LW(p) · GW(p)

Frankly I think we should reconsider the early suggestion that karma on comments should be between 0 and 1, starting at 0.5.

Replies from: David_Gerard
comment by David_Gerard · 2012-04-19T15:23:11.841Z · LW(p) · GW(p)

1 and 999. No doubt someone will write a script to render the number in decibels ...
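A minimal sketch of that joked-about script, assuming the 0-to-1 score is read as a probability p and rendered as decibels of evidence, ten times the log of the odds (the function name is made up):

```python
# Render a comment score p in (0, 1) as decibels: dB = 10 * log10(p / (1 - p)).
import math

def karma_decibels(p):
    return 10 * math.log10(p / (1 - p))

print(round(karma_decibels(0.5), 1))    # 0.0 dB -- the proposed starting score
print(round(karma_decibels(0.999), 1))  # 30.0 dB -- odds of 999:1
```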

comment by steven0461 · 2012-04-18T23:01:34.788Z · LW(p) · GW(p)

Hmm. Or "Reward"/"Punish"? "Incent"/"Disincent"? "Carrot"/"Stick"?

"I like your comment, so I more like thissed it" doesn't roll off the tongue.

Replies from: Alicorn, Richard_Kennaway, vi21maobk9vp, David_Gerard
comment by Alicorn · 2012-04-18T23:07:40.087Z · LW(p) · GW(p)

"Carrot"/"Stick"?

I want to go around carroting things.

Replies from: None
comment by [deleted] · 2012-04-19T00:15:59.967Z · LW(p) · GW(p)

All I could think of was this. (deep link, ten seconds long).

(Warning: Homestuck fandom, implausibly unsafe for work, unless your boss is into Homestuck.)

comment by Richard_Kennaway · 2012-04-18T23:07:20.037Z · LW(p) · GW(p)

"Reward"/"Punish"?

Please, no. As far as I'm concerned, an upvote or downvote, by me or on my posts, is not a reward or a punishment. Not even slightly.

"I like your comment, so I more like thissed it" doesn't roll off the tongue.

So much the better. I am not interested in who has upvoted or downvoted me, and I never mention my own votes.

Replies from: David_Gerard, steven0461
comment by David_Gerard · 2012-04-19T19:22:35.862Z · LW(p) · GW(p)

Please, no. As far as I'm concerned, an upvote or downvote, by me or on my posts, is not a reward or a punishment. Not even slightly.

I think you're wrong there. Humans are exquisitely sensitive to status, anywhere they see anything that looks even slightly like it. Upvotes/downvotes are precisely rewards/punishments, whatever else they may be or whatever you may intend yours to be.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-04-20T07:58:20.743Z · LW(p) · GW(p)

Other people can torture themselves with such phantoms or not, as they please.

Replies from: David_Gerard
comment by David_Gerard · 2012-04-20T10:02:02.099Z · LW(p) · GW(p)

"as they please" is, I think, wrong too. It's incredibly difficult to switch off awareness of status. Particularly with your score on the LessWrong video game right up there at the top-right in a little green oval, with your this-month score just below it.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-04-20T17:55:56.030Z · LW(p) · GW(p)

I'm not talking about how easy or difficult it is.

Replies from: David_Gerard
comment by David_Gerard · 2012-04-20T21:58:21.569Z · LW(p) · GW(p)

"as they please" seems dismissive of how difficult it is. It's that basic to human nature, not just human thinking.

Of course, you may be able to lessen how much you care about your score on the LessWrong game to the point where it doesn't affect you more than epsilon, but assuming you're a human I would be very surprised to find you literally didn't have even the faintest twinge.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-04-20T22:31:07.622Z · LW(p) · GW(p)

"Difficult" too easily becomes an excuse for not doing the work. How "difficult" is it to get a university degree? How "difficult" is it to bike 100 miles?

Sometimes "difficult" just means "I don't want to".

So I see I'm currently at -3 for my two comments above, which I think may be the first time I have ever commented on the votes on my own posts. My reaction: so what? I am sufficiently self-assured (a virtue worth cultivating, and observing one's reaction to one's karma score is one small way of cultivating it) that I draw from it neither validation nor shame, and besides, a trifling few points here and there are nothing. Comments are a more substantial currency.

The dogs bark. The caravan moves on.

Also relevant.

comment by steven0461 · 2012-04-18T23:09:49.835Z · LW(p) · GW(p)

I agree that reward/punish doesn't quite capture the intended meaning. The other suggestions I edited in also have that problem.

Even if we're not mentioning votes, there are various other reasons why we might want to talk about the process of voting.

I kind of like "I like your comment, so I morepleased it".

comment by vi21maobk9vp · 2012-04-21T05:54:09.552Z · LW(p) · GW(p)

"Appreciated this" / "Pearl wasted on me" ?

comment by David_Gerard · 2012-04-18T23:16:26.353Z · LW(p) · GW(p)

"Bouquet"/"Brickbat".

comment by Multiheaded · 2012-04-19T04:33:27.906Z · LW(p) · GW(p)

This is a seriously fucking awesome suggestion! Do it!

comment by A4FB53AC · 2012-04-19T04:12:38.100Z · LW(p) · GW(p)

You should call it black and white. Because that's what it is, black and white thinking.

Just think about it: using nothing more than one bit of non-normalized information, compressing the opinions of people who use wildly variable judgement criteria, from variable populations (different people care about and vote on different topics).

Then you're going to tell me it "works nonetheless", that it self-corrects because several (how many do you really need to obtain such a self-correction effect?) people are aggregating their opinions, and that people usually mean it to say "more / less of this please". But what's your evidence for it working? The quality of the discussion here? How much of that stems from the quality of the public, and the quality of the base material such as Eliezer's Sequences?

Do you realize that judgements like "more / less of this" may well optimize less than you think for content, insight, or epistemic hygiene, and more than it should for stuff that just amuses and pleases people? Jokes, famous quotes, group-think, ego grooming, etc.

People optimizing for "more like this" eventually degrade content into lolcats and porn. It's crude wireheading. I'm not saying this community isn't somewhat above going that deep, but we're still human beings and therefore still susceptible to it.

Replies from: NancyLebovitz, David_Gerard, Bugmaster
comment by NancyLebovitz · 2012-04-19T06:24:24.260Z · LW(p) · GW(p)

I've noticed that humor gets a lot of upvotes compared to good but non-funny comments. However, humor hasn't taken over, probably because being funny can take some thought.

I don't think karma conveys a lot of information at this point, though heavily upvoted articles tend to be good, and I've given up on reading down-voted articles, with a possible exception of those that get a significant number of comments.

comment by David_Gerard · 2012-04-19T06:58:50.389Z · LW(p) · GW(p)

People optimizing for "more like this" eventually degrade content into lolcats and porn.

More so than "vote up"? You've made a statement here that looks like it should be supported by evidence. What sites do you know of where this has happened after going from "vote up" to "more of this"?

Replies from: A4FB53AC
comment by A4FB53AC · 2012-04-19T08:08:54.745Z · LW(p) · GW(p)

Not more so than "vote up".

In this case I don't think both are significantly different. They both don't convey a lot of information, both are very noisy, and a lot of people seem to already mean "more like this" when they "vote up" anyway.

Replies from: khafra
comment by khafra · 2012-04-19T12:47:07.386Z · LW(p) · GW(p)

I don't think it was clear from the context that you were arguing against the practice of community moderation in general. I also don't think you supported your case anywhere near well enough to justify your verbal vehemence. Was this a test/demonstration of Wei Dai's point about intolerance of overconfident newcomers with different ideas?

Replies from: A4FB53AC
comment by A4FB53AC · 2012-04-21T19:04:34.390Z · LW(p) · GW(p)

Actually, not against. I was thinking that current moderation techniques on lesswrong are inadequate/insufficient. I don't think the reddit karma system's been optimized much. We just imported it. I'm sure we can adapt it and do better.

At least part of my point should have been that moderation should provide richer information. For instance by allowing for graded scores on a scale from -10 to 10, and showing the average score rather than the sum of all votes. Also, giving some clue as to how controversial a post is. That'd not be a silver bullet, but it'd at least be more informative I think.
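A rough sketch of what such a summary might compute, assuming graded votes were recorded individually (all names and numbers are illustrative):

```python
# Summarize a post's graded votes (-10..10) by average score and a simple
# controversy measure (spread of votes), instead of a single summed total.
from statistics import mean, pstdev

def moderation_summary(votes):
    return round(mean(votes), 2), round(pstdev(votes), 2)

# A post that splits the audience vs. one that mildly pleases everyone:
print(moderation_summary([10, -9, 8, -10, 9]))  # (1.6, 9.09) -- controversial
print(moderation_summary([2, 1, 2, 1, 2]))      # (1.6, 0.49) -- consensus
```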

And yes, I was also arguing this idea thinking it would fit nicely in this post.

I guess I was wrong since it seems it wasn't clear at all what I was arguing for, and being tactless wasn't a good idea either, contrarian intolerance context or not. Regardless, arguing it in detail in comments, while off-topic in this post, wasn't the way to do it either.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-05-15T14:32:13.990Z · LW(p) · GW(p)

Karma graphs would give a lot of information-- whether a person's average karma is trending up or down, and whether their average karma is the result of a lot of similar karma or +/- swings.
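A sketch of the headline numbers such a graph might summarize, over a hypothetical per-comment score history (all names illustrative):

```python
# Given a user's comment scores in posting order: average karma, a crude
# trend (late average minus early average), and volatility (+/- swings).
from statistics import mean, pstdev

def karma_graph_stats(scores):
    half = len(scores) // 2
    trend = mean(scores[half:]) - mean(scores[:half])
    return mean(scores), trend, pstdev(scores)

print(karma_graph_stats([3, 5, -2, 8, 1, -4, 10, 6]))
# (3.375, -0.25, ~4.53): decent average, flat trend, big +/- swings
```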

comment by Bugmaster · 2012-04-19T06:35:30.646Z · LW(p) · GW(p)

Don't you technically need at least two bits? There are three states: "downvoted", "upvoted", and "not voted at all".

Replies from: wedrifid, A4FB53AC
comment by wedrifid · 2012-04-19T08:50:57.541Z · LW(p) · GW(p)

Don't you technically need at least two bits?

One and a half if you can find a suitable compression algorithm. I wouldn't rule that out as a possibility but it may be counter-intuitive.

comment by A4FB53AC · 2012-04-19T08:05:08.502Z · LW(p) · GW(p)

True, except you don't know how many people didn't vote (i.e. we don't keep track of that: a comment at 0 could just as well have been read, and left at "0", by 0, 1, 10 or a hundred people; 0 is the default state anyway). (We similarly can't know whether a comment is controversial, that is, how many upvotes and downvotes went into the aggregated score.)

Replies from: Bugmaster
comment by Bugmaster · 2012-04-19T09:23:24.559Z · LW(p) · GW(p)

The system does keep track of how everyone voted, though; it needs to do that in order to render the thumbs up/down buttons as green or gray. And wedrifid is right: using suitable compression, you might be able to get away with less than two bits (in aggregate).
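(For reference: the information-theoretic floor, assuming the three states are equally likely, is log2(3) ≈ 1.585 bits per vote; since real vote distributions are skewed, an aggregate encoding could get below even that.)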

comment by John_Maxwell (John_Maxwell_IV) · 2012-04-18T23:17:57.243Z · LW(p) · GW(p)

Edited wiki.

Replies from: thomblake
comment by thomblake · 2012-04-19T00:37:06.689Z · LW(p) · GW(p)

Useful edit.

comment by DanielVarga · 2012-04-19T12:08:46.565Z · LW(p) · GW(p)

Others already noted that we need contrary opinions more than contrarian people per se. Let me make another distinction. Is the goal a community with a diverse set of opinions, or more people who are vocal and articulate about some minority opinion? Maybe the latter goal is worth working on, but I suspect the former has already been reached. Let me go with myself as an example. I don't think anybody ever saw any of my comments as contrarian, and I am sure nobody associates my nick with contrarianism. The thing is: I would bet against Many Worlds. I am not a consequentialist. I am not really interested in cryonics. I think the flavor of decision theory practiced here is just cool math without foreseeable applications. I give very low probability to FOOM. I think FAI as a goal is unfeasible, for more than one reason.

I am not vocal at all about these positions, and you will very rarely see me engage in loud debates. But I state my position when I feel like it, and I was never punished for that. (I don't have any negatively voted comment out of a few hundred.) I think we would see a similar pattern when checking the positions of other individual "non-contrarian" commenters.

Replies from: byrnema, thomblake, vi21maobk9vp
comment by byrnema · 2012-04-19T23:03:50.914Z · LW(p) · GW(p)

Me too:

I would bet against Many Worlds. I am not a consequentialist. I am not really interested in cryonics. I think the flavor of decision theory practiced here is just cool math without foreseeable applications. I give very low probability to FOOM. I think FAI as a goal is unfeasible, for more than one reason.

I used to be very active on Less Wrong, posting one or two comments every day, and a large fraction of my comments (especially at first) expressed disagreement with the consensus. I very much enjoyed the training in arguing more effectively (I wanted to learn to be more comfortable with confrontation) and I even more enjoyed assimilating the new ideas and perspectives of Less Wrong that I came to agree with.

But after a long while (about two years), I got really, really bored. I visit from time to time just to confirm that, yes, indeed, there is nothing of interest for me here. Well, I'm sure that's no big deal: people have different interests and they are free to come and go.

This is the first post that has interested me in a while, because it gives me a reason to analyze why I find Less Wrong so boring. I would consider myself the type of "reasonable contrarian" the author of this post seems to be looking for -- I am motivated to argue if I disagree, and have the correct attitude in that I'm quite willing to think counter-arguments through and change my position when they are good. If only, alas, I disagreed about anything.

On all the topics that I used to enjoy being contrary about, I've either been assimilated into Less Wrong (for example, I'm no longer a theist) or I have identified that either (a) the reason for the difference in opinion was a difference in values or (b) the argument in question had no immediate material meaning, and, so arguing about either was completely pointless. My disinterest in cryonics is an example of (a), and belief or disbelief in many worlds is an example of (b).

I do wish Less Wrong was more interesting, because I used to enjoy spending time here. I realize this is a completely self-centered perspective, because presumably many do continue to find Less Wrong entertaining. But I want to learn things, and be challenged and stretched as much as possible, and now that I'm already an atheist that challenge isn't there. I'd like to understand how the "world works" and now that I've got materialism under my belt, what's next? I wish Less Wrong would try and tackle taboo topics like politics, because this is an area where I observe I'm completely clueless. On the other hand, I also understand that these questions are probably just too difficult to tackle, and such a conversation would have a large probability of being fruitless.

Still, I agree with prase, currently the top comment, that Less Wrong topics tend to be too narrow. My secondary criticism would be that for me (just my opinion) the posts are kind of bland. Maybe people are too reasonable (!?), but there doesn't seem to be anything to argue with.

Replies from: khafra, wedrifid
comment by khafra · 2012-04-23T12:43:59.583Z · LW(p) · GW(p)

Over a year ago, Michael Vassar spoke about writing a rationalist's guide to politics. Seems like the sort of thing Steve Rayhawk would also be good at. Perhaps we could all get together and bribe somebody who could do it well to do it.

Replies from: None, byrnema
comment by [deleted] · 2012-05-01T19:38:24.953Z · LW(p) · GW(p)

Perhaps we could all get together and bribe somebody who could do it well to do it.

You have my sword.

comment by byrnema · 2012-04-23T15:24:48.339Z · LW(p) · GW(p)

I like that idea.

I expect that this candidate would think very differently from me (perhaps the inferential distance would make communication difficult?) and for some reason be especially detached from social thought patterns. I think I'm somewhat detached, but can't make heads or tails of the patterns. Thus, apart from the possible difficulty in communication, I would trust my judgement of whether they were resolving the questions and would be happy with an individual attempt.

... An example of the type of candidate comes to mind, the Dûnyain Kellhus, but unfortunately he is fictional.

comment by wedrifid · 2012-04-23T14:07:15.331Z · LW(p) · GW(p)

I used to be very active on Less Wrong, posting one or two comments every day

One or two comments every day is very active?

Oops.

comment by thomblake · 2012-04-19T20:51:15.218Z · LW(p) · GW(p)

You should make some discussion posts about your reasons for disagreeing with the perceived consensus on each of those issues. If they are articulate, specific, and use the techniques of epistemic rationality, they should be well-received. (If you have good reasons for disagreeing with the techniques of epistemic rationality, then that's an even better post.)

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-04-20T18:05:03.342Z · LW(p) · GW(p)

Having seen the replies to well-written comments expressing such opinions, I may find it unlikely that I would get new information from replies to a discussion post.

And I may have some hard-to-share reasons and personal red flags, so I do not know whether I will do good to anyone.

So, why bother?

Maybe the original poster wouldn't agree with this approach, but his behaviour is consistent with it.

comment by vi21maobk9vp · 2012-04-19T20:42:09.899Z · LW(p) · GW(p)

A perfect example of the problem, I guess.

Many pro-LW-mainstream arguments are weak if you have significantly different priors. People with minority views quickly learn the difference in priors, and learn to express their views less often and defend them less.

I also consider FOOM-as-described-on-LW quite improbable, and the writings of Eliezer on the topic simply raise a few red flags for me; I see that it is a popular position here, but most people don't find it worth the effort to fight the mainstream.

There are still many topics on LW where no relevant values or priors are part of the LW majority's collective identity, and I get some entertainment and information from reading those discussions and participating in them. There are also topics close to things that are accessible to science, with all its rigidity (but also stability) compared to Bayesian inference. These are very informative too.

comment by gRR · 2012-04-19T00:26:12.387Z · LW(p) · GW(p)

I would prefer an increase in 'question' (problem) posts, as opposed to 'statement' (solution) posts, contrarian or no.

comment by timtyler · 2012-04-19T00:41:40.448Z · LW(p) · GW(p)

Most of the machine intelligence folk don't seem to be on "your" side. I think they see you as potential competitors who don't share their values.

I tend to be more sympathetic to their position than yours. In particular I don't seem to share your values, and don't much like your PR - or your "end of the world" propaganda. I think that developing in secret is a pretty dubious plan - and that the precautionary principle sucks.

Probably the best thing about you is that you have Eliezer on your side - and he's a smart cookie. However, that aspect also appears to have its downsides.

Replies from: orthonormal
comment by orthonormal · 2012-04-19T03:03:06.404Z · LW(p) · GW(p)

It took me much longer than it should have to mentally move you from the "troll" category to the "contrarian" one. That's my fault, but it makes for an interesting case study:

I quickly got irritated that you made the same criticisms again and again, without acknowledging the points people had argued against you each time. To a reader who disagrees with you, that style looks like the work of a troll or crank; to a reader who agrees with you, it's the best that you can do when arguing against someone more eloquent, with a bigger platform, who's gone wrong at some key step.

It should be noted that I don't instinctively think any more highly of contrarians who constantly change their line of attack; it seems to be a "damned if you do, damned if you don't" tribal response.

The way I changed my mind was that you made an incisive comment about something that wasn't part of your big disagreement with the Less Wrong community, and I was forced to update. For any would-be respected contrarians out there, this might be a good tactic to circumvent our natural impulse towards closing ranks.

Replies from: Will_Newsome, timtyler
comment by Will_Newsome · 2012-04-19T06:08:59.800Z · LW(p) · GW(p)

It took me much longer than it should have to mentally move you from the "troll" category to the "contrarian" one.

I still find it tricky to tell whether timtyler realizes that what he's saying is going to be misinterpreted but just doesn't care (e.g. doesn't want to cave in to the generally resource-intensive norm of rephrasing things so as not to set off politics detectors), or whether he simply doesn't realize it. E.g. he makes a lot of descriptive claims that look suspiciously like political claims and thus gets downvoted, even when upon being queried he says they were intended purely as descriptive claims. I've started to think he generally just doesn't notice when he's making claims that could easily be interpreted as unnecessarily political.

Replies from: timtyler
comment by timtyler · 2012-04-19T11:14:56.527Z · LW(p) · GW(p)

Politics? This might, perhaps, be to do with the whole plan of unilaterally taking over the world? If so, that is a plan with a few political implications, and maybe it's hard to discuss it while avoiding seeming political.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T11:30:03.393Z · LW(p) · GW(p)

Yes, and because the Eliezerian doom/world-takeover position is somewhat marginalized by the mainstream, people around here are quick to assume that stating simple facts or predictions about it, unless the facts are implicitly in favor of the marginalized position, is instead implicitly a vote in favor of further marginalization, and thus readers react politically even to simple observations or predictions. E.g., your anti-doom predictions are taken as a political move with the intent of further marginalizing the fund-us-to-help-fight-doom political position, even in the absence of explicit evidence that that's your intent, and so people downvote you. That's my model anyway.

Replies from: timtyler
comment by timtyler · 2012-04-19T12:12:25.807Z · LW(p) · GW(p)

E.g., your anti-doom predictions are taken as a political move with the intent of further marginalizing the fund-us-to-help-fight-doom political position, even in the absence of explicit evidence that that's your intent

Of course, from my point of view, the "doom exaggeration" looks like a crude funding move based on exploiting people by using superstimuli - or, at best, a source of low-relevance noise from a bunch of self-selected doom enthusiasts who have clubbed together.

You do have a valid point about my intentions. I derive some value from the existence of the SI, but the overall effect seems to be negative. I'm not on "your side". I think "your side" currently sucks - and I don't see much sign of reform. I plan to join another group.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T12:20:35.516Z · LW(p) · GW(p)

I plan to join another group.

Me too. Probably the Catholics.

Replies from: khafra
comment by khafra · 2012-04-19T13:41:05.570Z · LW(p) · GW(p)

Is there a Dominican community blog I should watch? Also, would you surreptitiously palm some small dry ice granules right before you dip your fingers in the water during confirmation? I've always wanted to see that.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T13:58:55.542Z · LW(p) · GW(p)

I know basically nothing about modern Catholics, actually, which is a big reason why I haven't yet converted. E.g. I have serious doubts about the goodness of the Second Vatican Council. If the Devil has seriously tainted the temporal Church then I want no part in it.

Also, would you surreptitiously palm some small dry ice granules right before you dip your fingers in the water during confirmation? I've always wanted to see that.

That would be really cool. But I think God would be displeased. ...I'm not sure about that, I'll ask Him. (FWIW I rather doubt He'll give an unambiguous answer.)

Replies from: drethelin, Richard_Kennaway, NancyLebovitz, NancyLebovitz
comment by drethelin · 2012-04-19T17:14:37.455Z · LW(p) · GW(p)

If you had to specify a historical year in which Catholicism seems most correct to you, which would it be?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-20T05:39:16.218Z · LW(p) · GW(p)

I think it depends somewhat on a subquestion I'm confused about. How much culpability should we assign the Church as an institution for the Reformation? On the one hand they were getting pretty corrupt; on the other hand that's like blaming someone who lived a vigorous, moral life, but who is now dying of cancer, for harboring cancer. Should we blame the man for not having already discovered the cure to cancer? Anyway, my intuition says the answer is about 1200 or 1300 A.D., but I really don't know. How close to the Reformation the answer falls depends on how much culpability should be assigned to the Church for the Reformation. Jayson_Virissimo or Vladimir_M would have better answers.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-04-25T11:09:02.627Z · LW(p) · GW(p)

I think it depends somewhat on a subquestion I'm confused about. How much culpability should we assign the Church as an institution for the Reformation? On the one hand they were getting pretty corrupt; on the other hand that's like blaming someone who lived a vigorous, moral life, but who is now dying of cancer, for harboring cancer. Should we blame the man for not having already discovered the cure to cancer? Anyway, my intuition says the answer is about 1200 or 1300 A.D., but I really don't know. How close to the Reformation the answer falls depends on how much culpability should be assigned to the Church for the Reformation. Jayson_Virissimo or Vladimir_M would have better answers.

Sorry; my knowledge of the Middle Ages (and the Early Modern Period) is very low-level (with depth on very narrow topics like medieval science and logic, but little outside of that, including politics and religion). Making an accurate judgment as to the (average?) truth-value of the many (importance-weighted?) propositions affirmed by (the majority of?) Catholic churchmen is way too high-level for my current understanding (although I hope to rectify this in the near future). Also, although many of my comments can reasonably be interpreted as being "pro-Catholic", this is mostly by accident. It would be more accurate to say that I am defending the medievals (many of whom were Catholics) from libel (of which I have been guilty in the past and for which I am attempting to do penance).

comment by Richard_Kennaway · 2012-04-20T07:00:01.226Z · LW(p) · GW(p)

But I think God would be displeased. ...I'm not sure about that, I'll ask Him. (FWIW I rather doubt He'll give an unambiguous answer.)

How do you go about asking God, and how do you experience His answers?

comment by NancyLebovitz · 2012-04-20T06:09:09.640Z · LW(p) · GW(p)

Why do you think the Devil might have tainted the temporal Church through the Second Vatican Council?

Replies from: Will_Newsome, Will_Newsome
comment by Will_Newsome · 2012-04-21T05:35:20.029Z · LW(p) · GW(p)

So this is getting into really crazy conspiracy theories, but I notice Vatican II came soon after the Church's failure to release the Third Secret of Fatima, which, given the way Church authorities reacted to it, IMO seems to indicate that it did indeed predict something like ongoing or imminent Satanic infiltration, or something similarly potentially disruptive to the temporal Church. FWIW I'm pretty sure this conspiracy theory only sounds even halfway plausible if you already accept as legitimate the various prophecies and miracles of Fatima.

ETA: Not sure what to make of the fact that if I was in a Dan Brown novel this is definitely a hypothesis I should keep to myself. I fear I'm not being very genre savvy.

Replies from: None, Eugine_Nier
comment by [deleted] · 2012-05-01T09:13:51.264Z · LW(p) · GW(p)

I know basically nothing about modern Catholics, actually, which is a big reason why I haven't yet converted. E.g. I have serious doubts about the goodness of the Second Vatican Council. If the Devil has seriously tainted the temporal Church then I want no part in it.

Considering this among other things, I want to see the contrarian awesomeness that would be you writing a series of posts on the Orthosphere explaining your positions and theories regarding the Church and global history.

Regardless of whether this turned out to be an epic troll or the birth of a new cult, it would be extremely entertaining.

comment by Eugine_Nier · 2012-04-21T22:37:53.009Z · LW(p) · GW(p)

ETA: Not sure what to make of the fact that if I was in a Dan Brown novel this is definitely a hypothesis I should keep to myself. I fear I'm not being very genre savvy.

Given how correlated his novels tend to be with reality, I'd decrease my belief in the hypothesis.

comment by Will_Newsome · 2012-04-29T20:19:59.014Z · LW(p) · GW(p)

Upon reflection I remembered reading that there was serious cause for concern years before Vatican II. (N.B.: Linked blog seems to be generally epistemically careful but is big on conspiracy theories.)

comment by NancyLebovitz · 2012-04-19T22:20:46.256Z · LW(p) · GW(p)

There is no such thing as "modern Catholics". There are a number of subgroups, but I don't know enough to be usefully more specific.

comment by timtyler · 2012-04-19T10:49:36.448Z · LW(p) · GW(p)

I quickly got irritated that you made the same criticisms again and again, without acknowledging the points people had argued against you each time.

That doesn't sound great! Was I right? If you think there's a case where I should have updated - but didn't - perhaps it can be revisited? Of course, I don't mean to put pressure on you to trawl through my comments - but it would be nice for me to know if you have any specific cases in mind.

Replies from: orthonormal
comment by orthonormal · 2012-04-19T23:06:12.919Z · LW(p) · GW(p)

I couldn't find them in a quick search, but what got me frustrated was a cluster of arguments that you've stated a lot but never written up at length. Let me summarize roughly:

All new technological developments are just continuations of evolution; there are no relevant differences between evolution of genes, memes, corporations, etc; and therefore the Singularity couldn't be an existential crisis, just a faster continuation of evolution.

(Apologies if I've mangled it.) It seemed to me that every time a relevant topic was mentioned, back in the days of the Sequences, you merely stated one of these opinions rather than argued for it. But again, it's difficult for me to recognize good arguments when I disagree with their conclusions.

Replies from: timtyler
comment by timtyler · 2012-04-20T01:44:31.610Z · LW(p) · GW(p)

I couldn't find them in a quick search, but what got me frustrated was a cluster of arguments that you've stated a lot but never written up at length.

Hmm. Thanks. I did write a whole book about that one - I think.

Your objection also makes me think of this material:

Even with regular evolution there can still be existence "failures" - for particular species.

Also, I do think one of these is coming: http://alife.co.uk/essays/memetic_takeover/

...leading to this: http://alife.co.uk/essays/engineered_future/ - apparently a future where humans as we know them play a pretty insignificant role.

I do think that the trend towards increased destructive power needs to be considered in the light of the simultaneous trend towards greater levels of cooperation, moral behaviour, and peacefulness.

Replies from: orthonormal, siodine
comment by orthonormal · 2012-04-20T02:11:40.100Z · LW(p) · GW(p)

Ah— you have written it up at great length, just not in Less Wrong posts.

I think you claim too strong a predictive power for the patterns you see, but that's a discussion for a different thread. (One particular objection: the fact that evolution has gotten us here contains a fair bit of anthropic bias. We don't know exactly how narrow are the bottlenecks we've survived already.)

Replies from: JoshuaZ, timtyler, Will_Newsome
comment by JoshuaZ · 2012-04-20T02:31:32.782Z · LW(p) · GW(p)

We don't know exactly how narrow are the bottlenecks we've survived already.

We can estimate this for a lot of the major bottlenecks. For example, we can look at how likely other intelligent species are to survive and in what contexts. We have a fair bit of data for that. We also now have detailed genetic data so we can look at historical genetic bottlenecks in the technical sense for humans and for other species.

Replies from: siodine
comment by timtyler · 2012-04-20T11:21:05.327Z · LW(p) · GW(p)

One particular objection: the fact that evolution has gotten us here contains a fair bit of anthropic bias. We don't know exactly how narrow are the bottlenecks we've survived already.

Well, I don't want to appear to endorse the thesis that you associated me with - but it appears that while we don't know much about the past exactly, we do have some idea about past risks to our own existence. We can look at the distribution of smaller risks among our ancestors, and gather data from a range of other species. What Joshua Zelinsky said about genetic data is also a guide to recent bottleneck narrowness.

Occam's razor also weighs against some anthropic scenarios that imply a high risk to our existence. The idea that we have luckily escaped 1000 asteroid strikes by chance has to compete with the explanation that these asteroids were never out there in the first place. The higher the supposed risk, the bigger the number of "lucky misses" that are needed - and the lower the chances are of that being the correct explanation.

Not that the past is necessarily a good guide - but rather we can account for anthropic effects quite well.

comment by Will_Newsome · 2012-04-20T04:19:54.897Z · LW(p) · GW(p)

(One particular objection: the fact that evolution has gotten us here contains a fair bit of anthropic bias. We don't know exactly how narrow are the bottlenecks we've survived already.)

User:timtyler himself has brought up the dinosaurs' semi-extinction, for example, which was a local decrease in "moral progress" even if it might have been globally necessary or whatever.

comment by siodine · 2012-04-20T02:21:47.147Z · LW(p) · GW(p)

What's the current state of memetics in science (universities, journals, and so on)? I thought it turned out to be a dead end.

Replies from: timtyler
comment by timtyler · 2012-04-20T11:06:33.694Z · LW(p) · GW(p)

Susan Blackmore recently described the current state of memetics as a science as being "pathetic".

A few pages on the general topic:

What we do have is a lot of modern work on "cultural evolution". It's not quite the same - but it's close, and it has many of the basics down.

Statistically, memetics may not be doing too well - but memes are going crazy - through the roof. It bodes well for the subject, I think.

Replies from: siodine
comment by siodine · 2012-04-20T14:04:11.321Z · LW(p) · GW(p)

Nice, I was impressed by the video and your page on the criticisms of memetics. But I think you'd come across better to more prejudiced people (i.e., most everyone) if you made some stylistic changes; would you care to see some criticisms?

Replies from: timtyler
comment by timtyler · 2012-04-20T15:33:57.067Z · LW(p) · GW(p)

Any feedback you care to offer would be more than welcome.

comment by TheOtherDave · 2012-04-19T00:09:14.437Z · LW(p) · GW(p)

Perhaps we have this backwards?

If there is something intrinsically valuable about controversy (and I'm not really sure that there is, but I'm willing to accept the premise for the sake of discussion), and we're not getting the optimal level of controversy on the topics we normally discuss (again, not sure I agree, but stipulated), then perhaps what we should be doing is not looking for "more and better contrarians" who will disagree with us on the stuff we have consensus on, but rather starting to discuss more difficult topics where there is less consensus.

One problem is, of course, that some of us are already worried that LW is too weird-sounding and not sufficiently palatable to the mainstream, for example, and would probably be made uncomfortable if we explored more controversial stuff... it would feel too much like going to school in a clown suit. And moving from areas of strength to areas of weakness is always a little scary, and some of us will resist the transition simply for that reason. And there are many more problems besides.

Still, if you can make a case for the value of controversy, you might find enough of us convinced by that case to make that transition.

Replies from: roystgnr, None, David_Gerard, Will_Newsome
comment by roystgnr · 2012-04-19T01:57:31.957Z · LW(p) · GW(p)

Here's a case for the value of controversy.

  • LessWrong orthodoxy includes a large number of propositions (over a hundred posts in the core sequences alone, at least one thesis per post)
  • The deductions that lead to each claim are largely independent (if post B were an obvious corollary of post A, it would have saved the writer's and readers' time not to write it)
  • Reasoning is error-prone, especially when not formalized (this is a point made in the sequences; if it's wrong then q.e.d.)
  • Even if each deduction is overwhelmingly likely (let's say 99%) to be correct, it is still likely (63% in this case; see the check below) that at least one out of a hundred is incorrect
  • Because these are deductive chains of reasoning (they're "the sequences", not just "the set"), one false deduction can invalidate any number of conclusions which follow from it. The Principle of Explosion has been defeating brilliant people for millennia.

In other words, even if you believe that each item of LessWrong consensus is almost certain to be correct, you should still be doubtful that every item of LessWrong consensus is likely to be correct. And if there are significant errors, then how else will they be found and publicized other than via a controversial discussion?
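(Checking the arithmetic under the stated independence assumption: 1 - 0.99^100 ≈ 1 - 0.366 = 0.634, i.e. roughly a 63% chance of at least one error among a hundred individually 99%-reliable deductions.)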

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-19T14:18:29.485Z · LW(p) · GW(p)

I agree that there are errors in the "LW consensus."
I agree that a cost-effective mechanism for identifying those errors would be a valuable thing.

By your estimation, how many controversial discussions have occurred on LW in the last year?
How many of them have contributed to identifying any of those errors?

Replies from: roystgnr
comment by roystgnr · 2012-04-19T22:18:53.669Z · LW(p) · GW(p)

Those are both good questions (as is the implicit point about cost-effectiveness or lack thereof); I'm afraid I'm not a heavy enough reader here to quickly give accurate answers.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-19T22:33:16.883Z · LW(p) · GW(p)

I'm not looking to you for accurate answers, I'm trying to understand the model you're operating on.
If you tell me you think there have been a few controversial (in the sense you describe above) discussions and you think they've contributed to identifying errors, then it makes sense to me that you think having more such discussions is valuable. I may disagree, but it's clear to me what we're disagreeing about.
If you tell me you don't think we've had any such discussions, I can sort of understand you believing that they would be valuable if we had them, but I would also conclude I don't quite know what sorts of discussions you're talking about.
If you tell me you think we've had a few such discussions but they haven't contributed anything, then I would be very confused and want to revisit my understanding of why you believe what you believe.
Etc.

comment by [deleted] · 2012-04-22T14:36:15.068Z · LW(p) · GW(p)

Controversial doesn't necessarily mean weird-sounding. For example, we could talk more about medicine, an area with a great deal of disagreement, without seeming like clown-suit wearing crazies. Mainstream topics should be more than enough to fill the controversy quota.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-22T14:50:18.068Z · LW(p) · GW(p)

(nods) Fair point.

comment by David_Gerard · 2012-04-22T14:04:03.784Z · LW(p) · GW(p)

This wouldn't be an issue except that it's entirely unclear to me that LessWrong is making much in the way of progress of whatever sort. There are the meetup groups, which sometimes look good and sometimes sputter.

But perhaps I'm wrong and there's a list of things that are reasonable evidence of progress of whatever sort.

comment by Will_Newsome · 2012-04-19T11:12:26.101Z · LW(p) · GW(p)

See Wei Dai's comment here—he doesn't value controversy qua controversy.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-19T15:06:18.424Z · LW(p) · GW(p)

Mm.
Fair enough.

As I've said elsewhere, I'm not convinced that the goal of having correct beliefs on the topics addressed in the Sequences will be cost-effectively approached by introducing new contrarians to LW.
It would likely be more cost-effective to identify some thinkers we collectively esteem and hire them to perform a "peer review" on those topics.

That said, I'm not sure I see what the point of that would be either, since it's not like EY is going to edit the Sequences regardless of what the reviewers say.
It might be even more cost-effective to hire reviewers for his book before he publishes it.

comment by daenerys · 2012-04-19T13:58:43.042Z · LW(p) · GW(p)

Idea- Using Contrary Opinions as a Group Rationality Exercise

Sometimes when I'm discussing issues one-on-one with someone of a different opinion, I will find myself treating arguments as soldiers (I am getting better at catching myself in this, I think). I can also have difficulty verbalizing what is wrong with an argument when put on the spot.

Maybe we can use "Devil's Advocating" posts as a group exercise in rationality. Someone can read or summarize a specific opposing viewpoint that they do not necessarily agree with (maybe subjectivism, or Kuhn's scientific revolutions). They could hopefully even get completely new material, in order to provide practice in a field we haven't discussed.

They will present the strongest summary they can in a post, writing as if they fully supported the idea. The tag [Devil's Advocating] can be used to show that this is what they are doing.

One comment thread can be devoted to finding points on which the viewpoint is strong (i.e. maybe subjectivism handles a specific question a little better than most other philosophies, or maybe Kuhn's revolutions provide a better explanation of the different types of science that scientists engage in than other philosophies of science do). This can help us fight our "Arguments as Soldiers" inclinations.

Another comment thread can be devoted to finding specific fallacies in the argument. NOT just "This is silly, [some other view] is better", but actual "This doesn't work because of [specific reason]".

Of course, for this to be interesting, it has to be an opposing idea that hasn't been discussed to death. For example, I know in history there are all sorts of competing theories, some of which work better than others. I bet other fields are the same.

Replies from: thescoundrel
comment by thescoundrel · 2012-04-19T15:07:25.356Z · LW(p) · GW(p)

This reminds me of days in c-x debate, where the topic was set in advance and you were assigned to oppose or affirm each round. Learning to find persuasive arguments for ideas you don't actually support is not an intuitive skill, but it is certainly one that can be learned with practice. I, for one, would greatly enjoy c-x debate over issues in the Less Wrong community.

comment by [deleted] · 2012-04-18T22:36:15.576Z · LW(p) · GW(p)

I would love to be better at contrarianism, but I don't know where to begin.

I got where I am today mostly through trial and error.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T02:12:26.665Z · LW(p) · GW(p)

The General Contrarian Heuristic:

  • Assume such-and-such people who claim to be right actually are at-least-somewhat-straightforwardly right, and they have good evidence or arguments that you're just not aware of. (There are many plausible reasons for your ignorance; e.g. for the longest time I thought Christianity and ufology were just obviously stupid, because I'd only read atheist/skeptic/scientismist diatribes. See "What Evidence Filtered Evidence?".) What is the most plausible evidence or argument that can be found while searching in good faith? This often splits in two directions:

    • The Vassarian steel method: E.g., you hear lots of stuff about fairies, so you go digging around and find Charles Bonnet syndrome. This might be akin to constructing steel men, but beware!, for it is often a path to sophistry & syncretism. You know how in Dan Brown novels he keeps constructing these shallow connections between spirituality and science in order to show that they're not actually at odds? Don't be Dan Brown.
    • The Newsomelike schizophrenic method: You find Charles Bonnet syndrome but decide that even that isn't enough—you postulate that daimons are taking advantage of any plausible excuse (e.g. stroke, optical damage, sleep paralysis) to manipulate people into delusion. (You then independently re-derive justifications for burning witches or whatever, 'cuz why not?) This might be akin to paranoid schizophrenia, but beware!, for it is often the path to, um, paranoid schizophrenia.

Some contrarian topics I've had fun exploring:

  • Assume UFO phenomena and Marian apparitions are legit, i.e. caused by some transhumanly powerful process. E.g., the Miracle at Fatima. What would be the mechanism? More pertinently, what would be the motivations?

  • Assume legit retrocausal psi effects in parapsychology: What would be the mechanism?

  • Assuming psi is legit, i.e. the retrocausal results are legit, why is it capricious?

  • Assume intelligent life isn't fantastically unlikely. Why no signs of intelligent life? (Related to "why is psi capricious" question.)

Remember, skepticism is easy; it's the default position: if the phenomenon you're modeling is actually complex, your explanation will have to be subtle. It's always too easy to shout "confirmation bias", "mass hallucination", "memetic selection pressures", and what have you. Don't fall for that trap; it's just as much of an error as the Dan Brown trap—maybe more so, because at least the Dan Brown trap doesn't tell you to ignore important evidence.

If you make an argument along the lines of "the prior probability of that hypothesis is low", deduct 10 of your contrarian points. If you make a reference to the universal prior, deduct 20 points and feel guilty for the next few weeks.

Note that I think I'm a decent contrarian but I'm bad at communicating contrarian ideas; I'm not sure to what extent this is a personal quirk or a general problem when talking to people who start out assuming that you're crazy/deluded/trolling/whatever. If there is a General Contrarian Heuristic that's more amenable to communicating resultant insights then maybe that heuristic is better.

"May we not forget interpretations consistent with the evidence, even at the cost of overweighting them."

Replies from: komponisto, Will_Newsome
comment by komponisto · 2012-04-19T02:35:45.133Z · LW(p) · GW(p)

"May we not forget interpretations consistent with the evidence, even at the cost of overweighting them."

Upvoted. The easiest way to get the wrong answer is to never have considered the right answer.

I've always thought that imagination belonged on the list of rationalist virtues.

Replies from: NancyLebovitz, Will_Newsome
comment by NancyLebovitz · 2012-04-19T06:18:33.333Z · LW(p) · GW(p)

The easiest way to get the wrong answer is to never have considered the right answer.

I like that a lot.

comment by Will_Newsome · 2012-04-19T03:44:08.567Z · LW(p) · GW(p)

I've always thought that imagination belonged on the list of rationalist virtues.

"What do you think are the rationalist virtues?" might be an interesting discussion post.

comment by Will_Newsome · 2012-04-19T02:55:01.342Z · LW(p) · GW(p)

For comparison, the General Chess Heuristic: Think about a move you could make, think about the moves your opponent could make in reply, think about what moves you could make if they replied with any of those candidate moves, &c.; evaluate all possible resultant positions, subject to search heuristics and time constraints.

What's interesting is that novice chess players reliably forget to even consider what moves their opponent could make; their thought process barely includes the opponent's possible thought process as a fundamental subroutine. I think novice rationalists make the same error (where "opponent" is "person or group of people who disagree with me"), and unfortunately, unlike in chess, they don't often get any feedback alerting them to their mistake.
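For concreteness, a toy sketch of the heuristic as generic minimax search; the game functions (`legal_moves`, `apply_move`, `evaluate`) are hypothetical stand-ins to be supplied by the caller, not working chess code:

```python
# Generic minimax: the game itself (move generation, move application,
# and a heuristic evaluation) is supplied by the caller.
def minimax(position, depth, my_turn, legal_moves, apply_move, evaluate):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)  # out of time/depth: heuristic judgement
    scores = [minimax(apply_move(position, m), depth - 1, not my_turn,
                      legal_moves, apply_move, evaluate)
              for m in moves]
    # The step novices skip: on the opponent's turn, assume they choose
    # the move that is worst for you.
    return max(scores) if my_turn else min(scores)
```

The max/min alternation is exactly the opponent-modeling subroutine that novices leave out.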

(Interestingly, Roko once almost defeated me in chess despite having significantly less experience than me, because he just thought really hard and reliably calculated a ton of lines. I'd never seen anyone do that successfully, and was very impressed. I would've lost except he made a silly blunder in the endgame. He who has ears to hear, let him hear.)

comment by buybuydandavis · 2012-04-19T09:15:04.900Z · LW(p) · GW(p)

Any extreme minority position will take a long time to win converts. People are generally wrong because they have bad concepts, not because they had clear concepts but mistakenly thought 2+2=5.

It takes a while to penetrate poor concepts, and the people with poor concepts have to be willing to put in the effort to justify their argument, and not just take it as a given that it is up to someone else to refute their nonsense, because you can't refute gibberish. Most people here are intellectually confident. Add to that the consensus of the group, and who is going to expend the effort to honestly defend and justify the consensus?

On the contrarian side, the contrarian is also probably intellectually confident. Unless he finds a productive engagement, he'll eventually just shrug and move on. I've done as much. On one thread, I found the views about clinical trial data thoroughly wrongheaded. I was downvoted a lot, but persisted, being the ornery coot that I am. But eventually I moved on, because I have a day job and other things to do.

And there's something about the "comments after blog post" format that isn't conducive to sustained debate for me. Maybe because it's one long page, it feels inappropriate to have twenty back-and-forths, while a serious discussion would probably require that.

Replies from: billswift
comment by billswift · 2012-04-19T14:16:34.387Z · LW(p) · GW(p)

I think this is the best comment, at least the one that best captures my own views, on this thread.

Another way of looking at the problem expressed in buybuydandavis's first two paragraphs is that most people are so busy signalling, rather than thinking, that their concepts are usually "not even wrong".

comment by John_Maxwell (John_Maxwell_IV) · 2012-04-18T22:07:37.081Z · LW(p) · GW(p)

Maybe we could have a "contrarian of the month" award? This could also encourage normally agreeable Less Wrong users to argue against consensus positions in hopes of winning the award.

Replies from: wedrifid, David_Gerard, ahartell, Alicorn
comment by wedrifid · 2012-04-18T22:21:44.764Z · LW(p) · GW(p)

Maybe we could have a "contrarian of the month" award?

Can we please not do this? I already feel a pre-emptive contrarian outrage against whatever consensus is arrived at when awarding this "official contrarian" award. Then I start thinking of court jesters. This is a way to get people to think in the predetermined 'outside the box' box and change their 'mainstream' uniform to the 'rebel' uniform. That's not the way to get useful contrarians.

This could also encourage normally agreeable Less Wrong users to argue against consensus positions in hopes of winning the award.

You're advocating this as a good thing?

Replies from: John_Maxwell_IV, chaosmosis
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-18T22:39:47.766Z · LW(p) · GW(p)

Are you suggesting folks can't be trusted to reliably identify genuinely high-quality opinions that disagree with theirs?

What can we learn from this thread?

http://lesswrong.com/lw/2sl/the_irrationality_game/

You're advocating this as a good thing?

The OP talks about folks who "like to find fault in every idea they see". Assuming this is valuable, there are two ways to get this kind of person: someone who is this way naturally, or someone acting the part in order to win an award.

Keep in mind that the award's specifications can be changed, for example, "best civil disagreement with LW majority" or "changed the most minds among LW users".

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T01:18:01.093Z · LW(p) · GW(p)

http://lesswrong.com/lw/2sl/the_irrationality_game/

(Anybody is welcome to copy/paste/edit that post and run it again, probably in Main because the less casual nature of Main discourages accidental failure to read the rules. Also, I noticed that a lot of the rules weren't really necessary because people did reliably play in the spirit of the game; most of the rules are along the lines of 'don't cheat'. So if you re-run it you might want to remove a lot of the text. FWIW I'd upvote it and probably make a lot of comments.)

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-19T03:47:51.086Z · LW(p) · GW(p)

I would change the rules to go something like this: Write a one-sentence summary of your conclusion first, in as shocking terms as possible. Get people to vote up or down based on whether they agree with the initial one-sentence summary. Then you justify the summary in subsequent paragraphs, which might cause folks to change their minds. That way we could get novel but possibly true beliefs at the top, in addition to irrational beliefs.

Or rethink the game entirely along these lines so it is the "More Plausible Than I Initially Thought Game", so we don't get things like UFOs at the top. Participants upvote those comments that cause the maximum change to their beliefs, especially by making something surprising seem at least vaguely plausible. I dislike the current game rules somewhat because it seems like a signaling fest.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T09:07:14.457Z · LW(p) · GW(p)

Or rethink the game entirely along these lines so it is the "More Plausible Than I Initially Thought Game", so we don't get things like UFOs at the top.

FWIW I'm really glad that UFOs were at the top. The resultant discussion and links to articles about Fatima contributed to me doing a lot of serious thinking and ultimately changing my mind, and now I believe in "hyperdimensional"/demonic/high-weirdness explanations for UFOs.

Your variation on the game still sounds better, though, 'cuz it focuses on marginals which are clearly more important here.

comment by chaosmosis · 2012-04-19T00:27:32.796Z · LW(p) · GW(p)

I was going to post a joke about receiving -100 reputation in less than 24 hours, but it was too sad to be funny.

comment by David_Gerard · 2012-04-18T22:50:24.438Z · LW(p) · GW(p)

Awarded to a nonconformist in black or a nonconformist in a clown suit? The latter is likely to get the tone argument (where someone's stated reason for rejection is the tone of the statement rather than its content).

Suggestion: whenever you're tempted to respond with a tone argument ("stop being so rude/dismissive/such a flaming arsehole/etc"), try really hard to respond to the substance as if the tone is lovely. The effort will net you upvotes ;-)

Replies from: cousin_it, John_Maxwell_IV
comment by cousin_it · 2012-04-18T23:21:40.146Z · LW(p) · GW(p)

Seconding your suggestion because it's worked well for me every time I found the strength to use it. Also, when you feel really aggravated at your opponent's tone, fogging is a useful and civil-sounding technique.

Replies from: thomblake, David_Gerard, Wei_Dai
comment by thomblake · 2012-04-19T00:34:55.872Z · LW(p) · GW(p)

That took forever for me to figure out. Wikipedia:Fogging.

Replies from: thomblake
comment by thomblake · 2012-04-21T16:54:46.616Z · LW(p) · GW(p)

Hmm... I just realized my standard for "taking forever" to find a piece of information is about 30 seconds. I love the future.

comment by David_Gerard · 2012-04-18T23:38:07.256Z · LW(p) · GW(p)

For a good example, note how wonderful Wei Dai's tone consistently is, even when responding to comments where "go away you idiot" would be a quite reasonable reaction.

Replies from: wedrifid
comment by wedrifid · 2012-04-19T00:54:41.442Z · LW(p) · GW(p)

For a good example, note how wonderful Wei Dai's tone consistently is, even when responding to comments where "go away you idiot" would be a quite reasonable reaction.

Better than many, worse than a few. Wei isn't consistent and has violated this principle at times, at least as flagrantly as others. (Mind you, I can't think of any examples from the last year or two.)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-04-20T17:50:55.896Z · LW(p) · GW(p)

"I can't think of any examples from the last year or two" seems to imply that you can think of some examples from before the last two years. Are you thinking of the time when I said arguing with you wasn't fun, or something worse than that? I'd like to know because I honestly can't remember writing any comments that might be considered "flagrant", and your comment has made me a bit afraid that I might have a biased self image due to selective recall.

Replies from: wedrifid
comment by wedrifid · 2012-04-20T23:50:39.119Z · LW(p) · GW(p)

Are you thinking of the time when I said arguing with you wasn't fun, or something worse than that?

I don't recall you saying that. (i.e. If/when you did say that, it didn't etch itself in my mind as a glaring social defection worth remembering.)

I'd like to know because I honestly can't remember writing any comments that might be considered "flagrant"

What you consider ok may be different to what I consider ok.

I'm not saying your tone isn't better than most, but I'm certainly going to dispute it whenever you are put up on a pedestal as "wonderfully consistent". (ie. I don't want to bring up history, but claims of consistency are historical claims, and false ones at that!)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-04-21T00:10:26.834Z · LW(p) · GW(p)

What you consider ok may be different to what I consider ok.

Let me put it this way then: I can't recall doing anything worse than saying that arguing with you wasn't fun, and certainly nothing that deserves being called "flagrant", which my dictionary defines as "Conspicuously bad, offensive, or reprehensible". If you still want to stand by your statement, then I think I deserve to see at least one example of what you are talking about.

Replies from: komponisto, wedrifid
comment by komponisto · 2012-04-21T16:23:35.645Z · LW(p) · GW(p)

"flagrant", which my dictionary defines as "Conspicuously bad, offensive, or reprehensible"

Just for what it's worth, I think that's a poor definition. The actual meaning is more like "conspicuous", with a connotation of badness (etc.).

comment by wedrifid · 2012-04-21T05:17:18.120Z · LW(p) · GW(p)

If you still want to stand by your statement, then I think I deserve to see at least one example of what you are talking about.

I made my previous reply to you simply out of courtesy, and went out of my way to leave the option open for you to dismiss my objection as merely subjective - yet that was negatively received (by the metric of votes). I viscerally dislike it when I respond to questions in good faith and am penalized for doing so. I further assert (somewhat frequently), as a matter of general principle, that nobody has the right to demand replies when making those replies can be expected to be detrimental for whatever reason - but almost always for some reason of a social-political nature. In this sense I oppose the sentiment and conclusion of your post from even more years back - Agree, retort or ignore. It introduces one more highly gamable social rule that would be a net detriment if adopted as a norm.

With the above in mind, I erased the draft reply I had - posting it would be an outright violation of my principles. I have no problem with accruing disapproval for expressing my own points, but actively provoking disapproval for the purpose of just answering a query of another when I would otherwise not have an interest in speaking on the subject? That's an entirely different matter!

If you still want to stand by your statement

I'll make no further stand here - and note that the stand I took here is against the position taken by your fanboy, not against you. In the unlikely event that David_Gerard or anyone else once again nominates Wei_Dai for an all-time "Turn The Other Cheek" award I will naturally take personal offense, sincerely, publicly and vocally.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2012-04-21T16:03:18.363Z · LW(p) · GW(p)

It introduces one more highly gamable social rule that would be a net detriment if adopted as a norm.

Isn't it probably a net detriment to have a norm against asking for concrete examples to back up vague critical claims when those vague criticisms are alleged to be offered in good faith?

Replies from: wedrifid
comment by wedrifid · 2012-04-21T16:39:22.779Z · LW(p) · GW(p)

There is no norm against asking. It was by asking that Wei_Dai was able to publicly defend his claim to the 'wonderfully consistent tone' nomination. What he gets no right to is the expectation that others will jump through a hoop he specifies when it is obviously detrimental to do so and of no interest of theirs.

Replies from: Tyrrell_McAllister, Wei_Dai
comment by Tyrrell_McAllister · 2012-04-22T00:34:48.558Z · LW(p) · GW(p)

What he gets no right to is the expectation that others will jump through a hoop he specifies when it is obviously detrimental to do so and of no interest of theirs.

One solution would be to have a general norm against offering vague criticisms without being prepared to back them up with concrete examples. If such a norm were in place, it wouldn't seem like you had made a concession to Wei in particular when you provided an example. You wouldn't have to "jump through a hoop he specifies", because the hoop would already have been pre-specified by the community. Wei would gain no status boost at your expense when you followed the general norm.

If such a norm doesn't already exist, do you agree that it should? If so, why not help to establish it by following it, while making it clear that you are providing the concrete example not because Wei requested it, but rather because there ought to be a general norm to provide such examples?

Replies from: wedrifid
comment by wedrifid · 2012-04-22T04:21:43.096Z · LW(p) · GW(p)

I forcefully reject your framing of the context.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2012-04-22T07:43:51.681Z · LW(p) · GW(p)

I have been trying to work within what I took to be your own framing of the context, since otherwise rejection is inevitable. I accept that I failed. Are you willing to explain where I strayed out of your frame?

comment by Wei Dai (Wei_Dai) · 2012-04-22T00:44:44.438Z · LW(p) · GW(p)

What he gets no right to is the expectation that others will jump through a hoop he specifies when it is obviously detrimental to do so and of no interest of theirs.

I do not see how it can be detrimental to either offer an example or say something like "I don't recall an example from more than two years ago either." Or are you objecting to the fact that I used the word "deserve" while asking for such an example, and "detrimental" refers to the possibility of encouraging such thinking and/or language in the future? But I only used the word after you refused my first request for an example. Why did you refuse that one?

I'm afraid you're either having an illusion of transparency (i.e., the thing you believe to be obvious is entirely unclear to others), or perhaps just making up excuses to avoid admitting an error.

ETA: Just saw Tyrrell's sibling comment, and I guess this whole incident could be explained by the fact that I think the norm suggested by Tyrrell already exists whereas you don't. Can you confirm that's what's going on?

Replies from: wedrifid
comment by wedrifid · 2012-04-22T04:19:16.728Z · LW(p) · GW(p)

The general scenario plays out rather frequently and the game-theoretic incentives are of interest to me (far more so than the specifics of just what degree of honor Wei_Dai deserves as a universal role model). Let me see if I can explain clearly, at least for the specific variant encountered a couple of ancestors up.

Background preferences:

  • Being downvoted - and in particular the social opposition that represents - is an undesirable thing. It induces negative affect and I take (and reflectively endorse myself taking) actions to minimise this.
  • The aversion I feel (and endorse feeling) for a given instance of being downvoted or subject to verbal social aggression varies greatly depending on context. Not all downvotes are equal.
  • Being downvoted for expressing a position that would reflect negatively on a user with many allies is a minor cost. Not only is it expected, it is behavior I endorse as the right thing for them to do, except in as much as it is based on false premises. That is, the error in judgement on the part of the voters lies in not believing that the expressed position is valid.
  • More broadly than the above, being downvoted for things that I want to do is a moderate-to-low cost. I am being opposed but I am in essence paying for the opportunity to seek a goal that I desire and endorse.
  • Being punished for answering a question for the sake of someone else's interest is a major cost. I find it highly unpleasant to be punished for doing something I did only out of courtesy. The verbal string provoked in my mind tends to include the phrase "Fuck That!" The aversion to being in such circumstances is high and represents my instincts rightly telling me that my behavior was naive. Following someone else's frame when doing so amounts to self-sabotage - being baited into an intentional or unintentional trap - is a gross social blunder.
  • All else being equal I love to answer the questions of others and in general to assist them in understanding me or, for that matter, assist them in just about anything.

With that in mind consider my incentives at this point:

  • It was my judgement that I would certainly be downvoted and possibly sniped for just about any reply I gave (short of being blatantly dishonest by making some sort of retraction). Voting responses in such situations are just political - the aforementioned social alliances dictate that my comments in the thread would be systematically downvoted to a more or less uniform degree based on how many people are on the 'blue' side. I can accept that.
  • Following your frame, answering your question would result in penalties - the same penalties I would get if I was actually trying to achieve my own goal. I've mentioned how much I dislike being in such situations and that I consider walking into them to be an inexcusable social blunder.
  • In such a context a non-reply isn't about you, the person being ignored. It doesn't mean that the question is considered disingenuous or manipulative (although in other cases - not this one - it often is). It doesn't mean that there is any assumption that the situation is transparent. It doesn't mean that there is no desire to accede to your wishes and give you an answer. What it is about is the incentives implied by the predicted behavior of your allies.

I'm afraid you're either having an illusion of transparency (i.e., the thing you believe to be obvious is entirely unclear to others), or perhaps just making up excuses to avoid admitting an error.

I hope the above gives you an alternative understanding of the 'transparency' issues. I further hope you understand that if you had not couched the latter option with a somewhat less dire alternative in the dichotomy, I would have taken a rather significant degree of offense, in accordance with the social implications underlying the move.

Or are you objecting to the fact that I used the word "deserve" while asking for such an example, and "detrimental" refers to the possibility of encouraging such thinking and/or language in the future?

This applies to a certain extent. A 'deserve' claim, applied to someone's desire that another voluntarily do something amounting to self-sabotage, requires a high degree of endorsement before I will refrain from expressing an objection to it - regardless of whether it is directed at myself.

But I only used the word after you refused my first request for an example. Why did you refuse that one?

I gave you an answer, when I had the option of simply ignoring your comment. It consisted of giving you the clear option to dismiss my objection to David_Gerard as merely different subjective preferences about how people should interact socially, while giving a clear social cue that I didn't want to go looking up ancient history. As a general rule we should not expect others to exactly follow the instructions we give them regarding what to speak on, and it is discourteous to try to press them to do so. (See also.) This isn't to say that it is always necessarily inappropriate for you to do so, but it does mean that the nature of the interaction moves from being a request to a coercion via the manipulation of perceptions within the social environment. Your expectation of getting an answer should move from being based on expectations of goodwill toward how effectively you can apply social force in the context.

As for the reasons I didn't directly respond with a link or detailed description:

  • See my several previous mentions of the difference between desiring to bring up specific history from multiple years ago and desiring to insist that general claims of your wonderfully consistent tone be tempered. But mostly:
  • When you executed the behaviors that you did, you (probably) considered them the right thing to do. I didn't and don't. I did not wish to create a battle about whether said responses were right or wrong. Hence giving a pre-emptive and lite version of "let's agree that we will probably disagree".
  • Frankly, it's a lot of work both in terms of time and emotional effort to dredge up details of past conflicts.
  • It is almost certain that we are already talking about the same incident (and month or so of context) but that we recall vastly different salient features.

By way of elaboration of the final point, and also in answer to the question you have been asking, I refer to a case where you made false, highly personal and significant accusations regarding my nature and motives, and backed them up by taking unrelated expressions of mine completely out of context, complete with links. A (yes) 'flagrant' and unacceptable attempt to do reputation damage to a target - a violation that both includes and exceeds that of mere 'tone'. This was prompted by a disagreement with you regarding your post saying that lesswrong is biased because we didn't support a post by a (high-status) outsider making, if I recall, claims about how clearly guilty Amanda Knox was.

I have commented on how ironic it is that of all the hundreds of social attacks I've endured on lesswrong - from vulgar name-calling through denigration of my intellect or 'rationality' and even somehow to 'fanboy' - the most vicious and memorable attack has been by Wei_Dai, a user who is usually well behaved and is universally respected. 'Universally respected' includes my own respect for the intellectual contributions you have made via object-level posts.

The above is my best attempt to answer the question directly and is approximately what I had mentally rehearsed prior to the negative incentives prompting me to abort the reply. I don't expect you to agree with my evaluation. I don't expect you to like hearing it. While my decision not to directly answer the question (until now) was not based on how it affects you, do you really think it would have been better for me to say the above than to dodge with "we probably think different things are ok"?

ETA: Just saw Tyrrell's sibling comment, and I guess this whole incident could be explained by the fact that I think the norm suggested by Tyrrell already exists whereas you don't. Can you confirm that's what's going on?

Maybe somewhat. I accept a norm that, all else being equal, answering people's questions is desirable. A significant issue is that to the extent that Tyrrell's norm exists, it is negatively enforced. By which I mean the effective punishment of 'norm violators', assuming said norm, has a sign bit that points in the wrong direction.

If people punish you for what they would advocate as 'the right thing to do' then don't do it.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-04-22T11:41:53.363Z · LW(p) · GW(p)

do you really think it would have been better for me to say the above than to dodge with "we probably think different things are ok"?

Yes, in part because I thought there was a non-negligible chance that I had done something really bad and then blocked the memory of it. (I mentioned this fear in my first reply to you.) So I really do appreciate the time and effort you took to clearly explain everything (or at least your perspective of it, which is all I can ask for). Of course, as you suspected, I disagree about your interpretation of the events you cited, but I'll respect your desire to not "battle" over it.

If people punish you for what they would advocate as 'the right thing to do' then don't do it.

Huh? Given the upvotes on Tyrrell's comment, it seems that most people (at least among the LW population paying attention to this thread) think the right thing to do is for you to provide a concrete example of what I did wrong, which you hadn't done until now. It seems clear to me that people were punishing you for not doing the right thing.

(I hope you don't mind that I ignored most of your comment. I did so because it was quite long and I'm afraid that readers are probably getting bored with this discussion. If you do mind, or have any specific parts you want me to address, or would like to continue in private, please let me know.)

Replies from: wedrifid
comment by wedrifid · 2012-04-22T11:55:48.127Z · LW(p) · GW(p)

Huh? Given the upvotes on Tyrrell's comment, it seems that most people (at least among the LW population paying attention to this thread) think the right thing to do is for you to provide a concrete example of what I did wrong, which you hadn't done until now. It seems clear to me that people were punishing you for not doing the right thing.

You miss the point. Comments that don't exist physically cannot be downvoted or sniped. Moreover, they do not draw attention to the issue at all, so in my judgement (and experience!) they will likely result in less antipathy being taken out elsewhere. Whereas, as previously described, any (realistic) comment that was made would be penalised out of social obligation to yourself by the couple of (net) people who had already picked that side. If I were executing the strategy that I advocate, I would have made no reply at all. I of course did not; I went into an extended analysis of the abstract subject - but hey, I don't consider myself obliged to do what I consider the correct thing to do all the time.

I hope you don't mind that I ignored most of your comment. I did so because it was quite long and I'm afraid that readers are probably getting bored with this discussion. If you do mind, or have any specific parts you want me to address, or would like to continue in private, please let me know.

I was neutral with respect to getting any reply at all.

comment by Wei Dai (Wei_Dai) · 2012-04-19T08:21:13.971Z · LW(p) · GW(p)

Worked well in what sense? David talked about netting upvotes, but surely that's not a main consideration for you at this point. I'm hoping that being nice and responding just to substance might make the other person less belligerent and a better contributor to the community. I tried this on Dmytry and it didn't work, but I wonder if it has worked in the past on others. Do you or anyone else have any anecdotes in this regard?

Replies from: cousin_it, David_Gerard, wedrifid, Will_Newsome
comment by cousin_it · 2012-04-19T18:14:00.566Z · LW(p) · GW(p)

Hmm, you're right, I just checked and it has never worked on rude people for me either. I must've been thinking about my exchanges with some people who were confident and confused about an issue, but not rude. Sorry.

comment by David_Gerard · 2012-04-19T19:27:22.583Z · LW(p) · GW(p)

It nets upvotes because it produces a useful response post for the onlookers, who have the votes. This is why it's work, because it involves turning an annoying post into something of value.

comment by wedrifid · 2012-04-19T08:56:59.306Z · LW(p) · GW(p)

Worked well in what sense?

Avoiding flame wars. Leaving the 'contrarian' at least with the sense that some of their ideas have been heard and validated. Reducing the extent to which you yourself get caught up in negative spirals. All without enabling them or encouraging more undesired behavior.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-04-20T00:11:55.977Z · LW(p) · GW(p)

Both you and David_Gerard seem to have taken my question as asking about the general benefits of "ignoring tone", when I was trying to figure out what cousin_it meant by "worked well", specifically whether he had succeeded in making a rude commenter less belligerent and a better contributor to the community, and also explaining why I wasn't sure what he meant.

Did you really misinterpret my question, or did you just use it as an opportunity to go off on a tangent and write something of general interest? (I'm trying to figure out if I need to be more careful about how to express myself.)

Replies from: David_Gerard, wedrifid
comment by David_Gerard · 2012-04-22T14:15:04.503Z · LW(p) · GW(p)

I would be interested to know what "worked well" meant more specifically as well (more specifically than "I felt personally satisfied with the conversation").

comment by wedrifid · 2012-04-20T14:24:53.475Z · LW(p) · GW(p)

Both you and David_Gerard seem to have taken my question as asking about the general benefits of "ignoring tone"

I don't seem to have done that at all.

Not only was I replying to what 'worked well' meant - in general and from what I have observed of specific recent applications here - I was discussing the use of fogging, not merely tone-ignorance.

comment by Will_Newsome · 2012-04-20T03:11:09.395Z · LW(p) · GW(p)

(I remember being sort of rude or at least mildly-aggressively-uncharitable to you about a year ago and you responded saying we could clear up any misunderstandings via chat. I subsequently issued some mea culpas and was probably more charitable towards you from then on. Not sure if that counts, IIRC I was only being mildly rude.)

comment by John_Maxwell (John_Maxwell_IV) · 2012-04-18T22:56:48.757Z · LW(p) · GW(p)

Whatever kind of contrarian Less Wrong thinks is valuable. It's not completely specified. I'm not sure I see how tone comes in.

Replies from: David_Gerard
comment by David_Gerard · 2012-04-18T23:03:50.064Z · LW(p) · GW(p)

I'm thinking of the responses to critics of late. Even the arseholes are slightly worth listening to, but tone arguments are a way of not listening, and this may miss something important even if it's often all the response it deserves. No-one's obligated not to use it, but it's a good exercise to be able not to, particularly for the benefit of onlookers.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-19T00:24:33.571Z · LW(p) · GW(p)

Of course, listening doesn't leave a record, so it's hard to tell how many people are listening. It's the relative handful of people who reply who define the perceived tone of the site's response.

Or are you suggesting that responding to the substance is a better strategy than simply listening?

Replies from: David_Gerard
comment by David_Gerard · 2012-04-19T07:02:15.728Z · LW(p) · GW(p)

Hmmm. Driving readers away in such a way that they don't even respond strikes me as bad. But in working out what to do about this, I'm left with asking my other-people-simulator, which I strongly suspect will just hand me back the results of the typical mind fallacy.

comment by ahartell · 2012-04-18T22:39:12.793Z · LW(p) · GW(p)

I'm not sure how much I like this idea (or the version I'm about to propose) but I think it would be better to treat it as a "Contrarian Quotes of the Month" type thing, kind of like the Rationality Quotes thread but using contrarian lesswrong comments.

comment by Alicorn · 2012-04-18T22:11:08.981Z · LW(p) · GW(p)

Would this award have content?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-18T22:18:24.278Z · LW(p) · GW(p)

Sorry, I'm not sure what you mean.

I'm thinking it would go something like this: users would be encouraged to track examples of contrarian contributions. At the end of the month, there would be a nomination process (with pointers to examples of contrary statements) and then voting on who was the best contrarian. (Whoever maximizes quality of statements × degree of disagreement with other users at the time they wrote the statements; see the sketch below. Number of contrary statements made could also be a multiplier, although that might be a bad idea if we want to avoid flooding LW with disagreeable contributions. Come to think of it, "contrarian contribution of the month" might be a better award.)
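A sketch of that scoring rule, purely illustrative; the inputs and the optional count multiplier are assumptions drawn from the paragraph above, not a worked-out metric:

```python
def contrarian_score(quality: float, disagreement: float,
                     n_statements: int = 1) -> float:
    # quality and disagreement are judged as of when each statement was
    # written; n_statements is the optional (and possibly bad) multiplier.
    return quality * disagreement * n_statements
```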

Allowing users to nominate themselves seems like a generally good idea, in case we are subconsciously avoiding our beliefs' real weak points, and to fight availability bias (individual users are more likely to remember good contrary comments they made early on in the month). There's probably no reason not to keep the registry open for nominations all month long.

If you're asking if there will be an award, maybe we could give them karma somehow? Personally, I suspect just winning the title will be a significant motivator.

An interesting variation would be to encourage established users to create alternate accounts to be contrary with, and only step out from behind the alternate account if they won the award.

One problem is quantifying the degree of disagreement. For instance, in one sense this recent discussion post of mine is very much in line with stereotyped opinions of what Less Wrong thinks, but in another sense, it got a substantial number of votes down (was negative for a good while after I created it) and the top-rated comment on it, voted much higher than the post itself, expresses disagreement. So was I being contrarian or not?

http://lesswrong.com/lw/bfy/you_only_live_once_a_reframing_of_working_towards/

Another idea is for contrary posts to specifically state that they are nominating themselves for the award within the body of the post. This could create a different dynamic when responding to the post, if it was explicitly pointed out that the post was something you might disagree with but might be correct anyway. (Probably not that good of an idea.)

comment by metaphysicist · 2012-05-15T05:10:09.316Z · LW(p) · GW(p)

I don't like contrarians, but I think honest and fundamental dissent is vital.

A recent finding in applied psychology is that small incentives can have large consequences. I think the importance of the upvote/downvote ratio is underestimated. The ratio is currently obviously greater than 1; I don't know how much greater. (Who does?) This creates an asymmetry in which, below zero, each downvote has disproportionate stigmatizing power, creating an atmosphere of apprehension among dissenters. The complexion of postings might change if downvoting and upvoting rights were issued so that the numbers tended to be equal (one possible mechanism is sketched below). A downvote should simply mean the opposite of an upvote; it shouldn't be the rare failing mark. Then the outcome is truly more like a vote than a blackballing.
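One possible reading of "issued so that the numbers tended to be equal", sketched as a per-user budget; coupling downvote rights to upvotes cast is my assumption about the mechanism, not anything the proposal specifies:

```python
class VoteBudget:
    """Illustrative only: downvote rights accrue one-for-one with upvotes
    cast, so site-wide downvotes tend toward parity with upvotes."""
    def __init__(self) -> None:
        self.upvotes_cast = 0
        self.downvotes_cast = 0

    def upvote(self) -> None:
        self.upvotes_cast += 1

    def downvote(self) -> None:
        if self.downvotes_cast >= self.upvotes_cast:
            raise RuntimeError("no downvote rights left; upvote something first")
        self.downvotes_cast += 1
```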

comment by Larks · 2012-04-19T07:16:08.738Z · LW(p) · GW(p)

We need a handy way of saying "Yes I understand the standard arguments for P but I still think it's worth your while considering this argument for ¬P rather than just telling me the standard arguments for P."

Unfortunately it may be that the only credible signal of this is to first outline the standard arguments for P.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T09:01:37.570Z · LW(p) · GW(p)

We need a handy way of saying "Yes I understand the standard arguments for P but I still think it's worth your while considering this argument for ¬P rather than just telling me the standard arguments for P."

Agreed. In my experience this problem of standard-argument-affirming shows up a lot during debates about uFAI risks. If I try to suggest some non-obvious argument against the Eliezerian position then I tend to mostly get re-assertions or re-phrasings of the standard Eliezerian arguments, which is distracting and a tad insulting. It seems some people identify me as a mainstream-view-loving enemy who is trying to unfairly marginalize the Eliezerian position, and thus don't bother to carefully check if my argument might be reasonable on its own terms.

In the last few months I've been averaging like 5 to 10 karma on my anti-Eliezerian AI risk arguments, and I think that's because I've expressed them more clearly and redundantly. But they're the same arguments that were getting downvoted to -5 or so back a year or two ago when I wasn't taking special care not to trigger local immune responses. (Weirdly, even saying that I'd spent a year or so with the Visiting Fellows talking to a lot of SingInst people who didn't think I was clearly stupid or insane didn't dissuade people from thinking I was clearly mistaken about basic SingInst arguments. I still don't really understand that... maybe I was interpreted as making an unjustified claim to authority that shouldn't be taken as evidence, or something.)

Replies from: Rain, Eugine_Nier
comment by Rain · 2012-04-20T14:17:05.831Z · LW(p) · GW(p)

The majority of your comments which I've downvoted have been for use of improper vocabulary. That is, you repurpose words in unconventional ways which result in extremely difficult, if not impossible, translation to something I can understand.

Lately, you seem to have been taking more care to use words with their dictionary definitions.

comment by Eugine_Nier · 2012-04-20T04:25:31.935Z · LW(p) · GW(p)

Part of it may be that people know you and know you're not an idiot.

comment by anotherblackhat · 2012-04-19T19:01:30.948Z · LW(p) · GW(p)

I think the kind of people you're looking for are rare in general, so it shouldn't be a surprise that they are rare on LW.

That said, there's room for improvement. The karma system only allows for one kind of vote. It could be more like Slashdot and allow for tagging of the vote, or better yet allow for up/down voting in several different categories (a sketch follows below). If a comment is IMO well worded, clear, logical, and dead wrong, then it's probably worth reading, but not worth believing. Right now all I can do is vote it up or down. I'd like to be able to vote for clarity and against content at the same time. And as long as I'm wishing, I'd also like to be able to vote just to vote, so we can have user-generated polls without needing a karma dump. And humor - that deserves its own category. Better feedback, better results. Or at least, so I believe, never having had better feedback.
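A minimal sketch of what per-category voting might look like as a data model; the category names and fields are illustrative, not an actual LW or Slashdot schema:

```python
from collections import Counter
from dataclasses import dataclass, field

CATEGORIES = {"content", "clarity", "humor", "poll"}

@dataclass
class Comment:
    text: str
    votes: Counter = field(default_factory=Counter)

    def vote(self, category: str, direction: int) -> None:
        # direction is +1 or -1; a comment can score high on clarity
        # while scoring low on content, as described above.
        assert category in CATEGORIES and direction in (-1, +1)
        self.votes[category] += direction

c = Comment("well worded, clear, logical, and dead wrong")
c.vote("clarity", +1)
c.vote("content", -1)
print(c.votes)  # Counter({'clarity': 1, 'content': -1})
```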

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-19T19:16:59.437Z · LW(p) · GW(p)

Conversely, we could establish the convention of downvoting stuff we consider valueless and upvoting stuff we consider valuable, and leave right and wrong out of it except insofar as voters value right things and antivalue wrong things. If we did that, we'd understand that highly upvoted comments were considered valuable, but not necessarily agreed with.

Oh, wait.

Sure, we could also create a mechanism whereby people could indicate whether they agreed with it (also whether they thought it was well-worded, clear, logical, funny, properly spelled, whether it rhymed, and various other attributes), but before doing that it's worth asking what the benefit of that would be.

I understand wanting to facilitate finding valuable comments and hiding valueless ones, but for the other stuff I'd like to see the benefits articulated, not just labelled "better".

Replies from: anotherblackhat
comment by anotherblackhat · 2012-04-19T20:56:24.003Z · LW(p) · GW(p)

The idea is to make it possible to say (by voting) "even though I think you're wrong, I'd like to hear more". The problem IMO with the current system is that the people who vote "I think that's wrong" drown out the people who vote "I think that's interesting". That may not be what's supposed to happen, but it seems to be what does happen. Would a "rhymes" button make sense? Sure - if you wanted to encourage rhyming posts. The GP wants to encourage contrarians and skeptics, so "like/dislike" and "agree/disagree" seemed appropriate. I haven't seen many of them on LW, but on other boards I really wish there were a "WTF? didn't understand your post" button, as I would press that one quite a bit. What buttons are best is a subject unto itself, but probably not worth discussing unless the basic concept is possible and worthwhile.

Replies from: TheOtherDave, JackV, NancyLebovitz
comment by TheOtherDave · 2012-04-19T21:06:11.555Z · LW(p) · GW(p)

Conversely, the impetus to make the basic concept possible might increase if someone made a compelling case for what value it would provide.

Incidentally, I'm not suggesting that people should upvote/downvote based on "interesting" rather than "true".

I'm suggesting people should upvote/downvote based on "want more like this."

That means if I see a true comment, and I want to see more true comments, I upvote it because it's true.
If I see a well-written comment, and I want to see more well-written comments, I upvote it because it's well-written.
If I see a rhyming comment, and I want to see more rhyming comments, I upvote it because it rhymes.
Etc.

Being able to tag a vote to indicate what attribute(s) I wanted more or less of would admittedly be clearer in ambiguous cases... I do sometimes find myself staring at a downvote wondering what the reason for it was.

That said, I'm not sure it would actually add much value.

comment by JackV · 2012-05-04T16:40:02.939Z · LW(p) · GW(p)

I think this is directly relevant to the idea of embracing contrarian comments.

The idea of having extra categories of voting is problematic, because it's always easy to suggest, but only worthwhile if people will often want to distinguish them, and distinguishing them will be useful. So I think normally it's a well-meaning but doomed suggestion, and better to stick to just one.

However, whether or not it would be a good idea to actually imlpement, I think separating "interested" and "agree" is a good way of expressing what happens to contrarian comments. I don't have first-hand experience, but based on what I usually see happening at message boards, I suspect a common case is something like:

  1. Someone posts a contrarian comment. Because they are not already a community stalwart, they also compose the comment in a way which is low-status within the community (eg. bits of bad reasoning, waffle, embedded in other assumptions which disagree with the community).

  2. Thus, people choose between "there's something interesting here" and "In general, this comment doesn't support the norms we want this community to represent." The latter usually wins except when the commenter happens to be popular or very articulate.

The interesting/agree distinction would be relevant in cases like this, for instance:

  • I'm pretty sure this is wrong, but I can't explain why, I'd like to see someone else tackle it and agree/disagree
  • I think this comment is mostly sub-par, but the core idea is really, really interesting
  • I might click "upvote" for a comment I thought was funny, but want a greater level of agreement for a comment I specifically wanted to endorse.

There's a possibly similar distinction between Stack Overflow and Stack Overflow Meta, because negative votes affect user rank on Stack Overflow but not on Meta. On Stack Overflow, voting generally refers to perceived quality. On Meta, it normally means agreement.

I'm not sure I'd advocate this as a good idea, but it seemed an interesting possibility given the problem proposed. FWIW, if it were implemented, it'd want a lot of scrutiny and brainstorming, but my first reaction would be to leave the voting as supposedly meaning "interesting", and usually sort by that, but add a secondary vote meaning "agree" or "disagree" or similar terms that can add a nuance to it.

Edit: Come to think of it, a similar effect is achieved by a social convention of upvoting the comment but also upvoting a reply that says "this part good, this part bad". If that happens, it should fulfil the same niche, but I don't know if it is happening enough.

comment by NancyLebovitz · 2012-04-19T22:28:55.246Z · LW(p) · GW(p)

"Even though I think you're wrong, I'd like to hear more" strikes me as better expressed as a comment rather than a vote.

That way, you can explain what you want to hear more about.

Replies from: vi21maobk9vp, anotherblackhat
comment by vi21maobk9vp · 2012-04-20T17:50:54.915Z · LW(p) · GW(p)

Vote + comment is even better: you can sort by votes.

There are topics here on LW where I would prefer to read only threads with high "wrong but interesting" scores.

comment by anotherblackhat · 2012-04-19T23:35:10.000Z · LW(p) · GW(p)

I'd much rather get a reply than a vote.
But presumably there's a reason for the current system rather than the arguably simpler method of not having up/down buttons.

comment by Will_Newsome · 2012-04-19T04:13:48.599Z · LW(p) · GW(p)

Some advice for wannabe contrarians and trolls, here. (Muflax seems to be in the middle of re-designing his blog so the link might not be 100% stable.)

comment by private_messaging · 2012-05-14T10:51:06.761Z · LW(p) · GW(p)

I think we can now see how the situation evolved: SI ignored what the 'contrarians' (i.e. the mainstream) said, the views they formed after reading SI's arguments, etc.

SI then went to talk to GiveWell, and the presentation resulted in Holden forming the same view - if you strip his statement down to its bare bones, he says that he thinks giving money to SI results in either no change or an increase in risk, as the approach SI advocates is more dangerous than the current direction, and the rationale given has already been available (but has been ignored).

Ultimately, it may be the case that SI's arguments, when examined in depth by a random outsider, typically result in a strongly negative opinion of SI, but sometimes result in a positive opinion. The people who form a positive opinion seem to be a significant fraction at LW - ultimately, if you examine the AI-related arguments here and form a negative opinion, you'll be far less interested in trying to learn rationality from those people.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-06-25T10:40:45.814Z · LW(p) · GW(p)

Is Holden's view really the same as the mainstream view, or is it just a surface similarity?

For example, a typical outsider would doubt SIAI's abilities because he thinks intelligent machines belong to sci-fi, not real life; Holden worries about a lack of credentials. Among those who think intelligent machines are possible, a typical person thinks it will be OK, because obviously the machines will do only what we tell them to do; Holden worries that a (supposedly) Friendly AI is riskier than a "Tool AI". Etc.

Replies from: private_messaging
comment by private_messaging · 2012-06-25T11:44:38.479Z · LW(p) · GW(p)

Mainstream meaning the people with credentials that Holden was referring to (whose views are somewhat echoed by everyone else). The kind of folk who will not be swayed by some sort of mental confusion between common discourse ("the function of the AI is to make paperclips") and technical discourse, where a utility function is a mathematical function that is part of the specific design of a specific AI architecture. The same kind of folk, if they came across the Russian-mathematician name-dropping that's going on here, and after they politely exhausted the possibility that they had misunderstood, would be convinced that this is some complete pile of manure arising from an utterly incompetent person reporting his awesome misunderstandings of advanced mathematics he read in a popularization book. Second-order bad science popularization. I don't even care about AI any more. It boggles my mind that there's an entire community of people who just go around having such a gross lack of understanding of the things they are talking about.

edit: This stuff is only tolerated because it sort of promotes interest in mathematics. To be fair, even a very gross misunderstanding of mathematics may serve a good function if a person passionately talks of the importance of the mathematics he misunderstood. But once you start seriously pushing nonsense forward - you're out. This whole thing reminds me of an experience with an entirely opposite but equally dumb point: some guy with good verbal skills read Gödel, Escher, Bach, thought he understood Gödel's incompleteness theorem, and imagined that understanding it implied that humans are capable of hypercomputation (beyond a Turing machine). It's literally impossible to talk sense into such cases. They don't understand the basics but jump ahead to the highly advanced topics, which they understand metaphorically. Not having properly studied mathematics, they do not understand how much care is required not to screw up (especially when bordering on philosophy). That can serve a good function, yes: someone sees the One Truth in, say, Solomonoff induction, and someone else actually learns the mathematics, which is interesting in its own right even though it doesn't disprove God or accomplish anything equally interesting.

comment by duckduckMOO · 2012-04-19T12:19:56.999Z · LW(p) · GW(p)

Haven't read it yet, but you can start by not calling anyone who disagrees with the established view a contrarian. It implies anyone who disagrees is doing so to play out a role rather than out of actual disagreement.

edit: so it seems that people playing out a role are exactly what you want more of. I assumed you were using "how can we get more contrarians" as codespeak for "how can we get more disagreement". If you just want more actual "contrarians", well, I'm not sure "contrarians" is a real category. In any case it's not the relevant category. What you want is people who like criticising things, not people who like disagreeing with established opinion (again, I really have to emphasise how ridiculous the way "contrarian" is used is. It's blatantly a story someone has made up to ad hominem away criticisms of standard ideas).

For my part I would not feel comfortable finding fault in everything I see here. I know I can do it, I just don't think it would go down well. Not that it tends to go down well in many other places either. Part of the problem is something like people being too comfortable talking in terms of, e.g., evolution's intentions, so good criticisms can be dismissed as pedantry.

I might make a contrarian account though and see how well that goes down.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-20T04:41:22.936Z · LW(p) · GW(p)

It implies anyone who disagrees is doing so to play out a role rather than out of actual disagreement.

I don't think that's the standard definition of contrarian.

comment by Richard_Kennaway · 2012-04-18T22:53:51.971Z · LW(p) · GW(p)

I don't see a problem with driving "contrarians" away. That is what we should be doing.

To be a "contrarian" is to have written a bottom line already: disagree with everything everyone else agrees with.

To be a "contrarian" among smart people is to adopt reversed intelligence as a method of intelligence.

To be a "contrarian" among stupid people is, like American football, something that you have to be smart enough to do but stupid enough to think worth doing.

To be a "contrarian" is to limit oneself to writing against. I am not interested in what anyone is against until I have seen what they are for.

To be a "contrarian" is the safe and easy path. It is easy, because you can find good arguments against everything, as nothing is perfect. It is safe, for you can take agreement and disagreement alike as confirmation. Like most safe and easy paths, nothing is achieved along it.

To style oneself a "contrarian" is a giant red warning light that the person has nothing useful to say. That rule has not failed me yet.

Replies from: Wei_Dai, None, Eugine_Nier, timtyler, Viliam_Bur, Will_Newsome
comment by Wei Dai (Wei_Dai) · 2012-04-18T23:12:41.804Z · LW(p) · GW(p)

Yes, being a "contrarian" is irrational for the individual, but may be good for the group. I wouldn't try to turn someone into a "contrarian" for my own benefit, but I don't feel qualms about making better use of people who already are.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T06:28:44.027Z · LW(p) · GW(p)

Yes, being a "contrarian" is irrational for the individual, but may be good for the group.

Jesus was a contrarian and the most rational person ever. I think Jesus and Vassar agree with me. Unless you're twisting "contrarian" to mean something dumb by definition. Or are you knowingly going along with Kennaway's trolling? Hm...

comment by [deleted] · 2012-04-18T23:04:55.888Z · LW(p) · GW(p)

I think there's a difference between "contrarian about X" and "contrarian". The former has (hopefully) looked at the evidence around X and come to a position on X that differs from the mainstream. The latter values being different over being right.

I think the first sort can be valuable, and shouldn't be driven away.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-04-18T23:15:33.468Z · LW(p) · GW(p)

Wei Dai's first sentence only talks about the second sort, and I wouldn't call someone who has come to a position on X that differs from the mainstream a "contrarian about X". If they call themselves that, then instead of simply being able to present their arguments, they have tied their identity to being in opposition, and the whole downward spiral I described comes into play.

Replies from: chaosmosis
comment by chaosmosis · 2012-04-19T00:37:33.816Z · LW(p) · GW(p)

There's no problem with identifying with arguments and wanting to defend certain positions if you are open to arguments and evidence against your position. It's actually convenient to do so for the purposes of discussion and advocacy.

Most people here are probably "transhumanists", which connects their beliefs to their identity, but that doesn't mean they wouldn't change their minds or alter their beliefs if they saw evidence against transhumanism. Describing specific traits that apply to you and your positions shouldn't make you reluctant to change those positions, and identifying with specific advocacy groups is probably inevitable anyway.

I don't think you're really addressing what Wei Dai's original post is actually discussing. I think that it should be apparent that Wei Dai isn't advocating having more closeminded commenters, but is advocating a more diverse set of viewpoints and advocacies. You're dismissing the overall point being made, based on an interpretation of "contrarian" that doesn't make sense when viewed in the context of the advocacy statement within the original post. Even if you're right about what "contrarian" means, please mentally replace every instance of "contrarian" with "person advocating something unpopular", and that will make this discussion much more productive.

I agree that tying one's identity to opposition specifically is bad, though. That's political paralysis as a consequence of misguided cynicism. If you reject every position then you can advocate nothing. That's not just ineffective, it's a horrible way to live. Affirmation is good.

Replies from: wedrifid
comment by wedrifid · 2012-04-19T00:43:26.404Z · LW(p) · GW(p)

I don't think you're really addressing what Wei Dai's original post is actually discussing. I think that it should be apparent that she isn't advocating having more closeminded commenters.

As far as I know Wei Dai is male.

Replies from: chaosmosis, Alicorn
comment by chaosmosis · 2012-04-19T00:57:57.318Z · LW(p) · GW(p)

I realized while writing the post that I didn't know his gender and proceeded to edit as fast as I could, but you people still caught the mistake before I fixed it; I'm embarrassed. At least it's better to use "she" than "he" as my default assumption (balances against gendered language in favor of men, etc.). Although on second thought it probably indicates that I associate civility with females, which is stupid and unfair and can't be intentionally controlled by me anyway, so it's not really worth lamenting.

But, sorry, Wei Dai, although it was just an accident and I doubt you'll care much.

Replies from: wedrifid
comment by wedrifid · 2012-04-19T01:05:27.127Z · LW(p) · GW(p)

Although on second thought it probably indicates that I associate civility with females, which is stupid and unfair and can't be intentionally controlled by me anyway, so it's not really worth lamenting.

It makes a difference that there are some Wei Dais who are female.

I probably wouldn't default to associating anti-consensus advocacy with being female. That goes against a notorious (and, as far as I know, reasonably well-founded) stereotype.

Replies from: chaosmosis
comment by chaosmosis · 2012-04-19T02:45:36.103Z · LW(p) · GW(p)

I was thinking and perceiving in terms of tone rather than in terms of advocacy statement.

Someone else mentioned somewhere that essentially Wei Dai is very good at disagreeing politely.

comment by Alicorn · 2012-04-19T00:44:17.550Z · LW(p) · GW(p)

As far as I know Wei Dai is male.

I've met him in person, and this is the case.

comment by Eugine_Nier · 2012-04-19T00:50:00.096Z · LW(p) · GW(p)

I sometimes argue in favor of positions I don't really believe (i.e., assign p < .5 to) if I think the probability is higher than the general consensus holds, and I suspect at least Will Newsome frequently does the same.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T01:05:46.154Z · LW(p) · GW(p)

Yes, but it's often a hassle. You risk being accused of trolling, overconfidence, &c., and it's difficult to claim that such accusations don't have some tinge of truth.

I suspect it's not overall a very good habit and that I bring it to LessWrong mostly because it happens to work well in my personal rationality practice. On LessWrong it's probably better to put in a little extra work to find a way to go meta—don't support a side, but show clear not-introspectively-obvious reasons why someone could hold a belief that was to them introspectively obvious and thus difficult to explain. I generally like the anti-democracy LW commenters because they seem to have practiced this skill.

comment by timtyler · 2012-04-19T11:30:43.196Z · LW(p) · GW(p)

"Contrarian" is to have written a bottom line already: disagree with everything everyone else agrees with.

Contrarians get to pick and choose their battle grounds. All they have to do to be right is to seek out places where a lot of people are wrong.

comment by Viliam_Bur · 2012-04-19T11:06:55.364Z · LW(p) · GW(p)

This comment should have 99 upvotes and should be moved to "Main" as a separate article. Then we should link it whenever the same topic appears again.

Reversing group-think is like reversing stupidity, or like underconfidence at the group level. It can be done. It can be interesting. But I prefer reading rational people's best estimates of reality. And I prefer disagreement based on genuine experience and belief, not on someone feeling a duty to artificially maintain diversity.

If you disagree with whatever, for example the many-worlds interpretation, say it. Say "I disagree because of X and Y". Or say "I disagree, because it feels wrong, and because many people disagree, including some experts in the field (which is good Bayesian evidence)". That's all OK. But don't say or imply things like "we should attract more people who disagree with the many-worlds interpretation, to keep our discussion balanced". That is manipulating evidence.

If anything, we should discuss a wider range of topics. Then naturally we will attract people who agree on N-1 topics and disagree on 1 topic; and they will say it, and we will know they mean it.

comment by Will_Newsome · 2012-04-19T06:27:49.099Z · LW(p) · GW(p)

Hm... I think you're lying to be contrary. E.g.:

To style oneself a "contrarian" is a giant red warning light that the person has nothing useful to say. That rule has not failed me yet.

I think you think Robin Hanson and Eliezer Yudkowsky have useful things to say. Both have styled themselves contrarians.

Your points are clearly dumb cliches—I think you did that purposefully, but I think the way in which you did it is self-contradictory, and thus your meta-level point would also be invalid. So maybe you're calling attention to the meta-level problem of determining what a "contrarian" is?

comment by Manfred · 2012-04-19T02:09:23.283Z · LW(p) · GW(p)

This could be rephrased more positively :D

If someone has something they may well be right about, and you don't learn it, that's a problem. Or if they make an argument that you know is wrong from parallel lines of evidence but can't say why it's wrong, that's a slightly smaller problem. And it's a problem with you, not with them. This is a general principle of disagreement. This post is the charge that we are bad at learning from people.

Hmm. Or maybe that's not right. We could be learning from them (on average), but still driving them away because what seemed like constructive argument from one side didn't from the other. In which case, that's fine and you shouldn't listen to this comment :P

Replies from: billswift
comment by billswift · 2012-04-19T14:33:34.848Z · LW(p) · GW(p)

but still driving them away because what seemed like constructive argument from one side didn't from the other.

Or still driving them away because the comment stream petered out before people got around to expressing their changed viewpoint, and the contrarian left because he never realized he was having an impact. The post-and-comment format isn't really very good for a serious back-and-forth discussion, especially when posts are so briefly on the front page. (Note that this is another good reason for getting meet-up announcements OFF of the discussion page.)

comment by thomblake · 2012-04-18T22:05:05.192Z · LW(p) · GW(p)

It's so difficult to find someone who will communicate on our level and yet disagrees on object-level things.

Probably the best way to get more contrarians, is for folks from Less Wrong to learn from people outside the community, change their own beliefs because of it, and come back to share their wisdom with the masses.

Okay, that sounded better in my head too.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-04-18T23:03:18.496Z · LW(p) · GW(p)

It's so difficult to find someone who will communicate on our level and yet disagrees on object-level things.

Is this because people smart enough to communicate on our level largely agree with a lot of what is generally agreed on here, for the same reason that most people all agree that 2+2=4?

Or is it because LessWrong is, for reasons unconnected with rationality, largely drawn from a certain very narrow demographic range, who grab onto this constellation of ideas like an enzyme to its substrate, and "communicating on our level" just means being that sort of person?

Replies from: thomblake, vi21maobk9vp
comment by thomblake · 2012-04-18T23:47:16.201Z · LW(p) · GW(p)

Probably both, mostly the latter. Note that "being that sort of person" refers to the demographic range, and not necessarily to agreeing with those ideas.

comment by vi21maobk9vp · 2012-04-24T06:51:16.798Z · LW(p) · GW(p)

It is not just about demographics.

You are supposed to be familiar with many standard arguments; but many of them make no sense if you have different priors, because they have too little evidence on their side (the AI researcher interview series seems to illustrate well that some kinds of experience can give you evidence against a few key points).

If you find Hanson's arguments about the core of the FOOM concept stronger than Eliezer's, you will have less incentive to familiarize yourself with everything you would need to communicate on what you called "our level", because that material makes no sense without this key point.

So disagreement on the object level at the very beginning leads to unfamiliarity with the required material. Nothing too strange here.

comment by Grognor · 2012-04-24T20:14:12.168Z · LW(p) · GW(p)

To what degree should the lack of good contrarians be taken as evidence that LW "consensus" (scare quotes because the like-mindedness of this community is overestimated [1]) is true?

People are always talking about how the Less Wrong arguments are good viewed from the inside but not the outside, so this question is important: it is an outside-view consideration that, unlike most others, counts in favor of the Less Wrong mentality, which is usually only justified from inside the arguments.

Replies from: AlanCrowe
comment by AlanCrowe · 2012-04-24T20:39:38.397Z · LW(p) · GW(p)

Asymmetrical motivation is the problem. If you disagree with a mainstream position, arguing against it feels worthwhile. If you agree with a fringe position, arguing in favour of it feels worthwhile. But if you disagree with a fringe position, why bother?

Where the LW "consensus" agrees with the mainstream, then the lack of good contrarians (who would feel their time well spent) is evidence of a sort that the LW "consensus" is true. But such weak evidence is hardly needed.

But where the LW "consensus" is itself the fringe position, we expect that good contrarians would have better things to do than try to set us straight. Thus the lack of good contrarians is both what we expect when a fringe LW "consensus" position is true (which makes it hard to dispute) and when it is false (why bother?). Consequently the lack of good contrarians tells us nothing at all in exactly the case when we look to it for clues.
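Restating this in Bayesian terms may help; write H for "the fringe LW 'consensus' position is true" and E for "no good contrarians show up" (symbols introduced here purely for illustration):

\[
\frac{P(H \mid E)}{P(\lnot H \mid E)} = \frac{P(E \mid H)}{P(E \mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)} \approx 1 \cdot \frac{P(H)}{P(\lnot H)}
\]

Since the argument holds that E is roughly equally expected whether H is true ("hard to dispute") or false ("why bother?"), the likelihood ratio is near 1 and observing E leaves the prior odds essentially unchanged.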

Replies from: Grognor
comment by Grognor · 2012-04-24T21:02:41.604Z · LW(p) · GW(p)

Good point.

comment by Incorrect · 2012-04-18T23:30:31.661Z · LW(p) · GW(p)

I completely disagree. The optimal number of contrarians is 0.

Replies from: TimS, orthonormal
comment by TimS · 2012-04-18T23:48:33.368Z · LW(p) · GW(p)

What is the optimal number of people who are intelligent but, on reflection, don't agree with the LessWrong consensus?

Replies from: Incorrect
comment by Incorrect · 2012-04-18T23:51:48.357Z · LW(p) · GW(p)

Give me your answer to that question before I answer.

Replies from: TimS
comment by TimS · 2012-04-18T23:58:33.667Z · LW(p) · GW(p)

I'd guess that somewhere between 1/4 and 1/3 of the current active LessWrong community should be willing to intelligently disagree with consensus - if our goal is to improve our theories of how society does and should work.

Replies from: Incorrect
comment by Incorrect · 2012-04-19T00:00:29.296Z · LW(p) · GW(p)

I completely disagree.

Replies from: TimS, taelor
comment by TimS · 2012-04-19T00:05:15.492Z · LW(p) · GW(p)

Is there an answer (other than zero), that you wouldn't completely disagree with? If not, why did you ask me for my number first?

FWIW, I don't think "willingness to intelligently disagree with consensus" = contrarian. Disagreeing for the simple purpose of disagreeing is pointless.

Replies from: Incorrect
comment by Incorrect · 2012-04-19T00:10:15.172Z · LW(p) · GW(p)

Is there an answer (other than zero), that you wouldn't completely disagree with?

I would disagree with you if you said zero too.

Replies from: Dorikka
comment by Dorikka · 2012-04-19T00:22:47.568Z · LW(p) · GW(p)

If this chain of posts is a joke, I don't think I get it. If it's not, I am mildly amused.

Replies from: None, Eugine_Nier, TimS
comment by [deleted] · 2012-04-19T00:33:17.005Z · LW(p) · GW(p)

I think it's a meta-joke. Incorrect is a hyper-contrarian arguing about how many contrarians there should be :)

Replies from: CarlShulman
comment by CarlShulman · 2012-04-19T00:58:28.934Z · LW(p) · GW(p)

Not only that, but in an uninformative and confrontational manner, posing the problem of how to respond to generate better contrarianism.

comment by Eugine_Nier · 2012-04-19T01:54:04.161Z · LW(p) · GW(p)

TimS is encouraging people to be more contrarian, so Incorrect is disagreeing with him.

Replies from: TimS
comment by TimS · 2012-04-19T14:15:56.772Z · LW(p) · GW(p)

contrarian != willing to intelligently disagree with consensus

comment by TimS · 2012-04-19T00:51:37.991Z · LW(p) · GW(p)

I'm not joking, but it's pretty clear Incorrect is. I'm not amused, but the joke is basically at my expense, so that's not very good evidence of whether Incorrect was actually amusing.

Replies from: thomblake
comment by thomblake · 2012-04-19T00:54:04.479Z · LW(p) · GW(p)

I'm not amused, but the joke is basically at my expense, so that's not very good evidence of whether Incorrect was actually amusing.

Speaking as one who often upvotes bad jokes...

No.

Replies from: thomblake
comment by thomblake · 2012-04-25T22:21:46.955Z · LW(p) · GW(p)

For clarification, I only upvote good bad jokes.

comment by taelor · 2012-04-19T06:51:14.986Z · LW(p) · GW(p)

Is this the right room for an argument?

Edit: I seem to have failed my spot test to notice that someone else in the thread had already linked to the same video.

comment by orthonormal · 2012-04-19T02:48:15.867Z · LW(p) · GW(p)

It's unlikely that the "LW mainstream position" is currently right about all of its weird beliefs, though I wouldn't be surprised if we're right to take each of the ideas more seriously than the normal mainstream does.

EDIT: never mind, I didn't catch that you were doing this.

comment by Will_Newsome · 2012-04-19T01:13:27.775Z · LW(p) · GW(p)

Of course what is optimal might be open to debate, but from my perspective, it can't be right that my own criticisms are valued so highly (especially since I've been moving closer to the SingInst "inner circle" and my critical tendencies have been decreasing). In the spirit of making oneself redundant, I'd feel much better if my occasional voice of dissent is just considered one amongst many.

comment by chaosmosis · 2012-04-20T19:58:39.001Z · LW(p) · GW(p)

Tangentially related: I was in the HPMOR thread and noticed that there's a strong tendency to reward good answers but only a weak tendency to reward good questions. The questions are actually more important than the answers because they're a prerequisite to the answers, but they don't seem to be being treated as such. They have roughly half as much reputation as the popular answers do, which seems unfair.

I would guess that this extends to the rest of the site as well, as it's a fairly common thing that humans do. Things would probably be better here if we tried to change that. As a rough rule of thumb, we should make it our general policy to upvote a question if the question itself is not stupid and it results in an answer that is insightful and deserves an upvote.

I tried to not use "we" in this comment but then it was grammatically incoherent and it wasn't worth the effort of fixing it.

Replies from: Nornagest
comment by Nornagest · 2012-04-20T20:09:19.827Z · LW(p) · GW(p)

Disagree. Insightful-sounding questions are much, much easier to come up with than genuinely insightful answers, so despite the fact that the former is a prerequisite to the latter, rewarding them equally would create perverse incentives.

At least, that's true if our goal is to maximize the number of insightful results we generate -- which seems like a pretty reasonable assumption to me.

Replies from: chaosmosis
comment by chaosmosis · 2012-04-23T02:25:32.237Z · LW(p) · GW(p)

You cheated. You're comparing "insightful-sounding questions" to "genuinely insightful answers". Of course the genuine answers are going to come out ahead; that's completely unfair to the suggestion. But assuming that people on LessWrong actually have the ability to distinguish between insightful-sounding questions and genuinely insightful questions (which seems just as easy as distinguishing between insightful-sounding answers and genuinely insightful answers, btw), the proposal makes sense.

Your comment does not contain an argument. It contains a blatantly flawed framing of the proposal I put forward and a catchphrase, "perverse incentives", and you don't explain the thought that goes into that catchphrase. You never articulate what the actual impact of these perverse incentives would look like, or how they would arise. Do you anticipate that if more people upvoted questions we would end up with fewer good results? I do not see how such an outcome would occur, and I see zero reason to believe the "perverse incentives" you reference would arise.

There's a huge tendency within academia to ignore anything with partial solutions or doubts or blank spaces, and to undervalue questioning. Questions are inherently low status because they explicitly reveal a large gap of knowledge that cannot easily be overcome by the asker, and also have an element of submission to the "more intelligent" person who will answer the question. My suggestion is designed to counterbalance that. The best way to maximize the number of insightful thoughts and results you have is to ask insightful questions; that seems like a very reasonable assumption to me.

Moreover, putting forth the question which took place at an earlier point in the thought process allows others to more easily understand whatever conclusions you may or may not reach. It also allows people to take that question along different avenues of thought to reach useful conclusions that you would not have even considered.

Now, clearly we don't want to ask questions for the sake of asking questions. But good questions are extremely important and should be encouraged. Upvoting more questions than usual and asking more questions as a general rule is therefore a good idea. The proposal can be selectively applied by the intelligent commenters of LessWrong, and none of the "perverse incentives" you envision will occur or do any damage to the site.

Replies from: Nornagest
comment by Nornagest · 2012-04-23T03:18:02.016Z · LW(p) · GW(p)

"Perverse incentives" isn't a LW catchphrase. It's a term from economics, used to describe situations where external changes in the incentive structure around some good you want to maximize actually end up maximizing something else at its expense. This often happens when the thing you wanted to maximize is hard to quantify or has a lot of prerequisites, making it easier to encourage things by proxy -- which sometimes works, but can also distort markets. Goodhart's law is a special case. I'd assumed this was a ubiquitous enough concept that I wouldn't have to explain it; my mistake.

In this case, we've got an incentive (karma) and a goal to maximize (insightful results, which require both a question and a promising answer to it). In my experience, which you evidently disagree with, judging the fruitfulness of questions (other than the trivial or obviously flawed) is difficult without putting effort into analyzing them: effort which is unproductive if expended on a dead-end question. Also in my experience, questions are cheap if you're already closely familiar with the source material, which most of the people posting in the MoR threads probably are. If I'm right about both of these points, valuing insightful-sounding questions on par with insightful-sounding answers creates a karma disincentive to spend time in analysis of open questions (you could spend the same time writing up new questions and be rewarded more), and a proportionally lower number of results.

There are a number of ways this could fail in practice: the question or answer space might be saturated, or people's inclinations in this area might be insensitive to karma (in which cases no amount of incentives either way would help). One of the premises could be wrong. But as marginal reasoning, it's sound.
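To make the marginal-incentive argument concrete, here is a minimal back-of-the-envelope sketch in Python; every time and karma figure in it is an assumed number chosen purely for illustration, not data about actual LW voting:

```python
# Toy model: karma earned per hour under two reward schemes.
# All numbers are illustrative assumptions, not measurements.

QUESTION_MINUTES = 10  # assumed time to pose a plausible-sounding question
ANSWER_MINUTES = 60    # assumed time to analyze a question and write an answer

def karma_per_hour(karma_reward: float, minutes_spent: float) -> float:
    """Karma earned per hour of effort, given a flat reward per post."""
    return karma_reward * 60.0 / minutes_spent

# Scheme A: questions and answers rewarded equally (10 karma each).
equal_q = karma_per_hour(10, QUESTION_MINUTES)  # 60 karma/hour
equal_a = karma_per_hour(10, ANSWER_MINUTES)    # 10 karma/hour

# Scheme B: rewards roughly proportional to effort (2 vs 12 karma).
prop_q = karma_per_hour(2, QUESTION_MINUTES)    # 12 karma/hour
prop_a = karma_per_hour(12, ANSWER_MINUTES)     # 12 karma/hour

print(f"Equal rewards:       questions {equal_q:.0f}/h, answers {equal_a:.0f}/h")
print(f"Effort-proportional: questions {prop_q:.0f}/h, answers {prop_a:.0f}/h")
```

Under the equal scheme, an hour spent generating questions pays six times as much karma as an hour spent answering them, which is exactly the disincentive described above.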

Replies from: chaosmosis
comment by chaosmosis · 2012-04-23T14:28:20.578Z · LW(p) · GW(p)

This is all reasoning that should have been made explicit in your comment. Your objection has good thoughts behind it, but I had no way of knowing that from your previous comment. I knew that "perverse incentives" was an economic catchphrase, but I thought you were just referencing it without reason, because you made no attempt to describe why the perverse incentives would arise or why the LessWrong commenters would have a difficult time distinguishing intelligent questions from dumb questions. I thought you were treating the economic catchphrase like phlogiston. If your above thought process had been described in your comment, it would have made much more sense.

In my experience, which you evidently disagree with, judging the fruitfulness of questions (other than the trivial or obviously flawed) is difficult without putting effort into analyzing them: effort which is unproductive if expended on a dead-end question.

Isn't this the same with answers? I don't see why it wouldn't be.

Also in my experience, questions are cheap if you're already closely familiar with the source material, which most of the people posting in the MoR threads probably are.

Isn't this the same with answers? I don't see why it wouldn't be.

If I'm right about both of these points, valuing insightful-sounding questions on par with insightful-sounding answers creates a karma disincentive to spend time in analysis of open questions (you could spend the same time writing up new questions and be rewarded more), and a proportionally lower number of results.

This only makes sense if people are rational agents. Given that you've already conceded that we irrationally undervalue good questions and questioners, doesn't it make more sense that actively trying to be kinder to questioners would return the question/answer market to its objective equilibrium, thus maximizing utility?

I note the irony of asking questions here but I couldn't manage to express my thoughts differently.

Replies from: Nornagest
comment by Nornagest · 2012-04-23T19:48:35.602Z · LW(p) · GW(p)

Isn't [the difficulty of judging questions] the same with answers? I don't see why it wouldn't be.

If you come up with a good (or even convincing) answer, you've already front-loaded a lot of the analysis that people need to verify it. All you need to do is write it down -- which is enough work that a lot of people don't, but less than doing the analysis in the first place.

Isn't [familiarity discounting for questions] the same with answers? I don't see why it wouldn't be.

It helps, but not as much. Patching holes takes more original thought than finding them.

This only makes sense if people are rational agents.

It makes sense if people respond to karma incentives. If they don't, there's no point in trying to change karma allocation norms. The magnitude of the incentive does change depending on how people view the pursuits involved, but the direction doesn't.

Given that you've already conceded that we irrationally undervalue good questions and questioners...

I didn't say this.

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-04-24T06:30:02.349Z · LW(p) · GW(p)

It makes sense if people respond to karma incentives. If they don't, there's no point in trying to change karma allocation norms. The magnitude of the incentive does change depending on how people view the pursuits involved, but the direction doesn't.

Actually, changing karma allocation norms could change the visibility of unanswered questions judged interesting.

This can be an end in itself, or an indirect karma-related incentive.

comment by siodine · 2012-04-20T01:13:10.711Z · LW(p) · GW(p)

I've noticed there have been a dozen or more threads and suggestions like this one; has anything ever come from them? These suggestions are starting to look like simple opportunities for circle-jerking. Who would even decide on and implement these things? Yudkowsky?

comment by Thomas · 2012-04-20T21:56:05.644Z · LW(p) · GW(p)

Somebody who is right does not need a contrarian that badly. Someone who is wrong needs one. But just about everybody thinks his own contrarian is not a particularly good one.

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-04-21T08:17:59.046Z · LW(p) · GW(p)

Are there "people/communitites who are right"? There are usually ones who are right about some things, wrong about other things.

Motivation to find contrarians can stem from two directions: to be less wrong even if a falsehood is temporarily considered proven; and to get a wider set of ideas when brainstorming.

Note that when brainstorming you benefit from completely unfeasible but relevant ideas. Wild ideas give new points of view and increase the range of feasible ideas you can think of.

comment by Will_Newsome · 2012-04-19T06:43:09.863Z · LW(p) · GW(p)

By LessWrong standards, Catholicism—the most popular monolithic ideology on Earth—is insanely contrarian. I am given to understand there is no shortage of philosophically inclined Catholic intellectuals. Maybe you could woo them.

(Disclaimer: I happen to think the Catholics are right about pretty much everything, especially the important stuff.)

Replies from: wedrifid, siodine
comment by wedrifid · 2012-04-19T08:49:25.489Z · LW(p) · GW(p)

is insanely contrarian

Emphasis on the insane. It's based on plainly absurd mythological nonsense used to maintain status hierarchies.

(Disclaimer: I happen to think the Catholics are right about pretty much everything, especially the important stuff.)

You are blatantly trolling. Does replying to you rather than systematically downvoting constitute feeding a troll?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T09:14:40.407Z · LW(p) · GW(p)

Emphasis on the insane. It's based on plainly absurd mythological nonsense used to maintain status hierarchies.

By their fruits shall ye know them, not by their roots. And I strongly disagree in any case.

You are blatantly trolling.

I'm not actually trolling. You should consider thickening the tails on your models of why I do or say things. I am seriously considering officially becoming Catholic—that's how impressed with them I am.

Does replying to you rather than systematically downvoting constitute feeding a troll?

I'm not trolling, but if I were trolling, then yes, I think responding to me would constitute feeding me. (Seems to me like the answer is obvious, maybe the question was rhetorical for some reason.)

Replies from: MixedNuts, wedrifid
comment by MixedNuts · 2012-04-20T09:43:14.013Z · LW(p) · GW(p)

Please write a post, or several posts, in Discussion or off-site, about why you're impressed by Catholicism, about the equivalence you draw between theological and mathematical concepts, and about all that stuff you've written vague comments on. I would especially like it to address why you like Christianity when other religions are so much prettier.

Please also shut up about religion unless someone brings it up first.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-20T10:01:01.565Z · LW(p) · GW(p)

Please write a post, or several posts, in Discussion or off-site, about why you're impressed by Catholicism, about the equivalence you draw between theological and mathematical concepts, and about all that stuff you've written vague comments on. I would especially like it to address why you like Christianity when other religions are so much prettier.

I plan on writing a treatise some time in the next year that should address the more technical stuff, though it won't touch much on why I like Catholicism in particular. Not sure I agree other religions are much prettier—do you mean you find their concepts and perspectives more conceptually aesthetic? I think Catholicism is a lot more morally complex than many other religions. One way to make the comparison is with architecture: the optimization pressure put into architecture can act as a proxy measure for the optimization pressure put into the culture as a whole, including its philosophical and moral aspects. Anyway, the only other religion I'm familiar with is Theravada Buddhism; it's possible I'm overestimating the value of Catholicism simply due to lack of variety of knowledge.

Please also shut up about religion unless someone brings it up first.

No thanks, at least not categorically.

Replies from: MixedNuts
comment by MixedNuts · 2012-04-20T11:55:47.803Z · LW(p) · GW(p)

Goody! (Unless it won't be online, in which case non-goody.)

Not sure I agree other religions are much prettier—do you mean you find their concepts and perspectives more conceptually aesthetic?

Yes, with the reservation that I don't actually understand your rephrasing.

I think Catholicism is a lot more morally complex than many other religions.

Judaism all the way, baby. I don't actually know all the complexities of Catholicism (can haz link?), but I've been to a Catholic school and grok the general aesthetic of most big brands of Christianity. It likes close obedience to rigid rules (yay!) and submission (meh), hates anything pleasurable (feh) and clever thinking (boo), and drops everything Judaism did like a hot potato (noooo!). This covers the Puritans and Augustine, but apparently not the parts of Catholicism you're talking about. I'm surprised that you think it's complex, because the only thing I really like about the brand of Catholicism I got is that it's simple. Maybe you mean complicated theology, like the casuists did?

comment by wedrifid · 2012-04-19T09:21:30.708Z · LW(p) · GW(p)

By their fruits shall ye know them, not by their roots.

Their fruits are worse! (But some of those fruits - with the violent oppression and suchlike - you have said they should have done more of.)

I'm not actually trolling. You should consider thickening the tails on your models of why I do or say things. I am seriously considering officially becoming Catholic—that's how impressed with them I am.

The latter precludes the former in my way of modelling internet contributions.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T09:28:26.012Z · LW(p) · GW(p)

The latter precludes the former in my way of modelling internet contributions.

Ah, I see. Unfortunate that "trolling" is so ambiguous as to whether it's about results or motivations (i.e.(?), immediate results or expected future results (potentially conditional on feeding/anti-feeding)). This results in, e.g., Eliezer calling XiXiDu a troll even when XiXiDu clearly isn't trolling in the conative sense. Steve suggested ghost netting for the non-conative case.

Replies from: Eugine_Nier, wedrifid
comment by Eugine_Nier · 2012-04-20T04:23:05.545Z · LW(p) · GW(p)

Interesting, it appears that in some contexts the word "troll" is acquiring a usage similar to the word "fascist".

comment by wedrifid · 2012-04-19T09:34:53.735Z · LW(p) · GW(p)

Ah, I see. Unfortunate that "trolling" is so ambiguous as to whether it's about results or motivations

It's probably a silly term. I should use it less - but would like there to be convenient replacements.

This results in, e.g., Eliezer calling XiXiDu a troll even when XiXiDu clearly isn't trolling in the conative sense. Steve suggested ghost netting for the non-conative case.

XiXiDu does troll in all senses of the term sometimes (according to both observed behavior and explicit self-descriptions). It isn't consistent. Eliezer's usage is correct, and it would have been better for LessWrong in general if this had been identified by more people earlier.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T09:47:05.557Z · LW(p) · GW(p)

You read Steve's comment reply to you, right? I really don't think XiXiDu's self-characterizations are a reliable indicator of XiXiDu's actual drives. I think you're being unduly harsh on XiXiDu for political reasons, where "unduly" means that you're incorrectly attributing motives to him in order to justify a political position that may or may not be correct either way, not that your decision to act as if he is purposefully trolling is itself an unjustified political move. I also think Eliezer's bias to interpret his enemies as innately evil and/or stupid is evil & stupid—I hope that it hasn't rubbed off on you, and that your agreement with him here is due to contingent personal factors.

...Hm. You almost certainly won't change my mind about this, and I'm afraid I won't change your mind, so perhaps we should agree to disagree. Politics is hard, let's go shopping.

Replies from: wedrifid
comment by wedrifid · 2012-04-19T13:21:10.815Z · LW(p) · GW(p)

You read Steve's comment reply to you, right?

If I recall, I read it, found it naive and surprisingly superficial, downvoted and even replied.

I really don't think XiXiDu's self-characterizations are a reliable indicator of XiXiDu's actual drives.

Self-characterizations are seldom reliable, but when they match observed behavior it is significant evidence. More to the point, the benefit of the doubt that people 'mean well' is undermined when their explicitly endorsed motives are also considered undesirable.

I think you're being unduly harsh on XiXiDu for political reasons

The subject is rather political - and given its only loose relevance to the point you were explaining to me, I don't have a reliable model of why you would choose to bring it up with a political assertion that I would clearly reject out of hand. We could have agreed on other parts of your point, and even on other applications.

I also think Eliezer's bias to interpret his enemies as innately evil and/or stupid is evil & stupid—I hope that it hasn't rubbed off on you,

And right here is one reason that makes me tempted to say we need new and better contrarians. Because it is utterly bizarre that I end up on a 'side' that leaves some people characterizing me as an Eliezer fanboy. I'm far more contrarian than I really ought to be and oppose Eliezer far more than most (hey - I do most things here more than most). The fact that I oppose (perceived) fools who corrupt the epistemic commons with highly undesirable practices, and that by default such individuals take out their angst on Eliezer, by no means makes me his acolyte.

If we had more contrarians who were remotely sane, then perhaps I could avoid appearing to be more or less mainstream (a position I'm rather unfamiliar with).

I'm not sure about the whole 'innately evil' thing, by the way. I can't speak for Eliezer, but for my part XiXiDu strikes me as merely the enemy of that which I am trying to protect (that is, my personal haven of at least tolerable discussion standards). I actually think I'd get along well with him in person. He's been up front with me even when he has opposed me or expressed his anger personally. I find that pleasant to deal with. I'd play board games with him (pretty much my primary standard for evaluating people). I just don't want his influence here. (This doesn't extend to all who pass for 'contrarians'. Many others have personality traits that I wouldn't get along with in person either.)

and that your agreement with him here is due to contingent personal factors.

Almost certainly. Selection effects and all.

Politics is hard, let's go shopping.

Sure, until the next time politics gets brought up. Or until the next time I choose to respond to evangelism of Catholicism or demonic UFOs with verbal disapproval.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T14:10:40.806Z · LW(p) · GW(p)

(You seem to have misinterpreted me (my intentions) on a few (relatively minor) points (which is probably why you don't have a reliable model). Not worth getting into, just thought I should flag it for purposes of calibration.

And yeah, I also wouldn't be as tempted to go out of my way to (correctly or incorrectly) defend XiXiDu's (alleged) motivations if we had more/better contrarians.)

Replies from: wedrifid
comment by wedrifid · 2012-04-19T14:12:29.686Z · LW(p) · GW(p)

You seem to have misinterpreted me (my intentions) on a few (relatively minor) points (which is probably why you don't have a reliable model).

Doesn't the causality mostly go the other way there?

comment by siodine · 2012-04-20T00:45:48.769Z · LW(p) · GW(p)

(Disclaimer: I happen to think the Catholics are right about pretty much everything, especially the important stuff.)

Why even say something like this without explanation? You should know such a statement is about as meaningful as a shit-stained homeless person screaming about the apocalypse, and that leads me to think you may be either socially incompetent, marginally insane, or having a bit of fun.