Making Fun of Things is Easy

post by katydee · 2013-09-27T03:10:28.700Z · LW · GW · Legacy · 76 comments

Making fun of things is actually really easy if you try even a little bit. Nearly anything can be made fun of, and in practice nearly anything is made fun of. This is concerning for several reasons.

First, if you are trying to do something, whether or not people are making fun of it is not necessarily a good signal of whether it's actually good. A lot of good things get made fun of, and a lot of bad things get made fun of, so being made fun of carries little evidence either way.[1] Optimally, only bad things would get made fun of, making it easy to determine what is good and bad - but this doesn't appear to be the case.
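To see how weak the signal is, here's a minimal Bayesian sketch in Python. The numbers are invented for illustration (the prior and both mockery rates are assumptions, not measurements): if most good things and most bad things both get mocked, observing mockery barely shifts your estimate.

    # Hedged illustration with made-up numbers: how much does "it gets
    # made fun of" tell you about whether a thing is good?
    p_good = 0.5            # prior probability the thing is good (assumed)
    p_mocked_if_good = 0.7  # assumed: most good things still get mocked
    p_mocked_if_bad = 0.9   # assumed: bad things get mocked somewhat more often

    # Bayes' rule: P(good | mocked) = P(good) * P(mocked | good) / P(mocked)
    p_mocked = p_good * p_mocked_if_good + (1 - p_good) * p_mocked_if_bad
    posterior_good = p_good * p_mocked_if_good / p_mocked

    print(f"P(good | mocked) = {posterior_good:.2f}")  # ~0.44, barely below the 0.5 prior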

Second, if you want to make something sound bad, it's really easy. If you don't believe this, just take a politician or organization that you like and search for some criticism of it. It should generally be trivial to find people that are making fun of it for reasons that would sound compelling to a casual observer - even if those reasons aren't actually good. But a casual observer doesn't know that and thus can easily be fooled.[2]

Further, the fact that it's easy to make fun of things means that a clever person can find themselves unnecessarily contemptuous of anything and everything. This sort of premature cynicism is a failure mode I've noticed in many otherwise very intelligent people. Finding faults with things is trivial, and it's a short step from "it's easy to find faults with everything" to "everything is bad." This is an undesirable mode of thinking - even if true, it's not particularly helpful.

[1] Whether or not something gets made fun of by the right people is a better indicator. That said, if you know who the right people are you usually have access to much more reliable methods.

[2] If you're still not convinced, take a politician or organization that you do like and really truly try to write an argument against that politician or organization. Note that this might actually change your opinion, so be warned.

76 comments

Comments sorted by top scores.

comment by Kaj_Sotala · 2013-09-28T14:27:12.862Z · LW(p) · GW(p)

Related on LW: Talking Snakes: A Cautionary Tale.

I changed my mind in a Cairo cafe, talking to a young Muslim woman. I let it slip during the conversation that I was an atheist, and she seemed genuinely curious why. You've all probably been in such a situation, and you probably know how hard it is to choose just one reason, but I'd been reading about Biblical contradictions at the time and I mentioned the myriad errors and atrocities and contradictions in all the Holy Books.

Her response? "Oh, thank goodness it's that. I was afraid you were one of those crazies who believed that monkeys transformed into humans."

I admitted that um, well, maybe I sorta kinda might in fact believe that.

It is hard for me to describe exactly the look of shock on her face, but I have no doubt that her horror was genuine. I may have been the first flesh-and-blood evolutionist she ever met. "But..." she looked at me as if I was an idiot. "Monkeys don't change into humans. What on Earth makes you think monkeys can change into humans?"

Also, on Yvain's old blog:

On r/atheism, a Christian-turned-atheist once described an "apologetics" group at his old church. The pastor would bring in a simplified straw-man version of a common atheist argument, they'd take turns mocking it ("Oh my god, he said that monkeys can give birth to humans! That's hilarious!") and then they'd all have a good laugh together. Later, when they met an actual atheist who was trying to explain evolution to them, they wouldn't sit and evaluate it dispassionately. They'd pattern-match back to the ridiculous argument they heard at church, and instead of listening they'd be thinking "Hahaha, atheists really are that hilariously stupid!"

[...]

There are lots of good arguments against libertarianism. I have collected some of them into a very long document which remains the most popular thing I've ever written. But when I hear liberals discuss libertarianism, they very often head in the same direction. They make a silly face and say "Durned guv'mint needs to stay off my land!" And then all the other liberals who are with them laugh uproariously. And then when a real libertarian shows up and makes a real libertarian argument, a liberal will adopt his posture, try to mimic his tone of voice, and say "Durned guv'ment needs to stay off my land! Hahaha!" And all the other liberals will think "Hahaha, libertarians really are that stupid!"

Many of you will recognize this as much like the Myers Shuffle. As long as a bunch of atheists get together and laugh at religious people who ask them to read theology before criticizing it, and as long as they have an easily recognizable name for the object of their hilarity like "Courtier's Reply", then whenever a religious person asks them to familiarize themselves with theology the atheist can just say "Courtier's Reply!" and all the other atheists will crack up and think "Hahaha, religious people really are that stupid!" and they gain status and the theist loses status and at no point do they have to even consider responding to the theist's objection.

This tendency reaches its most florid manifestation in the "ideological bingo games". See for example "Skeptical Sexist Bingo", feminist bingo, libertarian troll bingo, anti-Zionist bingo, pro-Zionist bingo, and so on. If you Google for these you can find thousands, which is too bad because every single person who makes one of these is going to Hell.

Let's look at the fourth one, "Anti-Zionist Bingo." Say that you mention something bad Israel is doing, someone else accuses you of being anti-Semitic, and you correct them that no, not all criticism of Israel is necessarily anti-Semitic and you're worried about the increasing tendency to spin it that way.

And they say "Hahahahahhaa he totally did it, he used the 'all criticism of Israel gets labeled anti-Semitic' argument, people totally use that as a real argument hahahaha they really are that stupid, I get 'B1' on my stupid stereotypical critics of Israel bingo!"

Replies from: fubarobfusco, Yosarian2
comment by fubarobfusco · 2013-10-02T19:52:30.121Z · LW(p) · GW(p)

The other day, I asked a close friend of mine who's active in feminist organizations to read Yvain's post on bingo cards so we could discuss it. Some things that came out of that discussion:

It's actually useful to recognize repeated themes in opposing arguments. We have to pattern-match in order to understand things. (See this comment for a similar point — "[P]eople need heuristics that allow them to terminate cognition, because cognition is a limited resource") Even if mocking or dismissing opposing arguments is bad, we shouldn't throw out categorization as a tool.

One reason feminists make bingo cards is to say to other feminists, "You're not alone in your frustration at hearing these arguments all the time." Bingo cards function as an expression of support for others in the movement. This seems to me to be a big part of what feminists get out of feminism: "No, you're not alone in feeling crappy about gender relations. So do I, and so do all these other people, too. So let's work on it together." For that matter, a lot of what secularists get out of the secularist movement seems to be "No, you're not alone in thinking this god stuff is bogus. Let's make our society safer and friendlier for people like us."

If you're actually trying to have a discussion with someone who makes an argument that sounds like a bingo square, instead of stopping and responding only to the bingo-square match, you can ask for a delta between the bingo square and the argument they're making. "Huh, it sounds like you're saying my argument is invalid because I'm a woman, and you believe women are less rational than men. Is that really what you're saying?"

On the other hand, there exist some arguments that are only made by people who are ignorant of a field. For instance, even if sophisticated theologians might not make the "birds don't hatch from reptile eggs" argument, many creationists do make it. It is exhausting to have to constantly struggle to bring ignorant people up to the level where they can make sophisticated arguments — especially if they are both ignorant and hostile. Without some mechanism to recognize and exclude ignorant people, it's impractical to have a higher-level discussion.

Related: One problem that seems to be more common in feminism discussions online than in other topics is that there are a heck of a lot of people who enter feminist forums and demand answers to their challenges ... when they have not done the background reading to understand the discussion that is taking place. So one possible reason feminists make more bingo cards than Zionists, anti-Zionists, libertarians, anarchists, etc. may simply be that they are the target of more "bingo-card-worthy" challenges from hostile ignorant people coming into their blogs, forums, casual conversations, etc.

(On that last point, I tried to imagine what Less Wrong would feel like if we had the same level of outright hostile outsider behavior that outspoken feminists regularly receive. "You dorks think belief has something to do with math? What you need is a real Christian to beat you senseless and drag you to church. Then you'd find out what belief is really about!")

Afterward, I realized that I have on hand a book that could be described as a very advanced bingo card: Mark Isaak's Counter-Creationism Handbook, which grew out of an FAQ for the Usenet newsgroup talk.origins. The entire book is a catalog of creationist arguments, classified by topic (e.g. "Biology", "Geology", "Biblical Creationism"), going so far as to give the arguments numeric catalog codes, e.g. "CB805: Evolution predicts a continuum of organisms, not discrete kinds." However, unlike the usual bingo-card format, Isaak gives for each argument a citation of one or more creationist writers actually using it, and a cited scientific rebuttal.

Replies from: Moss_Piglet, Eugine_Nier
comment by Moss_Piglet · 2013-10-02T22:44:27.403Z · LW(p) · GW(p)

You've made a lot of really good points about how these kinds of copy-paste responses can help identify trolls and build community solidarity, many of which hadn't really occurred to me. I hope you'll forgive me for not spending more space laying out where we agree; I don't like posts which could be summed up entirely with an upvote.

I do have to quibble with one point, though:

(On that last point, I tried to imagine what Less Wrong would feel like if we had the same level of outright hostile outsider behavior that outspoken feminists regularly receive. "You dorks think belief has something to do with math? What you need is a real Christian to beat you senseless and drag you to church. Then you'd find out what belief is really about!")

The issue here isn't whether feminists (or anyone else for that matter) are morally/emotionally justified in using these sorts of thought-terminating cliches, but whether these types of cliches lower the quality of discourse and make their users more resistant to genuine counter-argument/counter-evidence.

In that vein, I'd have to say that thick skin is not always an asset; you can "win" arguments by endurance, but you'll never find truth or allies that way. Most of the discussions I've had with feminists online could be mapped 1:1 to arguments I've had with fundamentalist Christians IRL, where you realize halfway through that you're speaking to someone who is scanning everything you say for keywords without ever actually thinking about it. It's exhausting and in the end both people are angrier without having achieved anything.

I realize it's much easier to say "be rational" than to do it when your back is up, and I certainly don't want to dismiss anyone's emotional pain, but ultimately giving in to the urge to irrationality is not something to be celebrated. Not everyone is a Sage with pure Apatheia, able to resist any temptation through will alone, but that doesn't mean we shouldn't strive to be reasonably objective. Objectivity is a dirty word in some circles, but if we don't at least try to overcome our biases we are ruled by them.

Replies from: fubarobfusco
comment by fubarobfusco · 2013-10-03T00:39:12.847Z · LW(p) · GW(p)

(Thanks for acknowledging the common ground; this response likewise deals only with the small area of disagreement.)

The issue here isn't whether feminists (or anyone else for that matter) are morally/emotionally justified in using these sorts of thought-terminating cliches,

Oh, I agree. My point in concocting the imaginary scenario of an embattled Less Wrong was to provide an alternative to the notion that feminism is fundamentally disposed to semantic stopsigns; namely that feminists find themselves in a situation where semantic stopsigns are unusually cognitively necessary (as opposed to morally or emotionally).

That is, it's not possible to usefully understand the cognitive situation of public feminism without thinking about the death threats, the rape threats, the "you just need a good fucking" responses, the "feminists are just ugly women" responses, and so on. It's not that these morally justify the dismissive attitude represented by bingo cards, nor that they emotionally explain (i.e. psychoanalyze) it; but that they make it cognitively and dialectically a necessary tool.

but whether these types of cliches lower the quality of discourse and make their users more resistant to genuine counter-argument/counter-evidence.

If the situational interpretation applies, then reducing the use of semantic stopsigns would mean less available cognitive power to respond to meaningful counter-evidence, not more.

Replies from: Moss_Piglet, army1987
comment by Moss_Piglet · 2013-10-03T21:18:56.072Z · LW(p) · GW(p)

If the situational interpretation applies, then reducing the use of semantic stopsigns would mean less available cognitive power to respond to meaningful counter-evidence, not more.

I think situation plays a role here as well though.

If I'm reading the comments section on Shakesville and see some rando come in with a basic question and get hit with the "I'm not your sherpa" card and a link to 101 materials, that's fine. You can't drop everything to debate every random dude who expresses a disagreement; I certainly don't appreciate it when people wander into the bio department and start up debates about irreducible complexity (yup, true story).

On the other hand, if I'm on GiantITP having a fun conversation about the best way to generate ability scores in Dungeons and Dragons (3d6 down the line, BTW) and someone goes full RadFem and derails the thread into talking about "biotruth" and privilege until it has to be locked, my jimmies get considerably rustled. Especially when I recognize a lot of the same rhetorical techniques I saw up in the first example.

That's the general point I was making; these tools are useful for defense, but unfortunately just as useful for offense.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-10-04T04:37:04.803Z · LW(p) · GW(p)

That's the general point I was making; these tools are useful for defense, but unfortunately just as useful for offense.

In fact I suspect much of the feminists' need for defense comes from the highly aggressive ways they tend to go on offense.

comment by A1987dM (army1987) · 2013-10-04T05:27:49.243Z · LW(p) · GW(p)

I guessed the “fundamentally” link would be to this.

comment by Eugine_Nier · 2013-10-04T05:35:59.062Z · LW(p) · GW(p)

Afterward, I realized that I have on hand a book that could be described as a very advanced bingo card: Mark Isaak's Counter-Creationism Handbook, which grew out of an FAQ for the Usenet newsgroup talk.origins.

The problem with the Bingo boards is that they're not even a list of "answers to straw arguments" since they're missing the answers. Specifically, feminists treat placing an argument (or even a statement) they don't like on a bingo card as an alternative to answering (or disproving) it. This is similar to the obnoxious debating technique of saying "I don't want to hear objection X" without bothering to actually address objection X.

One reason feminists make bingo cards is to say to other feminists, "You're not alone in your frustration at hearing these arguments all the time." Bingo cards function as an expression of support for others in the movement. This seems to me to be a big part of what feminists get out of feminism:

This nicely illustrates the source of the problem: What kind of arguments are the most frustrating? The kind where you don't have a good counterargument (possibly because the argument is in fact valid).

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-04T15:06:20.823Z · LW(p) · GW(p)

Sure, many people use "I don't want to hear X" or "pfft, X is a well-known fallacy" or "you really should read author X on this subject and come back when you've educated yourself" or many variations on that theme to dismiss arguments they don't actually have counterarguments for. Agreed.

This ought not be surprising... any strategy that knowledgeable people use to conserve effort can also be adopted as a cheap signal by the ignorant. And since ignorant people are in general more common than knowledgeable people, that also means I can dismiss all the people who use that cheap strategy as ignorant, including the knowledgeable ones, if I don't mind paying the opportunity costs of doing that. (Which in turn allows for cheap countersignaling by ignorant contrarians, and around and around we go.)

None of that is to say that all the people using this strategy are ignorant, or that there's no value in learning to tell the difference.

Many knowledgeable people find it frustrating to be asked to address the same basic argument over and over. A common response to this is to write up the counterargument once and respond to such requests with pointers to that writeup. In larger contexts this turns into a body of FAQs, background essays and concepts, etc., which participants in the conversation are expected to have read and understood, and are assumed to agree with unless they explicitly note otherwise.

LW does this with a number of positions... starting a conversation about ethics here and then turning out halfway through to not accept consequentialism, for example, will tend to elicit frustration. Non-consequentialists are not per se unwelcome, but failing to acknowledge that the community norm exists is seen as a defection, and people who do that will frequently be dismissed at that point as not worth the effort. Similar things are true of atheism, of the computational model of consciousness, and a few other things.

It's not an unreasonable way to go.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-10-05T05:28:47.152Z · LW(p) · GW(p)

My point is that there is a difference between an important FAQ and a bingo card.

Also, even with an FAQ one needs to be willing to engage in further discussion when people point out problems with the answers there; e.g., I don't entirely accept consequentialism (or many of the standard premises here, for that matter) and have generally been able to have civilized discussions on the topics in question.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-05T15:00:47.096Z · LW(p) · GW(p)

one needs to be willing to engage in further discussion when people point out problems with the answers there

I don't really agree. Up to a point, yes, but one reaches that point quickly.

For example, we get theists every once in a while insisting that we engage in further discussion when they point out problems with our reasons for atheism. I often engage them in further discussion, as do others, although I wouldn't say we need to... it's not like theism is some kind of obscure philosophy that we're simply not acquainted with the compelling arguments for.

If we instead got one every few days, I would not engage them, and I would also recommend that others not do so; at that point silent downvotes would be a superior response.

Reasonable people can disagree about where exactly the threshold between those points is best drawn, but I think it's clear that it needs to be drawn somewhere.

comment by Yosarian2 · 2013-09-30T02:07:06.921Z · LW(p) · GW(p)

There are lots of good arguments against libertarianism. I have collected some of them into a very long document which remains the most popular thing I've ever written. But when I hear liberals discuss libertarianism, they very often head in the same direction. They make a silly face and say "Durned guv'mint needs to stay off my land!" And then all the other liberals who are with them laugh uproariously. And then when a real libertarian shows up and makes a real libertarian argument, a liberal will adopt his posture, try to mimic his tone of voice, and say "Durned guv'ment needs to stay off my land! Hahaha!" And all the other liberals will think "Hahaha, libertarians really are that stupid!"

This is an interesting example, because there is an entire subreddit of libertarians doing the exact same thing, making fun of strawman versions of anti-libertarian arguments just to train themselves to not listen to the actual arguments.

http://www.reddit.com/r/whowillbuildtheroads/

It's interesting to see the exact same technique being used as a thought-killing device by people on both sides.

comment by RomeoStevens · 2013-09-27T03:36:24.100Z · LW(p) · GW(p)

"everything is bad" is only a crappy thinking mode when unaccompanied by the obvious next step of "optimize all the things."

Replies from: katydee, atucker, hyporational
comment by katydee · 2013-09-27T06:11:59.644Z · LW(p) · GW(p)

I disagree. "Bad" is a value judgement that is not optimized for maximum utility. In my opinion, there's usually little reason (signaling aside) to make fun of something rather than provide constructive criticism.

While it's certainly possible to use "bad" as a shortcut for "needs optimizing," the word "suboptimal" already means that and doesn't carry the same pejorative connotations.

comment by atucker · 2013-09-29T14:35:04.309Z · LW(p) · GW(p)

You would want your noticing that something is bad to indicate, in some way, how to make the thing better. You want to know what in particular is bad and can be fixed, rather than the less informative "everything". If your classifier triggers on everything, it tells you less on average about any given thing.
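A minimal sketch of that last point, with invented detection rates (all numbers below are assumptions for illustration): measured as mutual information, a fault-finder that triggers on nearly everything tells you almost nothing about any particular thing.

    import math

    def mutual_information(p_bad, p_fire_if_bad, p_fire_if_good):
        """Mutual information (bits) between "classifier fires" and "thing is bad"."""
        joint = {}
        for bad, p_b in ((True, p_bad), (False, 1 - p_bad)):
            p_fire = p_fire_if_bad if bad else p_fire_if_good
            for fire, p_f in ((True, p_fire), (False, 1 - p_fire)):
                joint[(bad, fire)] = p_b * p_f
        info = 0.0
        for (bad, fire), p in joint.items():
            if p == 0:
                continue
            p_b = p_bad if bad else 1 - p_bad
            p_f = sum(v for (b, f), v in joint.items() if f == fire)
            info += p * math.log2(p / (p_b * p_f))
        return info

    # A discriminating critic vs. one who finds fault with nearly everything:
    print(mutual_information(0.5, p_fire_if_bad=0.9, p_fire_if_good=0.1))   # ~0.53 bits
    print(mutual_information(0.5, p_fire_if_bad=1.0, p_fire_if_good=0.98))  # ~0.01 bits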

comment by hyporational · 2013-09-28T05:33:12.961Z · LW(p) · GW(p)

The next depressing step is realizing you can't really do that. Maybe optimize the most important thing and pretend everything else is awesome?

comment by BaconServ · 2013-09-27T05:46:00.939Z · LW(p) · GW(p)

If you can correct your beliefs by thinking up a good argument against them, isn't that a good thing? I'm unsure why you're terming it "warning."

Replies from: katydee, satt, TheOtherDave
comment by katydee · 2013-09-27T06:07:47.859Z · LW(p) · GW(p)

Studies indicate that in some cases, writing arguments causes you to later believe what you wrote, even if you didn't believe it at the time.

Replies from: BaconServ
comment by BaconServ · 2013-09-27T06:27:13.867Z · LW(p) · GW(p)

That sounds a lot like overcoming bias rather than creating it. If it happened in the general case, I'd suspect bias was in play, but if it's only some cases, it sounds like someone just corrected their incorrect beliefs without having to debate it out with an external party.

Replies from: None, Viliam_Bur
comment by [deleted] · 2013-09-27T08:44:08.004Z · LW(p) · GW(p)

Two worrying observations from reality:

  1. Propaganda seems to work. At least, many governments that stay in power put a lot of effort into it, and advertising is a massive industry.

  2. False memories are incredibly easy to implant via suggestion.

Don't believe everything you think.

Replies from: BaconServ
comment by BaconServ · 2013-09-27T21:52:11.726Z · LW(p) · GW(p)

Wow. This is incredible. I am advocating sitting down and thinking critically, and asserting that warning against doing so is contrary to rationality. You're committing severe fallacies here and getting far more upvotes than I am; not reforming your own opinions with your own rationality is more popular than actually doing so.

Let's suppose you've sat down, played Devil's advocate, and decided the most logical stance is the one you've just now taken the time to realize; it is literally the best of your judgment at this exact moment. Allow me to extrapolate two illustratively extreme possibilities based on the premise of your first point:

The original opinion is the result of propaganda. You have lived your life among your peers (who are also influenced by the same propaganda) and through their opinions—or through the opinions of the ones you've liked better, or were more willing to listen to—you've come to the conclusion that your original opinion was only natural/obvious/rational/reasonable/whatever positive description. (Let's say, hypothetically, you were of the opinion that Christianity was a useful premise to build a life around because it possesses "knowledge of God.") You sit down one day to question yourself (or God, or whatever else) and, after a bit of thinking, realize that your opinion is full of contradictions. You immediately recognize your folly and promptly consider yourself a bastion of rationality in a sea of fools. You log on to LessWrong to upvote anyone making arguments against the guy advocating the Devil (because you still believe in him) and go back to resting on your laurels, fully convinced you didn't make any other severe mistakes of rationality while you were growing up. You later donate to the anti-faith militia because really belief in God is the only real mind-killer.

The new opinion is the result of propaganda. Life is tough, and it just seems to be getting tougher. You're depressed and stressed and the idea of a psychotherapist just makes you feel like you're that much more pathetic and you'd rather not admit that to yourself. While walking across a bridge you stop for a moment to look anywhere but forward into the path that you (and so many others) have walked so many times that it's worn down to a rut. You look outwards, toward the side of the bridge, trying to appreciate what else is out there. ("That I'll never be a part of," you think.) For a split second that you're barely aware of, the thought to jump off the bridge happens somewhere within your brain. You go home and within 24 hours you're standing here again. The thought is a bit stronger this time. A day in, a day out, a day in... Day in, day out, you almost unconsciously construct a barrier to prevent the thought from ever reaching your consciousness. Eventually it occurs to you anyway. You instantly repulse the thought. The nearby church fails to take advantage of your situation and show you the kind of kindness that would make you instantly question if you've really been living your life correctly—a thought you didn't really need any help thinking as it is. You stare in disgust at them even more virulently than you have in every day passed. The approaching Muslim wearing a concerned look sees the disgust on your face and instantly decides now probably isn't the time to try reaching out to you. The hobo thanks you for your contribution and says, "You'll be alright man." You nod and make your way home, consciously deciding against joining his religion—whatever that is. You log on to LessWrong, ignore the warning at the bottom of the post, and decide to criticize atheism. Your mind instantly caves to the propaganda and you join the church near the bridge the next day.

If only you'd been indoctrinated in your youth and discovered that God doesn't exist in your rebellious self-forming teenager years!

Of course these are extreme examples of how propaganda can influence the opinions you form. The opinion you end up criticizing could be anything. Why, using this method, you might come to any conclusion at all, however ridiculous it is! If only future-actively-questioning-you were as intelligent as past-thinking-you!

(Votes in this comment chain as of press time: 2: 100%+, 6: 100%+, 1: 67%+, 6: 100%+. By my analysis, two people agreed with me, one disagreed, and six prefer not reforming their existing opinions.)

I had better get severely downvoted for this... Or else severely upvoted. ~__~

How can you value specific opinions over the process that generated them in the first place?!

comment by Viliam_Bur · 2013-09-27T10:38:16.392Z · LW(p) · GW(p)

It depends on how likely your original opinion is true, how likely you are to change a wrong opinion to a correct opinion (using this exercise), and how likely you are to change correct opinion to wrong opinion (using this exercise).

If your opinions are good on average, and if the process of arguing for a belief has approximately the same chance to move you in either direction, then it is a harmful experiment. If the process is more likely to move you in the correct direction than in the wrong one, and your original opinions are not too great, then you would benefit from this process on average.

Unfortunately, I don't know any of these numbers. But it should be possible to make an experiment where the correct answer is known to the researchers, but half of the population believes the incorrect answer anyway -- to see whether the exercise would be more likely to move them in the correct direction. This would have to be repeated for different kinds of topics, to make sure the effect is not specific to one of them.
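A minimal sketch of this trade-off in Python, with invented numbers (the base rate and both flip probabilities are assumptions for illustration only):

    def expected_accuracy_gain(p_right, p_fix_wrong, p_break_right):
        """Expected change in the fraction of correct beliefs after the exercise.

        p_right:       base rate of your beliefs being correct (assumed)
        p_fix_wrong:   chance the exercise flips a wrong belief to a correct one (assumed)
        p_break_right: chance it flips a correct belief to a wrong one (assumed)
        """
        return (1 - p_right) * p_fix_wrong - p_right * p_break_right

    # Good priors plus a direction-blind process: the exercise hurts on average.
    print(expected_accuracy_gain(0.8, p_fix_wrong=0.3, p_break_right=0.3))   # -0.18
    # Mediocre priors plus a truth-tracking process: it helps.
    print(expected_accuracy_gain(0.55, p_fix_wrong=0.4, p_break_right=0.1))  # +0.125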

comment by satt · 2013-09-28T03:21:09.766Z · LW(p) · GW(p)

If you can think up a good argument against your own beliefs, isn't that a good thing?

I think katydee's post makes a valid point whether or not the answer to that question is yes. Making fun of beliefs often relies on making a bad argument against them instead of a good argument; in those cases the goodness of good arguments doesn't matter.

Replies from: BaconServ
comment by BaconServ · 2013-09-28T05:13:38.868Z · LW(p) · GW(p)

This post was written under the premise of resolving inferential silence. The author has opted to leave the post as drafted for optimum clarity, despite known flaws. Please bear this in mind.

I had hoped I could explain what was wrong with that point in relation to my point without outright saying it had no bearing on the point I raised because it provides absolutely no reasoning about whether the before-opinion or the after-opinion is correct. Saying a response is entirely orthogonal to the thing it is responding to sort of just seems way too close to calling the author of the response a complete idiot in light of the cognitive biases inherent to the topic. I like to think that's an ad hominem and it's epistemologically incorrect and corrosive to discussion, so I tried to avoid it. Do you think I should edit my reply to explicitly assert, "That has literally nothing to do with what I said," regardless? My perception of what is offensive may be miscalibrated, so please advise.

Although, reading this all over and over and over again, it occurs to me katydee's reply to my point may not have been a defensive reply at all, but rather an aside: "I read something once that warned about the possibility, so the connotation of it being a bad thing was a cached thought in my mind that I didn't remove. Thank you, I'll rephrase that part." Of course, some people prefer to not edit things, so I can't exactly claim the lack of editing is evidence that that wasn't the intention of that reply. Nevertheless, the votes paired with that reply seemed to me to strongly indicate a significant number of people that interpreted my reply as questioning whether or not this process can really alter your opinions. Ha, rereading, it seems this could well be the case! I'll edit that now, for clarity.

So now I'm left with a question: If I meant one thing, and others read another thing, what went wrong?

I'm seeing two likely interpretations of my original literal message:

  • "If you can [reason abstractly], isn't that a good thing?"
  • "If you can [correct yourself], isn't that a good thing?"

The latter is what I intended, of course, and explains the inferential gap of my frustration. It is now possible for me to interpret the replies to mine that had initially appeared to be rejecting rationality as a useful tool in a different way. This at least alleviates the stress of the gap in my mind. It may be that my original message was received and the interpretations I assigned were not grounded.

On the other hand, if the former message is the one that was received... katydee's reply would be an allusion to how the art of reasoning itself can go wrong. The same applies—in a limited sense—to jaibot's reply. Of course, that's what I was already interpreting their posts to suggest. Perhaps they were trying to ever-so-gently inform me that my own reasoning could fail? This seems to explain the confusion very well. Naturally, this seems exceptionally odd to me, because my understanding of reasoning is rooted in considering all interpretations. Even and especially the irrational ones that don't seem to make much sense. It is this very premise of questioning my initial thought by considering its opposite that I consider to be the core basis of all reasoning/rationality/logic itself. My first-order comment was made with the solid understanding that blindly reasoning has no reason to produce correct results. It didn't make sense, at the time, to consider that they might know something I didn't, so I failed to realize that the problem was their maps not matching up with the territory I had intended to create in comment form: I had constructed territory with two distinct mappings!

Yeah, I'm getting bored wasting my time explaining verbosely. There's no real way to resolve this situation in my mind that doesn't ultimately result in, "Don't take it personally; authors speak to the reader, not to you." Is this the same dynamic behind the reddit "hive mind"? Does this happen wherever comment vote-rating systems are used? Has anyone actually observed this as being constructive to discussion? Is each individual trying to get the attention of the LessWrong collective consciousness? It's like... It's almost as if... Does a majority of LessWrongers really think that language is somehow suboptimal and that that is the failure point in communication they experience, rather than any number of cognitive biases and mind projection fallacies they themselves committed in the exchange?

Is it really that simple? Have (most of) you guys just not yet realized you're no less infected by bias than the average layman?

Please give me something to go on here, because I'm really not getting why I can communicate so effectively with literally every English-speaking community other than LessWrong. Is the idea that you make communication hard for yourselves somehow unthinkable?

Replies from: satt
comment by satt · 2013-10-15T20:17:03.400Z · LW(p) · GW(p)

I'm having trouble thinking up a useful response to your comment because I don't really understand it as a whole. I understand most of the individual sentences, but when I try to pull them all together I get confused. So I'll just respond to some isolated bits.

I had hoped I could explain what was wrong with that point in relation to my point

This reads like you reckon katydee & I were making the same point, while I'd thought I was making a different point that wasn't a non sequitur. (Your comment seemed to me to rely on an implicit premise that making fun of things involves thinking of a good argument against them, so I disputed that implicit premise, which I'd read into your comment. But it looks like we mutually misunderstood each other. Ah well.)

Saying a response is entirely orthogonal to the thing it is responding to sort of just seems way too close to calling the author of the response a complete idiot in light of the cognitive biases inherent to the topic.

I'm not sure I follow and I don't think I agree.

Do you think I should edit my reply to explicitly assert, "That has literally nothing to do with what I said," regardless?

I probably would've if I were in your shoes. Even if katydee disagreed, the resulting discussion might have clarified things for at least one of you. (I doubt it's worth making that edit now as this conversation's mostly died down.) Personally, I'm usually content to tell someone outright "that's true but irrelevant" or some such if they reply to me with a non sequitur (e.g.).

I'm seeing two likely interpretations of my original literal message:

  • "If you can [reason abstractly], isn't that a good thing?"
  • "If you can [correct yourself], isn't that a good thing?"

I interpreted it as saying the second one too. But in this context that point sounded irrelevant to me: if katydee warns someone that style S of argument is dangerous because it can make bad arguments sound compelling, a response along the lines of "but isn't it good if you can correct yourself by thinking of good arguments?" doesn't seem germane unless it leans on an implicit assumption that S is actually a reliable way of generating good arguments. (Without that qualifying assumption one could use the "but isn't it good if you can correct yourself" argument to justify any old method of generating arguments, even generating arguments at random, because sometimes it'll lead you to think of a good argument.)

Replies from: BaconServ
comment by BaconServ · 2013-10-16T00:33:59.515Z · LW(p) · GW(p)

I believe I have a bad habit of leaping between points because I understand them to be more directly obvious than they commonly are. I think it might clarify things considerably if I start from the very beginning.

When I first saw Making Fun of Things is Easy as a heading, I was pleased, because I have long recognized that numerous otherwise intelligent people have an extremely disuseful habit of refusing to spend thought on things—even to the point of failing to think about it enough to make a rational assessment of the usefulness of thinking about it—by dismissing them as "hilariously wrong." If LessWrong is getting to the point where they're starting to recognize positive emotional responses (laughter) can be disuseful, then I have reason to celebrate. Naturally, I had to read the article and see if my suspicion—that LessWrong is actually getting less wrong—was correct.

A large part of the damage caused by laughing things into mental obscurity is that the laughing parties lose their ability to think rationally about the subject they are laughing at. The solution to this is to stop laughing, sit down, and take ideas that you consider ridiculous as potentially holding value in being even preliminarily considered. Ideas like telepathy, for example. It's bothersome that a community of rationalists should be unable to mentally function without excessive disclaiming. I realize this isn't actually the case, but that members still feel the need to specify "this-isn't-nonsense" is telling of something beyond those individual members themselves.

So I read the article, and it's great. It touches on all the points that need to be touched upon. Then, at the very last sentence on the very last line at the very last word, I see a red flag. A warning about how your opinions could change. Good golly gosh. Wouldn't that be ever so horrible? To have my own ability to reason used against me, by my own self, to defeat and replace my precious now-beliefs? Oh what a world!

...You can begin to see how I might derive frustration from the fact of the very problem caused by epistemic laughter was explicitly warned against solving: "Don't make fun, but still be wary of taking the stance seriously; you might end up with different beliefs!!"

I figured I really ought to take the opportunity to correct this otherwise innocuous big red flag. I suppose my original phrasing was too dualistic in meaning to be given the benefit of the doubt that I might have a point to be making. No no, clearly I am the one who needs correcting. What does it say about this place that inferential silence is a problem strong enough to merit its own discussion? Of course the ensuing comments made and all the questions I asked were before I had identified the eye of LessWrong's focal mass. It's a ton easier to navigate now that I know the one localized taboo that literally every active member cannot stand is the collective "LessWrong" itself. I can be vicious and vile and impolite and still get upvoted no problem, because everyone's here to abdicate responsibility of thought to the collective. I can attack any one person, even the popular ones, and get upvoted. The cult isn't bound to any one individual or idea that isn't allowed to be attacked. It is when the collective itself is attacked that the normal human response of indignation is provoked. Suffice to say all my frustration would have been bypassed if I had focused more on arguing with the individuals rather than the mass of the collective where the actual problem here lies.

To get back to your actual argument: Any method of generating an argument is useful to the point of being justified. Making fun of things is an epistemic hazard because it stops that process. Making fun of things doesn't rely on making bad arguments against them; it relies on dismissing them outright before having argued, discussed, or usefully thought about them at all in the first place. Bad arguments at least have the redeeming quality of being easy to argue against/correct. Have you ever tried to argue against a laugh and a shrug?

A list, of the most difficult things to argue against:

  1. "Rationalized" apathy.
  2. Rational apathy.
  3. Apathy.
  4. A complex argument.
  5. An intelligent argument.
  6. A well-thought-out argument.
  7. A well-constructed/prepared argument.
  8. ...
  9. A bad argument.
  10. Sophomoric objections.

Each of these comes in two flavors: Vanilla and meme. I'm working against memetic rationalized apathy in a community of people who generally consider themselves generally rational. If I were even a fragment less intelligent, this would be a stupid act.

Replies from: satt
comment by satt · 2013-10-16T02:49:10.697Z · LW(p) · GW(p)

I find that reply easier to follow, thanks.

The last sentence of katydee's post doesn't raise a red flag for me, I guess because I interpret it differently. I don't read it as an argument against changing one's opinion in itself, but as a reminder that the activity in footnote 2 isn't just an idle exercise, and could lead to changing one's mind on the basis of a cherry-picked argument (since the exercise is explicitly about trying to write an ad hoc opposing argument — it's not about appraising evidence in a balanced, non-selective way). Warning people about changing their minds on the basis of that filtered evidence is reasonable.

I'm not too worried that inferential silence is a big enough problem on LW to merit its own discussion. While it is a problem, it's not clear there's an elegant way to fix it, and I don't think LW suffers from it unusually badly; it seems like something that occurs routinely whenever humans try to communicate. As such the presence of inferential silence on LW doesn't say anything special about LW.

The paragraph about LW being a cult where "everyone's here to abdicate responsibility of thought to the collective" comes off to me as overblown. I'm not sure what LW's "memetic rationalized apathy" is, either.

It looks like we interpret "making fun" differently. To me "making fun" connotes a verbal reaction, not just a laugh and a shrug. "Ha ha, get a load of this stupid idea!" is making fun, and hinges on the implicit bad (because circular) argument that an idea's bad because it's stupid. But a lone laugh or an apathetic shrug isn't making fun, because there's no real engagement; they're just immediate & visible emotional reactions. So, as I see it, making fun often does rely on making bad arguments, even if those arguments are so transparently poor we hardly even register them as arguments. Anyway, in this paragraph, I'm getting into an argument about the meaning of a phrase, and arguments about the meanings of words & phrases risk being sterile, so I'd better stop here.

Replies from: BaconServ
comment by BaconServ · 2013-10-16T06:43:16.064Z · LW(p) · GW(p)

The problem is that most opinions people hold, even those of LessWrong's users, are already based on filtered evidence. If confirmation bias wasn't the default state of human affairs, it wouldn't be a problem so noteworthy as to gain widespread understanding. (There are processes that can cause illegitimate spreading, but that isn't the case with confirmation bias.) When you sit down to do the exercise and realize legitimate arguments (not merely ad hoc arguments) against your own views, you're overcoming your confirmation bias (default) on that issue for the first time. This is why it is important to respect your partner in debate; without respecting their ability to reason and think things you haven't, their mere disagreement with your permanent correctness directly causes condescension. Nonsensical ad-hoc arguments are more useful than no argument whatsoever; one has the quality of provoking thought. The only way otherwise rational people come to disagree is from the differing priors of their respective data sets; it's not that the wrong one among them is thinking up nonsense and being negatively affected by it.

The truth is I don't really read comments on LessWrong all that much. I can't stand it. All I see being discussed and disagreed over are domain-specific trivial arguments. I recall someone on IRC once criticized that they hadn't seen evidence that Eliezer_Yudkowsky ever really admits being wrong in the face of superior arguments. This same concept applies to the entirety of LessWrongers; nobody is really changing their deep beliefs after "seeing the light." They're seeing superior logic and tactics and adding those onto their model. The model still remains the same, for the most part. Politics is only a mind-killer insofar as the participants in the discussion are unable to correct their beliefs on physically and presently important issues. That there exist subjects that LessWrong's users ban themselves from participation in is class A evidence of this. LessWrong only practices humble rationality in the realm of things that are theoretically relevant. The things that are actually shown to matter are taboo to even bring up because that might cause people to "realize" (confirmation bias) that they're dealing with people they consider to be idiots. Slow progress is being made in terms of rationality by this community, but it is so cripplingly slow by my standards that it frustrates me. "You could do so much better if you would just accept this one single premise as plausible!" The end result is that LessWrong is advancing, yes, but not at a pace that exceeds the bare minimum of the average.

Everything this community has done up to now is a good warm-up, but now I'd like to start seeing some actual improvement where it counts.

It's not the mere existence of inferential silence that is the issue here. Inferential silence exists everywhere on digital forums. What's relevant is the exact degree to which the inferential silence occurs. For example, if nobody commented, upvoted, or downvoted, then LessWrong is just a disorganized blog. If all the topics worth discussing have their own posts and nobody posts anything new, and everyone stops checking for new content, the site is effectively dead. The measuring of inferential silence has the same purpose as asking, "Is this site useful to me?" Banned subjects are a severe form of inferential silence. We're rationalists. We ought to be able to discuss any subject in a reasonable manner. Other places, when someone doesn't care about a thread, they just don't bother reading it. Here, you're told not to post it. Because it's immoral to distract all these rationalists who are supposed to be advancing the Singularity with temptation to debate (seemingly) irrelevant things. LessWrong places next to no value on self-restraint; better to restrain the world instead.

This is the part where things get difficult to navigate.

I predicted your reaction of considering the coherency of the collective as overblown. I'd already started modeling responses in my head when I got up from the computer after posting the comment. I don't predict you're terribly bothered by a significant degree of accuracy to the prediction; rather, I predict that, to you, it will seem only obvious that I should have been able to predict that. This will all seem fairly elementary to you. What I'm unsure about is the degree to which you are aware that you stand out from the rest of these folks. You're exhibiting a deeper level of understanding of the usefulness of epistemic humility in bothering to speak to me and read my comments in the way that you are. You offer conversational pleasantries and pleasant offers for conversation, but that can be both a consciously recognized utility or an unconscious one, with varying degrees in between. I can already tell, though, what kind of path you've been on with that behavior. You'll have seen and experienced things that most other LWers have not. Basically, what sets you apart is that you don't suck at conversation.

It's not that I predicted that you'd disagree or be unsure about what I was referring to, it's more that the idea I understand, by virtue of being able to understand it, inherently lets me know that you will immediately agree if you can actually grasp the concept. It's not that you'll immediately see everything I have; that part will take time. What will happen is that you'll have grasped the concept and the means to test the idea, though you'll feel uncertain about it. You'll of course be assessing the data you collect in the opposite of the manner that I do; while I search for all the clues indicating negatives, you'll search for clues and reasoning that leave LessWrong with less blame—or maybe you'll try to be more neutral about it (if you can determine where the middle ground lies). I wrote my last comment because I'd already concluded that you'll be able to fully grasp my concept; but be forewarned: Understanding my lone hypothesis in light of no competing hypotheses could change your beliefs! (An irreparable change, clearly.)

I've more to say, but it won't make sense to say it without receiving feedback about the more exact mechanics of your stage of grasping my concept. I predict you won't notice anything out of the ordinary about the thoughts you'll have thought in reading/responding to/pondering this. These predictions, again, will appear to be mundane.

Replies from: satt
comment by satt · 2013-10-17T00:43:48.336Z · LW(p) · GW(p)

When you sit down to do the exercise and realize legitimate arguments (not merely ad hoc arguments) against your own views, you're overcoming your confirmation bias (default) on that issue for the first time.

That's not obvious to me. I'd expect LWers to be the kind of high-NFC/TIE people who try to weigh evidence in a two-sided way before deciding to like a particular politician or organization in the first place, and would probably, having made that decision, try to remain aware that opposing evidence exists.

Nonsensical ad-hoc arguments are more useful than no argument whatsoever; one has the quality of provoking thought.

I'm less optimistic. While nonsensical ad hoc arguments do provoke thoughts, those thoughts are sometimes things like, "Jesus, am I doomed to hear that shitty pseudo-argument every time I talk to people about this?" or "I already pre-empted that dud counterargument and they ignored me and went ahead and used it anyway!" or "Huh?!", rather than "Oh, this other person seems to have misunderstanding [X]; I'd better say [Y] to try disabusing them of it".

The only way otherwise rational people come to disagree is from the differing priors of their respective data sets; it's not that the wrong one among them is thinking up nonsense and being negatively affected by it.

Unfortunately a lot of arguments don't seem to be between "otherwise rational people", in the sense you give here.

All I see being discussed and disagreed over are domain-specific trivial arguments.

But I've seen (and occasionally participated in) arguments here about macroeconomics, feminism, HIV & AIDS, DDT, peak oil, the riskiness of the 80,000 Hours strategy of getting rich to donate to charity, how to assess the importance of technologies, global warming, how much lead exposure harms children's development, astronomical waste, the global demographic transition, and more. While these are domain-specific issues, I wouldn't call these trivial. And I've seen broader, nontrivial arguments about developing epistemically rationality, whether at the personal or social level. (What's the right tradeoff between epistemic & instrumental rationality? When should one trust science? How does the social structure of science affect the reliability of the body of knowledge we call science? How does one decide on priors? What are good 5-second skills that help reinforce good rationalist habits? Where do the insights & intuitions of experts come from? How feasible is rationality training for people of ordinary IQ?)

This same concept applies to the entirety of LessWrongers; nobody is really changing their deep beliefs after "seeing the light." They're seeing superior logic and tactics and adding those onto their model. The model still remains the same, for the most part.

That's too vague for me to have a strong opinion about. (Presumably you don't literally mean "nobody", and I don't know precisely which beliefs you're referring to with "deep beliefs".) But there are possible counterexamples. People have decided to dedicate years of their lives (and/or thousands of dollars) to attacking the problem of FAI because of their interactions with LW. I dimly recall seeing a lurker post here saying they cured their delusional mental illness by internalizing rationality lessons from the Sequences.

The things that are actually shown to matter are taboo to even bring up because that might cause people to "realize" (confirmation bias) that they're dealing with people they consider to be idiots.

That's a bit of an unfair & presumptuous way to put it. It's not as if LW only realized human brains run on cognitive biases once it started having flamewars on taboo topics. The ubiquity of cognitive bias is the central dogma of LW if anything is; we already knew that the people we were dealing with were "idiots" in this respect. For another thing, there's a more parsimonious explanation for why some topics are taboo here: because they lead to disproportionately unpleasant & unproductive arguments.

Everything this community has done up to now is a good warm-up, but now I'd like to start seeing some actual improvement where it counts.

Finally I can agree with you on something! Yes, me too, and we're by no means the only ones. (I recognize I'm part of the problem here, being basically a rationalist kibitzer. I would be glad to be more rational, but I'm too lazy to put in the actual effort to become more rational. LW is mostly an entertainment device for me, albeit one that occasionally stretches my brain a little, like a book of crosswords.)

We're rationalists. We ought to be able to discuss any subject in a reasonable manner.

Ideally, yes. Unfortunately, in reality, we're still human, with the same bias-inducing hot buttons as everyone else. I think it's legitimate to accommodate that by recognizing some topics reliably make people blow up, and cultivating LW-specific norms to avoid those topics (or at least damp the powder to minimize the risk of explosion). (I'd be worried if I thought LWers wanted to "restrain the world", as you grandiosely put it, by extending that norm to everywhere beyond this community. But I don't.)

I predicted your reaction of considering the coherency of the collective as overblown. [...] I don't predict you're terribly bothered by a significant degree of accuracy to the prediction; rather, I predict that, to you, it will seem only obvious that I should have been able to predict that. This will all seem fairly elementary to you.

Yeh, pretty much!

What I'm unsure about is the degree to which you are aware that you stand out from the rest of these folks. You're exhibiting a deeper level of understanding of the usefulness of epistemic humility in bothering to speak to me and read my comments in the way that you are.

This is flattering and I'd like to believe it, but I suspect I'm just demonstrating my usual degree of getting the last word-ism, crossed with Someone Is Wrong On The Internet Syndrome. (Although this is far from the worst flare-up of those that I've had. Since then I've tried not to go on & on so much, but whether I've succeeded is, hrrrm, debatable.)

I've more to say, but it won't make sense to say it without receiving feedback about the more exact mechanics of your stage of grasping my concept. I predict you won't notice anything out of the ordinary about the thoughts you'll have thought in reading/responding to/pondering this.

Right again. I still don't have any idea what your concept/hypothesis is (although I expect it'll be an anticlimax after all this build-up), but maybe what I've written here gives you some idea of how to pitch it.

Replies from: BaconServ, BaconServ
comment by BaconServ · 2013-10-17T04:03:34.780Z · LW(p) · GW(p)

[Comment length limitation continuance...]

Although I expect it'll be an anticlimax after all this build-up.

It will, despite my fantasies, be anticlimactic, as you predict. While I predicted this already, I didn't predict that you would consciously and vocally predict this yourself. My model updates thus: Though I was not consciously aware that stating my predictions might invite you to state your own, I am now aware that such a result is possible. What scenarios the practice is useful in, why it works, how it fails, when it does, and all such related questions are unknown. (This could be why my brain didn't think to inform my consciousness of the possibility, now that I think about it in writing this.) A more useful tool is that I can now read that prediction as a strong potential from a state of it not having been stated; I can now read inferential silence slightly better. If not for general contexts, then at least for LessWrong to whatever degree. Most useful of all the data packed into that sentence is this: I now know, dividing out the apathy, carelessness, and desires for the last word and Internet Correction, what you're contextually looking for in this conversation. Effectively, I'm measuring your factored interest in what I have to say. The next factor to divide out is the pretense/build-up.

Maybe what I've written here gives you some idea of how to pitch it.

Certainly so, insofar as you were willing to reply. Though you didn't seem like it and there was no evidence, the thought crossed my mind that I'd gone too far and you were just not going to bother responding. I didn't think I exceeded your boundaries, but I've known LessWrongers to conceal their true standards, in order to more fully detect "loonies" or "crackpots."

There's no sentence I can form (Understand style) that will stun you with sheer realization (rather than, more likely, convince you of my lessened intelligence). This is primarily because building the framework for such realizations results in a level of understanding that makes the lone trigger assertion seem mundane by conceptual ambiance. That is, I predict that there is nothing I can say about my predictions of you that you will both recognize as accurate while also recognizing as extraordinary. I have one more primary prediction to make, but I'll keep it to myself for the moment.

Replies from: satt
comment by satt · 2013-10-21T03:23:00.169Z · LW(p) · GW(p)

I predict that there is nothing I can say about my predictions of you that you will both recognize as accurate while also recognizing as extraordinary.

Yes, I expect whatever big conclusion you're winding up to will prove either true & trivial, or surprising & false. (I am still a bit curious as to whether you'll take the boring route or the crackpot route, although my curiosity is hardening into impatience.)

comment by BaconServ · 2013-10-17T03:39:16.742Z · LW(p) · GW(p)

Do you have any actual reason (introspection doesn't count) to "expect LWers to be the kind of high-NFC/TIE people who try to weigh evidence in a two-sided way before deciding"? I'm not asking if you can fathom or rationalize up a reason, I'm requesting the raw original basis for the assumption.

Your reduced optimism is a recognition within my assessment rather than without it; you agree, but you see deeper properties. Nonsensical arguments are not useful after a certain point, naturally, but where the point lies is a matter we can only determine after assessing each nonsensical idea in turn. We can detect patterns among the space of nonsensical hypotheses, but we'd be neglecting our duty as rationalists and Bayesians alike if we didn't properly break down each hypothesis in turn to determine its proper weight and quality over the space of measured data. Solomonoff induction is what it is because it takes every possibility into account. Of course if I start off a discussion saying nonsense is useful, you can well predict what the reaction to that will be. It's useful, to start off, from a state of ignorance. (The default state of all people, LessWrongers included.)

  • Macroeconomics: Semi-legitimate topic. There is room for severe rational disagreement. The implications for most participants in such discussions are very low, classifying the topic as irrelevant, despite the room for opinion variance.
  • Feminism: Arguably a legitimate point to contend over. I'll allow this as evidence counter to my stance if you can convince me that it was being legitimately argued: Someone would need to legitimately hold the stance of a feminist and not budge in terms of, "Well, I take feminism to mean..." Basically, I don't really believe this is a point of contention rather than discussion for the generalized LessWrong collective.
  • HIV & AIDS: Can't perform assessment. Was anyone actually positing non-consensus ideas in the discussion?
  • DDT: What's to discuss? "Should it have been done?" From my understanding this is an issue of the past and thus qualifies as trivial by virtue of being causally disconnected from future actions. Not saying discussing the past isn't useful, but it's not exactly boldly adventurous thinking on anyone's part.
  • Peak oil: Very legitimate topic. Surprised to hear that it was discussed here. Tell me though, how did the LessWrong collective vote on the comments composing the discussion? Is there a clear split composed of downvotes for comments arguing the dangers of peak oil, and upvotes for the other side? If you wish to argue that wrongness ought to be downvoted, I can address that.
  • Getting rich to donate to charity: Trivial. Absolutely trivial. This is the kind of LessWrong circlejerking that upsets me the most. It's never a discussion about how to actually get rich, or what charities to donate to, which problems to solve, who is best qualified to do it, or any such useful discussion. It is always, every time, about whether or not that is the optimal route. Since nobody is actually going to do anything useful as the result of such discussions, yes, literally, intellectual masturbation.
  • How to assess the importance of technologies: Semi-legitimate topic. What we need here are theories, new ideas, hypotheses; in spades. LessWrong hates that. New ideas, ideas that stand out, heck, anything less than previously established LessWrong general consensus is downvoted. You could say LessWrong argues about how to assess the importance, but never actually does assess it.
  • Global warming: Fully legitimate topic.
  • "How much lead exposure harms children's development:" It's a bad thing. What's to argue or discuss? (Not requesting this as a topic, but demonstrating why I don't think LessWrong's discussing it is useful in any way.)
  • Astronomical waste: Same as above.
  • Global demographic transition: Legitimate, useful even, but trivial in the sense that most of what you're doing is just looking at the data coming out; I don't see any real immediate epistemic growth coming out of this.

And I've seen broader, nontrivial arguments about developing epistemic rationality, whether at the personal or social level.

Yes, that is the thing which I do credit LessWrong on. The problem is in the rate of advancement; nobody is really getting solid returns on this investment. It's useful, but not in excess of the average usefulness coming from any other field of study or social process.

People have decided to dedicate years of their lives (and/or thousands of dollars) to attacking the problem of FAI because of their interactions with LW.

I have a strong opinion on this that LessWrong has more or less instructed me to censor. Suffice to say I am personally content with leaving that funding and effort in place.

I dimly recall seeing a lurker post here saying they cured their delusional mental illness by internalizing rationality lessons from the Sequences.

That is intensely interesting and the kind of thing I'd yell at you for not looking more into, let alone remembering only dimly. Events like these are where we're beginning to detect returns on all this investment. I would immediately hold an interview in response to such a stimulus.

For another thing, there's a more parsimonious explanation for why some topics are taboo here: because they lead to disproportionately unpleasant & unproductive arguments.

That is, word for word, thought for thought that wrote it, perception for perception that generated the thoughts, the exact basis of the understanding that leads me to make the arguments I am making now.

I think it's legitimate to [cultivate] LW-specific norms to avoid those topics (or at least damp the powder to minimize the risk of explosion).

This is, primarily, why I do things other than oppose the subject bans. Leaving it banned, leaving it taboo, dampens the powder considerably. This is where I can help, if LessWrong could put up with the fact that I know how to navigate the transition. But of course that's an extraordinary claim; I'm not allowed to make it. First I have to give evidence that I can do it. Do what? Improve LessWrong on a mass scale. Evidence of that? In what form? Should I bring about the Singularity? Should I improve some other (equally resistant) rationalist community? What evidence can I possibly give of my ability to do such a thing? (The last person I asked this question to was unable to divine the answer.)

I'm left with having to argue that I'm on a level where I can manage a community of rationalists. It's not an argument any LessWronger is going to like very much at all. You're able to listen to it now because you're not the average LessWronger. You're different, and if you've properly taken the time to reflect on the opening question of this comment, you'll know exactly why that is. I'm not telling you this to flatter you (though it is reason to be flattered), but rather because I need you to be slightly more self-aware in order for you to see the true face of LessWrong that's hidden behind your assumption that the members of the mass are any bit similar to yourself on an epistemic level. How exactly to utilize that is something I've yet to fully ascertain, but it is advanced by this conversation.

LW is mostly an entertainment device for me, albeit one that occasionally stretches my brain a little, like a book of crosswords.

Interesting article, and I'm surprised/relieved/excited to see just how upvoted it's been. I can say this much: Wanting the last word, wanting to Correct the Internet... These are useful things that advance rationality. Apathy is an even more powerful force than either of those. I know a few ways to use it usefully. You're part of the solution, but you're not seeing it yet, because you're not seeing how far behind the mass really is.

I'd be worried if I thought LWers wanted to "restrain the world", as you grandiosely put it.

LessWrong is a single point within a growing Singularity. I speak in grandiose terms because the implications of LessWrong's existence, growth, and path are themselves grand. Politics is one of three memetically spread conversational taboos, outside of LessWrong. LessWrong merely formalized this generational wisdom. As Facebook usage picks up, and the art of internet argument is brought to the masses, we're seeing an increase in socioeconomic and sociopolitical debate. This is correct, and useful. However, nobody aside from myself and a few others that I've met seem to be noticing this. LessWrong itself is going to become generationally memetic. This is correct, and useful. When, exactly, this will happen, is a function primarily of society. What, exactly, LessWrong looks like at that moment in history will offset billions of fates. Little cracks and biases will form cavernous gaps in a civilization's mindset. This moment in history is far off, so we're safe for the time being. (If that moment were right now, I would be spending as much of my time as possible working on AGI to crush the resulting leviathan.)

Focusing on this one currently-LessWrong-specific meme, what do you see happening if LW's memetic moment were right now? Would LessWrong then merely be restraining its own members?

[Comment length reached, continuing...]

Replies from: satt, satt
comment by satt · 2013-10-19T16:54:35.512Z · LW(p) · GW(p)

Do you have any actual reason (introspection doesn't count) to "expect LWers to be the kind of high-NFC/TIE people who try to weigh evidence in a two-sided way before deciding"? [...] I'm requesting the raw original basis for the assumption.

  • LWers self-report having above-average IQs. (One can argue that those numbers are too high, as I've done, but those are just arguments about degree.) People with more cognitive firepower to direct at problems are presumably going to do so more often.

  • LWers self-report above-average AQs. (Again, one might argue those AQs are exaggerated, but the sign of the effect is surely right given LW's nerdy bent.) This is evidence in favour of LWers being people who tend to automatically apply a fine-grained (if not outright pedantic) and systematic thinking style when confronted with a new person or organization to think about.

  • Two linked observations. One: a fallacy/heuristic that analytical people often lean on is treating reversed stupidity as intelligence. Two: the political stupidity that an analytical person is likely to find most salient is the stupidity coming from people with firmly held, off-centre political views. Bringing the two together: even before discovering LW, LWers are the kind of analytical types who'd apply the reversed stupidity heuristic to politics, and infer from it that the way to avoid political stupidity is to postpone judgement by trying to look at Both Sides before committing to a political position.

  • Every time Eliezer writes a new chapter of his HPMoR fanfic, LW's Discussion section explodes in a frenzy of speculation and attempts to integrate disparate blobs of evidence into predictions about what's going to happen next, with a zeal most uninterested outside observers might find hard to understand. In line with nerd stereotype, LWers can't even read a Harry Potter story without itching to poke holes in it.

(Have to dash out of the house now but I'll comment on the rest soon.)

comment by satt · 2013-10-21T03:18:49.399Z · LW(p) · GW(p)

Nonsensical arguments are not useful after a certain point, naturally, but where the point lies is a matter we can only determine after assessing each nonsensical idea in turn. We can detect patterns among the space of nonsensical hypotheses, but we'd be neglecting our duty as rationalists and Bayesians alike if [...]

I agree with that, read literally, but I disagree with the implied conclusion. Nonsensical arguments hit diminishing (and indeed negative) returns so quickly that in practice they're nearly useless. (There are situations where this isn't so, namely educational ones, where having a pupil or student express their muddled understanding makes it possible to correct them. But I don't think you have that sort of didactic context in mind.)

  • Feminism: Arguably a legitimate point to contend over. I'll allow this as evidence counter to my stance if you can convince me that it was being legitimately argued: Someone would need to legitimately hold the stance of a feminist and not budge in terms of, "Well, I take feminism to mean..." Basically, I don't really believe this is a point of contention rather than discussion for the generalized LessWrong collective.

Hmm. I tend not to wade into the arguments about feminism so I don't remember any examples that unambiguously meet your criteria, and some quick Google searches don't give me any either, although you might have more luck. Still, even without evidence on hand sufficient to convince a sceptic, I'm fairly sure feminism, and related issues like pick-up artistry and optimal ways to start romantic relationships, are contentious topics on LW. (In fact I think there's something approaching a mild norm against gratuitously bringing up those topics because Less Wrong Doesn't Do Them Well.)

  • HIV & AIDS: Can't perform assessment. Was anyone actually positing non-consensus ideas in the discussion?

Yep. The person I ended up arguing with was saying that HIV isn't an STD, that seroconversion isn't indicative of HIV infection, and that there's not much reason to think microscopic pictures of HIV are actually of HIV. (They started by saying they had 70% confidence "that the mainstream theory of HIV/AIDS is solid", but what they wrote as the thread unfolded made clear that their effective degree of confidence was really much less.)

  • DDT: What's to discuss?

Here's the discussion I had in mind.

  • Peak oil: Very legitimate topic. Surprised to hear that it was discussed here. Tell me though, how did the LessWrong collective vote on the comments composing the discussion? Is there a clear split composed of downvotes for comments arguing the dangers of peak oil, and upvotes for the other side?

I quickly skimmed the conversation I was thinking of and didn't see a clear split. But you can judge for yourself.

  • Getting rich to donate to charity: Trivial. Absolutely trivial. This is the kind of LessWrong circlejerking that upsets me the most. It's never a discussion about how to actually get rich, or what charities to donate to, which problems to solve, who is best qualified to do it, or any such useful discussion.

Here's a post on deciding which charities to donate to. Here's a student asking how they can get rich for effective altruism. Here's a detailed walkthrough of how to maximize the cash you get when searching for a programming job. Here's someone asking straightforwardly how they can make money. Here's Julia Wise wondering which career would allow her to donate the most money.

It is always, every time, about whether or not that is the optimal route.

This would appear to be false.

  • "How much lead exposure harms children's development:" It's a bad thing. What's to argue or discuss?

Whether it affects children's development to such a degree that it can explain future variations in violent crime levels.

I had hoped that your going through my list of examples point by point would clarify how you were judging which topics were "legitimate" & nontrivial, but I'm still unsure. In some ways it seems like you're judging topics based on whether they're things LWers are actually doing something about, but LWers aren't (as far as I know) doing anything more about global warming or peak oil than they are about astronomical waste or the (insufficient speed of the) global demographic transition. So what makes the former more legit than the latter?

People have decided to dedicate years of their lives (and/or thousands of dollars) to attacking the problem of FAI because of their interactions with LW.

I have a strong opinion on this that LessWrong has more or less instructed me to censor. Suffice to say I am personally content with leaving that funding and effort in place.

The point I meant to make in bringing that up was not that you should cheer people on for dedicating time & money to FAI; it was that people doing so is an existence proof that some LWers are "changing their deep beliefs after 'seeing the light'". If someone goes, "gee, I used to think I should devote my life to philosophy/writing/computer programming/medicine/social work/law, but now that I've read LW I just want to throw money at MIRI, or fly to California to help out with CFAR", and then they actually follow through, one can hardly accuse them of not changing their deep beliefs!

That is intensely interesting and the kind of thing I'd yell at you for not looking more into, let alone remembering only dimly. [...] I would immediately hold an interview in response to such a stimulus.

Unless my memory's playing tricks on me, Eliezer did ask that person to elaborate, but got no response.

This is where I can help, [...] I know how to navigate the transition. But of course that's an extraordinary claim; I'm not allowed to make it. First I have to give evidence that I can do it. Do what? Improve LessWrong on a mass scale. [...] What evidence can I possibly give of my ability to do such a thing? (The last person I asked this question to was unable to divine the answer.)

It seems pretty sensible to me to demand evidence when someone on the fringes of an established community says they're convinced they know exactly (1) how to singlehandedly overhaul that community, and (2) what to aim for in overhauling it.

I can't divine the answer you have in mind, either.

I'm left with having to argue that I'm on a level where I can manage a community of rationalists. It's not an argument any LessWronger is going to like very much at all. You're able to listen to it now because you're not the average LessWronger.

I don't think you're making the argument you think you are. The argument I'm hearing is that LW isn't reaching its full potential because LWers sit around jacking each other off rather than getting shit done. You haven't actually mounted an argument for your own managerial superiority yet.

You're different, and if you've properly taken the time to reflect on the opening question of this comment, you'll know exactly why that is. [...] I need you to be slightly more self-aware in order for you to see the true face of LessWrong that's hidden behind your assumption that the members of the mass are any bit similar to yourself on an epistemic level.

How about this: I need you to spell out what you mean with this "true face of LessWrong" stuff. (And ideally why you think I'm different & special. The only evidence you've cited so far is that I've bothered to argue with you!) I doubt I'm nearly as astute as you think I am, not least because I can't discern what you're saying when you start laying on the gnomic flattery.

LessWrong is a single point within a growing Singularity. [Rest of paragraph snipped.]

My own hunch: LW will carry on being a reasonable but not spectacular success for MIRI. It'll continue serving as a pipeline of potential donors to (and workers for) MIRI & CFAR, growing steadily but not astoundingly for another decade or so until it basically runs its course.

Focusing on this one currently-LessWrong-specific meme, what do you see happening if LW's memetic moment were right now? Would LessWrong then merely be restraining its own members?

OK, yes, if the LW memeplex went viral and imprinted itself on the minds of an entire generation, then by definition it'd be silly for me to airily say, "oh, that's just an LW-specific meme, nothing to worry about". But I don't worry about that risk much for two reasons: the outside view says LW most likely won't be that successful; and people love to argue politics, and are likely to argue politics even if most of them end up believing in (and overinterpreting) "Politics is the Mindkiller". Little political scuffles still break out here, don't they?

Replies from: BaconServ
comment by BaconServ · 2013-10-21T16:26:27.301Z · LW(p) · GW(p)

There are situations where this isn't so, namely educational ones, where having a pupil or student express their muddled understanding makes it possible to correct them. But I don't think you have that sort of didactic context in mind.

I do, actually, which raises the question of why you think I didn't have that in mind. Did you not realize that LessWrong and pretty much our entire world civilization are in such a didactic state? Moreover, if we weren't in such a didactic state, why does LessWrong exist? Does the art of human rationality not have vast room to improve? This honestly seems like a highly contradictory stance, so I hope I'm not attacking a straw man.

This would appear to be false.

So it would. Thank you for taking the time to track down those articles. As always, it's given me a few new ideas about how to work with LessWrong.

LWers aren't (as far as I know) doing anything more about global warming or peak oil than they are about astronomical waste or the (insufficient speed of the) global demographic transition. So what makes the former more legit than the latter?

I was using a rough estimate for legitimacy; I really just want LessWrong to be more of an active force in the world. There are topics and discussions that further this process and there are topics and discussions that simply do not. Similarly, there are topics and discussions where you can pretend you're disagreeing, but you're not really honing your rationality in any way by participating. For reference, this conversation isn't honing our rationality very well; we're already pretty finely tuned. What's happening between us now is current-optimum information exchange. I'm providing you with tangible structural components, and you're providing me with excellent calibration data.

If someone goes, "gee, I used to think I should devote my life to philosophy/writing/computer programming/medicine/social work/law, but now that I've read LW I just want to throw money at MIRI, or fly to California to help out with CFAR", and then they actually follow through, one can hardly accuse them of not changing their deep beliefs!

Oh but that is very much exactly what I can do!

In each and every one of those cases you will find that the person had not spent sufficient time reflecting on the usefulness of thought and refined reasoning, or else on uFAI and existential risks. The state in which these ideas existed in their minds was not a "deep belief" state, but rather a relatively blank slate primed to receive the first idea that came to mind. uFAI is not a high-class danger; EY is wrong, and the funding and effort is, in large part, illegitimate. I am personally content leaving that fear, effort and funding in place precisely because I can milk it for my own personal benefit. Does every such person who reads the Sequences run off to donate or start having nightmares about FAI punishing them for not donating? Absolutely, positively, this is not the case.

Deep beliefs are an entirely different class of psychological construct. Imagine I am very much of the belief that AI cannot be created because there's something fundamental in the organic brain that a machine cannot replicate. What will reading every AI-relevant article in the Sequences get me? Will my deep (and irrational) beliefs be overridden and replaced with AI existential fear? It is very difficult for me to assume you'll do anything but agree that such things do not happen, but I must leave open the possibility that you'll see something that I missed. This is a relatively strong belief of mine, but unlike most others, I will never close myself off to new ideas. I am very much of the intention that child-like plasticity can be maintained so long as I do not make the conscious decision to close myself off and pretend I know more than I actually do.

Eliezer did ask that person to elaborate, but got no response.

Ah. No harm, no foul, then.

You haven't actually mounted an argument for your own managerial superiority yet.

I've been masking heavily. To be honest, my ideas were embedded many replies ago. I'm only responding now to see what you have to offer, what level you're at, and what levels and subjects you're overtly receptive to. (And, on the off-chance, picking up an observer or two.)

I need you to be slightly more self-aware in order [...]

How about this: I need you to spell out what you mean with this "true face of LessWrong" stuff. (And ideally why you think I'm different & special.)

"Self-aware" is a non-trivial aspect here. It's not something I can communicate simply by asserting it, because you can only trust the assertion so much, especially given that the assertion is about you. Among other things, I'm measuring the rate at which you come to realizations. "If you've properly taken the time to reflect on the opening question of this comment," is more than enough of a clue. That you haven't put the reflection in simply from my cluing gives me a very detailed picture of how much you currently trust my judgment. I actually thought it was pretty awesome that you responded to the opening question in an isolated reply and had to rush out right after answering it, giving you very much more time to have reflected on it than the case of serially reading and replying without expending too much mental effort in doing so. I'm really not here to convince you of my societal/managerial competence by direct demonstration; this is just gathering critical calibration data on my part.

I've already spelled it out pretty damn concisely. Recognizing the differences between yourself and the people you like to think are very much like you is uniquely up to you.

My own hunch:

Yeah, pretty much. LessWrong's memetic moment in history isn't necessarily at a point in time at which it is active. That's sort of the premise of the concern of LessWrong's immediate memeplex going viral. As the population's intelligence slowly increases, it'll eventually hit a sweet spot where LessWrong's content will resonate with it.

...But yeah, ban on politics isn't one of the dangerous LessWrong memes.

Replies from: satt
comment by satt · 2013-10-30T00:35:57.310Z · LW(p) · GW(p)

I do, actually, which raises the question of why you think I didn't have that in mind. Did you not realize that LessWrong and pretty much our entire world civilization are in such a didactic state?

I did not. And do not, in fact. Those didactic states are states where there's someone who's clearly the teacher (primarily interested in passing on knowledge), and someone who's clearly the pupil (or pupils plural — but however many, the pupil(s) are well aware they're not the teacher). But on LW and most other places where grown-ups discuss things, things don't run so much on a teacher-student model; it's mostly peers arguing with each other on a roughly even footing, and in a lot of those arguments, nobody's thinking of themselves as the pupil. Even though people are still learning from each other in such situations, they're not what I had in mind as "didactic".

In hindsight I should've used the word "pedagogical" rather than "didactic".

Moreover, if we weren't in such a didactic state, why does LessWrong exist? Does the art of human rationality not have vast room to improve?

I think these questions are driven by misunderstandings of what I meant by "didactic context". What I wrote above might clarify.

This would appear to be false.

So it would. Thank you for taking the time to track down those articles.

Thank you for updating in the face of evidence.

I was using a rough estimate for legitimacy; I really just want LessWrong to be more of an active force in the world.

Fair enough.

In each and every one of those cases you will find that the person had not spent sufficient time reflecting on the usefulness of thought and refined reasoning, or else on uFAI and existential risks. The state in which these ideas existed in their minds was not a "deep belief" state, but rather a relatively blank slate primed to receive the first idea that came to mind.

I interpreted "deep beliefs" as referring to beliefs that matter enough to affect the believer's behaviour. Under that interpretation, any new belief that leads to a major, consistent change in someone's behaviour (e.g. changing jobs to donate thousands to MIRI) would seem to imply a change in deep beliefs. You evidently have a different meaning of "deep belief" in mind but I still don't know what (even after reading that paragraph and the one after it).

"Self-aware" is a non-trivial aspect here. It's not something I can communicate simply by asserting it, because you can only trust the assertion so much, especially given that the assertion is about you. [...] "If you've properly taken the time to reflect on the opening question of this comment," is more than enough of a clue. [...] I'm really not here to convince you of my societal/managerial competence by direct demonstration; this is just gathering critical calibration data on my part.

I've already spelled it out pretty damn concisely. Recognizing the differences between yourself and the people you like to think are very much like you is uniquely up to you.

Hrmm. Well, that wraps up that branch of the conversation quite tidily.

LessWrong's memetic moment in history isn't necessarily at a point in time at which it is active.

I suppose that's true...

That's sort of the premise of the concern of LessWrong's immediate memeplex going viral. As the population's intelligence slowly increases, it'll eventually hit a sweet spot where LessWrong's content will resonate with it.

...but I'd still soften that "will" to a "might, someday, conceivably". Things don't go viral in so predictable a fashion. (And even when they do, they often go viral as short-term fads.)

Another reason I'm not too worried: the downsides of LW memes invading everyone's head would be relatively small. People believe all sorts of screamingly irrational and generally worse things already.

comment by TheOtherDave · 2013-09-27T20:58:04.743Z · LW(p) · GW(p)

If coming up with good arguments against a belief is not differentially harder for more valuable beliefs, then coming up with good arguments for beliefs is not a reliable way of sorting beliefs by their value.

Replies from: BaconServ
comment by BaconServ · 2013-09-28T04:20:53.889Z · LW(p) · GW(p)

Have you read this article?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-09-28T14:56:17.879Z · LW(p) · GW(p)

Yes.

comment by WalterL · 2013-09-27T19:54:04.188Z · LW(p) · GW(p)

I concur with you.

Also, you have an unlikely ally. I think it was C.S. Lewis that said that it was hard work to make a joke, but effortless to act as though a joke has been made. (google help me, yes, Screwtape Letters, number 11.) I generally try to let that guide me.

I think that genuinely funny jokes typically need some participation from the object of the joke. If you're mocking a policy by pointing out an incongruent consequence of that policy, it's certainly funny, but it wouldn't be possible if the root wasn't there to start with.

Say I'm an authority figure with a policy that everyone must wear two shoes. You draw a cartoon of someone with just one leg gamely wearing the other shoe on his hand. I think that this is a joke of the first flavor. It criticizes me, but it does so by pointing out my laughably bad policy.

By contrast, I consider "making fun of" to be humor that merely derides, that doesn't point out anything or rely on its substance at all. It's something humans do to Out-Tribe symbols, and I try not to let it signify beyond that.

Say I don't like a particular politician. I restate his latest speech in a sardonic mockery of his distinctive speaking style, then roll my eyes. I think this is "making fun of". It doesn't bring anything new to the conversation; I can do it about anyone, and it, as you say, breeds contempt.

Hrr... the above wasn't as clear as I'd like it to be. How about this, then? If a joke of the first (good) sort is made and I laugh, I'm laughing at the joke. It indicates my approval of the cleverness of the witticism's author and the joy that I find in the paradoxical. If I laugh at a joke of the second sort, I'm laughing not at the joke itself, but at its subject. My laughter indicates my derision towards the Hated/Scorned enemy.

Replies from: Viliam_Bur, Richard_Kennaway, katydee, Lumifer
comment by Viliam_Bur · 2013-09-28T19:41:29.326Z · LW(p) · GW(p)

I think that genuinely funny jokes typically need some participation from the object of the joke.

So perhaps a good joke is about the essence of the criticized thing. And a bad joke is mere pattern-matching of the criticized thing; sometimes using a very poorly matching pattern.

(Or the bad joke may be about something irrelevant. Reminds me of two politicians in my country who were very powerful a few years ago. One of them seemed mentally unstable, and he frequently said the exact opposite of what he said before, just because it happened to fit in his newest conspiracy theory. His opponents made fun of him, often simply by quoting what he said last year and what he said now; and they also made fun of how his supporters also quickly changed their mind but sometimes didn't get the memo about the latest change of mind of their leader, so they contradicted each other, and then clumsily pretended the contradiction didn't happen. But also the other side made fun of their most important opponent... saying that he was short. And it seemed equally funny to them.)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-09-29T17:43:09.766Z · LW(p) · GW(p)

One of them seemed mentally unstable, and he frequently said the exact opposite of what he said before, just because it happened to fit in his newest conspiracy theory. His opponents made fun of him, often simply by quoting what he said last year and what he said now; and they also made fun of how his supporters also quickly changed their mind but sometimes didn't get the memo about the latest change of mind of their leader, so they contradicted each other, and then clumsily pretended the contradiction didn't happen.

So what you're saying is that he was willing to change his mind. ;)

At least that's how American politicians who do things like this try to spin it.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-09-29T20:30:47.488Z · LW(p) · GW(p)

This one was an extreme case, even for a politician. Most likely really mentally ill, suffering from paranoia. His stories felt credible and sincere (I wouldn't be surprised if this is true in general for intelligent paranoid people), and he started out very popular, but gradually more and more people noticed that his words didn't match well with reality, or even with his previous words. In the end mostly the old people remained loyal to him, so we say cynically that his political base died of old age. I guess that unlike many politicians, this one probably truly believed what he said.

comment by Richard_Kennaway · 2013-09-28T12:49:48.058Z · LW(p) · GW(p)

Also, you have an unlikely ally. I think it was C.S. Lewis that said that it was hard work to make a joke, but effortless to act as though a joke has been made.

That explains so much stand-up comedy.

comment by katydee · 2013-09-27T23:51:38.466Z · LW(p) · GW(p)

I generally agree-- though I think that there are some cases in which even the first sort of joke is perhaps unwarranted. Jokes about spelling errors and other trivial mistakes seem to fall into a middle category where, while they are based on incongruities or errors, they are not based on substantive or meaningful ones.

Also, C.S. Lewis is far from an unlikely ally of mine. I consider his writing important and useful in many respects.

comment by Lumifer · 2013-09-27T20:00:14.806Z · LW(p) · GW(p)

I'm laughing not at the joke itself, but at its subject. My laughter indicates my derision towards the Hated/Scorned enemy.

So what's wrong with that?

Replies from: hyporational
comment by hyporational · 2013-09-28T05:21:33.582Z · LW(p) · GW(p)

Personally, I find this sort of humour way too easy and therefore usually not funny. People not recognizing these two types takes away from the average quality of comedy. I reflexively see such people as stupid, but I understand this isn't entirely fair. Just laughing at the subject makes it possible to laugh at anything, and it starts to take away from other things, as lmm points out.

Replies from: Lumifer
comment by Lumifer · 2013-09-30T16:24:03.288Z · LW(p) · GW(p)

I find this sort of humour way too easy and therefore usually not funny.

Easy?

I have in mind people like Jews in 1930s Europe, Russians under Soviet rule, or, to take a contemporary example, Christian Copts in Egypt. Oppressed people who don't have an opportunity to change their lot through polite democratic process. Humor -- biting, nasty, derisive humor -- was and is very important for them. To fight back with, to keep their sanity, to feel as humans and not cattle. That humor's point is to "indicate derision towards the Hated/Scorned enemy".

Replies from: hairyfigment, hyporational
comment by hairyfigment · 2013-10-08T06:57:30.352Z · LW(p) · GW(p)

Yes, quite reasonable - but it can degenerate into the Book of Revelation.

comment by hyporational · 2013-10-06T06:20:33.210Z · LW(p) · GW(p)

I somewhat agree with this, but I had a much more casual interpretation in mind. The examples of Jewish humour I've seen have all been quite witty, so they don't really count as "just laughing at the subject".

Just a data point: Your tone instinctively feels confrontational, and originally demotivated me from replying. Might be what turns off other people too. Is this intentional?

Replies from: Lumifer
comment by Lumifer · 2013-10-07T15:02:58.099Z · LW(p) · GW(p)

Is this intentional?

Somewhat. The point is not to demotivate or turn off other people, the point is to liven up the exchange, as well as provide some entertainment and motivation. I am not averse to poking people with sharp pointy objects :-D but I don't object to being poked myself.

Replies from: hyporational
comment by hyporational · 2013-10-07T15:34:23.169Z · LW(p) · GW(p)

Ok, good to know. Livening things up is good. It's funny how a simple acknowledgement can shift your view of a user agent sailing in bitspace.

You'd make a good surgeon. Just remember to keep those objects sharp.

comment by hyporational · 2013-09-27T15:19:37.016Z · LW(p) · GW(p)

Optimally, only bad things would get made fun of, making it easy to determine what is good and bad-- but this doesn't appear to be the case.

How do you differentiate between benign comedy and "making fun of"? Is it just the implied intent? I've found this is an incredibly difficult line to draw; people are so variably calibrated. Many times I couldn't help myself and have inadvertently insulted people. Later I learned that quite a few laughs are not worth one wrongly placed offence, so I mostly joke among friends.

comment by Yosarian2 · 2013-09-30T02:10:04.011Z · LW(p) · GW(p)

While that's all true, using humor can be a socially acceptable way to point out the flaws in someone else's "sacred cows" without them getting angry. By avoiding the anger response using humor, sometimes you can short-circuit the whole knee-jerk reaction and get someone to think in a more rational way, to actually take a closer look at their own beliefs. Political satirists have used this technique for a long time, and still do.

So it can be a positive and socially useful thing to do. Like all of these kinds of tools, it can either be used to get to the truth or to hide it, to think more deeply or to avoid thinking. It all depends on the details.

comment by John_Maxwell (John_Maxwell_IV) · 2013-09-28T04:25:30.193Z · LW(p) · GW(p)

How many people actually did the exercises katydee suggested? I know I didn't.

katydee, perhaps you could take a semi-random sample of things in relevant reference classes (politicians/organizations) and demonstrate how easy it is to make fun of them? Otherwise I suspect many people will take you at your word that things are easy to make fun of.

Here's my semi-random sample of organizations and politicians. I'll take the most recent 3 Daily Show guests I recognize the names of and the largest 3 charities I recognize the names of.

  • Richard Dawkins

  • Chelsea Clinton

  • Robert Reich

  • Salvation Army

  • American National Red Cross

  • American Cancer Society

Here are my brief attempts to make fun of them.

  • Richard Dawkins: He wasn't satisfied with being an eminent biologist, he just had to stir up controversy by provoking religious people. And his arguments apparently aren't even very philosophically sound. Stick to biology next time, Dawkins.

  • Chelsea Clinton: Good luck finding a man who's higher status than you to marry.

  • Robert Reich: Is he liberal because he's short or is he short because he's liberal?

  • Salvation Army: Ineffective charity that spreads religious lies. Chumps.

  • American National Red Cross: We're all going to die eventually anyway. Organizations like the Red Cross just prolong our misery.

  • American Cancer Society: Clearly cancer is just a side effect of aging and your money is better sent to SENS. Also, just how much money have we poured into cancer research without finding a cure yet? They should call it the "American Cancer Researcher Welfare Program".

It seems to me that the things I can make fun of the most easily are the ones that have legitimate arguments that reflect poorly on them (e.g. the Salvation Army and American Cancer Society). But maybe I'm just bad at making fun of things.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2013-09-28T19:32:51.286Z · LW(p) · GW(p)

How many people actually did the exercises katydee suggested? I know I didn't.

I did, but I don't think people realised it.

comment by lmm · 2013-09-27T11:33:08.727Z · LW(p) · GW(p)

The best conversations are in places that put a low value on humour. Unfortunately in wider society disliking humour is seen as a massive negative.

Replies from: gjm, ILikeLogic, Lumifer, hyporational
comment by gjm · 2013-09-27T11:37:41.387Z · LW(p) · GW(p)

I think (albeit on the basis of limited evidence) that what's helpful for good conversations is a low value on humour rather than a negative value on humour. The fora I've seen with the best discussion don't generally regard humour as bad; they just regard it as generally not good enough to redeem an otherwise unhelpful comment. Exceptionally good humour, or humour produced incidentally while saying something that would have been valuable even without the humour, is just fine on (for instance) Less Wrong or Hacker News -- but comments whose only point is a feeble witticism are liable to get downvoted into oblivion.

comment by ILikeLogic · 2013-09-28T00:02:52.352Z · LW(p) · GW(p)

I find it can be really irritating to try to make any kind of point about anything with certain people. To some there is no point in talking other than to yuk it up. I guess you just have to know your audience.

comment by Lumifer · 2013-09-30T20:01:51.178Z · LW(p) · GW(p)

The best conversations are in places that put a low value on humour.

Not in my experience.

I find the best conversations in places which operate on the Ha-Ha-Only-Serious basis.

comment by hyporational · 2013-09-27T15:23:09.928Z · LW(p) · GW(p)

Why do you dislike humour?

Replies from: lmm
comment by lmm · 2013-09-27T17:53:17.461Z · LW(p) · GW(p)

I'm pretty indifferent to humour per se, but empirically it takes away from other things. Discussion sites where humour is valued have a lower proportion of interesting (to me) posts; television series with a lot of humour seem to make a corresponding sacrifice in character development.

comment by AlanCrowe · 2013-09-28T19:38:45.127Z · LW(p) · GW(p)

This example pushed me into formulating Crowe's Law of Sarcastic Dismissal: Any explanation that is subtle enough to be correct is turbid enough to make its sarcastic dismissal genuinely funny.

Skinner had a subtle point to make, that the important objection to mentalism is of a very different sort. The world of the mind steals the show. Behaviour is not recognized as a subject in its own right.

I think I grasped Skinner's point after reading something Feynman wrote on explanations in science. You can explain why green paint is green by explaining that paint consists of a binder (oil for an oil paint) and a pigment. Green paint is green because the pigment is green. But why is the pigment green? Eventually the concept of green must ground in something non-green, wavelengths of light, properties of molecules.

It is similar with people. One sophisticated way of having an inner man without an infinite regress is exhibited by Minsky's Society of Mind. One can explain the outer man in terms of a committee of inner men provided that the inner men, sitting on the committee, are simpler than the outer man they purport to explain. And the inner men can be explained in terms of interior-inner-men, who are simpler still and whom we explain in terms of component-interior-inner-men,... We had better remember that 1+1/2+1/3+1/4+1/5+1/6+... diverges. It is not quite enough that the inner men be simpler. They have to get simpler fast enough. Then our explanatory framework is philosophically admissible.
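
(A minimal worked illustration of "simpler fast enough", my own arithmetic rather than anything in Minsky or Skinner: compare a regress whose levels shrink like 1/n with one whose levels halve at each step.)

```latex
% Inner men whose complexity shrinks only like 1/n: the total
% explanatory burden diverges, so the regress never bottoms out.
\sum_{n=1}^{\infty} \frac{1}{n} = 1 + \tfrac{1}{2} + \tfrac{1}{3} + \tfrac{1}{4} + \cdots = \infty

% Inner men whose complexity halves at each level: the total burden
% stays bounded, so the regress is philosophically admissible.
\sum_{n=1}^{\infty} \frac{1}{2^n} = \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots = 1
```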

But notice the anachronism that I am committing. Skinner retired in 1974. Society of Mind was published in 1986. Worse yet, the perspective of Society of Mind comes from functional programming, where tree structured data is processed by recursive functions. Does your recursive function terminate? Programmers learn that an infinite regress is avoided if all the recursive calls are on sub-structures of the original structure, smaller by a measure which makes the structures well-founded. In the 1930's and 1940's Skinner was trying to rescue psychology from the infinite regress of man explained by an inner man, himself a man. It is not reasonable to ask him to anticipate Minsky by 50 years.
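
To make the termination point concrete, here is a minimal sketch in Python (illustrative code of my own; the Node type and size function are hypothetical, not drawn from Minsky or Skinner). The recursion terminates because every recursive call receives a strict substructure of its argument:

```python
# Minimal sketch of well-founded recursion over tree-structured data.
# Each recursive call receives a child subtree, which is strictly smaller
# than its parent under the "number of nodes" measure, so the descent
# must bottom out at the leaves; no infinite regress is possible.

class Node:
    def __init__(self, children=None):
        self.children = children or []

def size(tree):
    # One for this node, plus the sizes of all (strictly smaller) subtrees.
    return 1 + sum(size(child) for child in tree.children)

# A root with two children, one of which has a child of its own.
root = Node([Node(), Node([Node()])])
assert size(root) == 4
```

The analogy: each inner man must be a strict substructure of the man he helps explain, under a measure that cannot decrease forever, so the explanatory recursion is guaranteed to bottom out.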

Skinner is trying to wake psychology from its complacent slumber. The inner man explains the outer man. The explanation does indeed account for the outer man. The flaw is that the inner man is no easier to explain than the outer man.

We could instead look at behaviour. The outer man has behaved before. What happened last time? If the outer man does something unexpected we could look back. If the usual behaviour worked out badly the time before, that offers an explanation of sorts for the change. There is much to be done. For example, if some behaviour works ten times in a row, how many more times will it be repeated after it has stopped working? We already know that the inner man is complicated and hence under-determined by our experimental observations. This argues for caution and delay in admitting him to our explanations. We cannot hope to deduce his character until we have observed a great deal of his behaviour.

But let us return to humour and the tragedy of Sidney Morgenbesser's sarcastic dismissal:

Let me see if I understand your thesis. You think we shouldn't anthropomorphize people?

The tragedy lies in the acuteness of Morgenbesser's insight. He grasped Skinner's subtle point. Skinner argues that anthropomorphizing people is a trap; do that and you are stuck with folk psychology and have no way to move beyond it. But Morgenbesser makes a joke out of it.

I accept that the joke is genuinely funny. It is surely a mistake for biologists to anthropomorphize cats and dogs and other animals, precisely because they are not people. So there is a template to fill in. "It is surely a mistake for psychologists to anthropomorphize men and women and other humans, precisely because they are not people." Hilarity ensues.

Morgenbesser understands, makes a joke, and loses his understanding somewhere in the laughter. The joke is funny and sucks everyone into the loss of understanding.

Replies from: fubarobfusco
comment by fubarobfusco · 2013-09-28T20:29:32.109Z · LW(p) · GW(p)

Dennett's heterophenomenology seems to offer some of the good points of Skinner's behaviorism without a lot of the bad points.

Heterophenomenology notices that people's behavior includes making descriptions of their conscious mental states: they emit sentences like "I think X" or "I notice Y". It takes these behaviors as being as worthy of explanation as other behaviors, and considers that there might actually exist meaningful mental states being described. This is just what behaviorism dismisses.

Replies from: AlanCrowe
comment by AlanCrowe · 2013-09-28T23:31:26.029Z · LW(p) · GW(p)

In Beyond Freedom and Dignity Skinner writes (page 21)

A more important reason is that the inner man seems at times to be directly observed. We must infer the jubilance of a falling body, but can we not feel our own jubilance? We do, indeed, feel things inside our own skin, but we do not feel the things which have been invented to explain behaviour. The possessed man does not feel the possessing demon and may even deny that one exists. The juvenile delinquent does not feel his disturbed personality. The intelligent man does not feel his intelligence or the introvert his introversion. (In fact, these dimensions of mind or character are said to be observable only through complex statistical procedures.) The speaker does not feel the grammatical rules he is said to apply in composing sentences, and men spoke grammatically for thousands of years before anyone knew there were rules. The respondent to a questionnaire does not feel the attitudes or opinions which lead him to check items in particular ways. We do feel certain states of our bodies associated with behaviour, but as Freud pointed out we behave in the same way when we do not feel them; they are by-products and not to be mistaken for causes.

Dennett writes (page 83)

The heterophenomenological method neither challenges nor accepts as entirely true the assertions of subjects, but rather maintains a constructive and sympathetic neutrality, in the hopes of compiling a definitive description of the world according to the subjects.

So far Skinner and Dennett are not disagreeing. Skinner did say "We do, indeed, feel things inside our own skin,...". He can hardly object to Dennett writing down our descriptions of what we feel, as verbal behaviour to be explained in the future with a reductionist explanation.

Dennett continues on page 85

My suggestion, then, is that if we were to find real goings-on in people's brains that had enough of the "defining" properties of the items that populate their heterophenomenological worlds, we could reasonably propose that we had discovered what they were really talking about --- even if they initially resisted the identifications. And if we discovered that the real goings-on bore only a minor resemblance to the heterophenomenological items, we could reasonably declare that people were just mistaken in the beliefs they expressed.

Dennett takes great pains to be clear. I feel confident that I understand what he is taking 500 pages to say. Skinner writes more briefly, 200 pages, and leaves room for interpretation. He says that we do not feel the things that have been invented to explain behaviour and he dismisses them.

I think it is unambiguous that he is expelling the explanatory mental states of the psychology of his day (such as introversion) from the heterophenomenological world of his subjects, on the grounds that they are not things that we feel or talk about feeling. But he is not, in Dennett's phrase, "feigning anesthesia" (page 40). Skinner is making a distinction: yes, we may feel jubilant; no, we do not feel a disturbed personality.

What is not so clear is the scope of Skinner's dismissal of say introversion. Dennett raises the possibility of discovering meaningful mental states that actually exist. One interpretation of Skinner is that he denies this possibility as a matter of principle. My interpretation of Skinner is that he is picking a different quarrel. His complaint is that psychologists claim to have discovered meaningful mental states already, but haven't actually reached the starting gate; they haven't studied enough behaviour to even try to infer the mental states that lie behind behaviour. He rejects explanatory concepts such as attitudes because he thinks that the work needed to justify the existence of such explanatory concepts hasn't been done.

I think that the controversy arises from the vehemence with which Skinner rejects mental states. He dismisses them out of hand. One interpretation is that Skinner rejects them so completely because he thinks the work cannot be done; that is essentially a rejection in principle. My interpretation is that he rejects them so completely because he has his own road map for research in psychology.

First, pay lots of attention to behaviour. Then pay lots more attention to behaviour, because it has been badly neglected. Find some empirical laws. For example, one can measure extinction times: how long does the rat continue pressing the lever after the rewards have stopped? One can play with reward schedules: one pellet every time versus a 50:50 chance of two pellets. One discovers that extinction times are longer with uncertain rewards. One could play for decades exploring this stuff and end up with quantitative "laws" for the explanatory concepts to explain. Only then can the serious work of inferring the existence of explanatory concepts begin.
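
To make the flavour of that result concrete, here is a minimal sketch (my toy model, not Skinner's methodology) of why extinction might take longer after uncertain rewards: an agent that keeps pressing while its run of unrewarded presses is still plausible under the old schedule gives up quickly after reliable reinforcement and slowly after a 50% schedule. The function name, the threshold, and the decision rule are all illustrative assumptions.

```python
def extinction_presses(p_reward, threshold=0.05):
    """Count unrewarded presses before a toy agent 'gives up'.

    The agent was trained on a schedule that rewarded each press
    with probability p_reward. During extinction every press goes
    unrewarded; the agent keeps pressing while the observed run of
    failures is still plausible under the old schedule.
    """
    p_run = 1.0  # probability of the failure run so far, under the old schedule
    presses = 0
    while p_run > threshold:
        presses += 1
        # Each unrewarded press multiplies the run's probability
        # by P(no pellet | old schedule) = 1 - p_reward.
        p_run *= (1 - p_reward)
    return presses

print(extinction_presses(1.0))  # reliable rewards: gives up after 1 press
print(extinction_presses(0.5))  # uncertain rewards: still pressing at 5
```

The qualitative point survives any reasonable choice of threshold: the leaner and noisier the training schedule, the harder it is to tell that the rewards have actually stopped, so responding persists longer.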

I see Skinner vehemently rejecting the explanatory concepts of the psychology of his day because he thinks that the necessary work hasn't even begun, and cannot yet be started because the foundations are not in place. Consequently he doesn't feel the need to spend any time considering whether it has been brought to a successful conclusion (which he doesn't expect to see in his lifetime).

Replies from: fubarobfusco
comment by fubarobfusco · 2013-09-28T23:48:01.364Z · LW(p) · GW(p)

Okay, I can see that interpretation.

To draw something else from the distinction: Skinner seems to be talking about the objects of psychotherapeutic inquiry, such as "disturbed personality" or "introversion", whereas Dennett is talking about the objects of philosophy-of-mind inquiry, such as "beliefs" and "qualia". A "disturbed personality" is imputed to the juvenile delinquent by others; the believer, by contrast, testifies to their own belief first-hand.

comment by Lumifer · 2013-09-27T16:51:07.015Z · LW(p) · GW(p)

whether or not people are making fun of it is not necessarily a good signal as to whether or not it's actually good

Correct.

Optimally, only bad things would get made fun of

Incorrect. Being too serious is a deadly disease. Everything should be made fun of -- it's fun!

Second, if you want to make something sound bad, it's really easy.

"making something sound bad" is not at all the same thing as "making fun of"

This sort of premature cynicism tends to be a failure mode I've noticed in many otherwise very intelligent people.

As usual, balance is important. Rainbows, kittens, and unicorns also tend to be a failure mode in many people, especially young ones. Tales of disillusionment are... common.

Replies from: hyporational, Richard_Kennaway
comment by hyporational · 2013-09-27T19:35:31.290Z · LW(p) · GW(p)

"making something sound bad" is not at all the same thing as "making fun of"

Generally speaking, "making fun of" implies a pejorative connotation. But don't you worry, the OP didn't make sense to me either until I checked the dictionary!

Replies from: Lumifer
comment by Lumifer · 2013-09-27T19:50:41.977Z · LW(p) · GW(p)

Generally speaking, "making fun of" implies a pejorative connotation.

Not necessarily, it depends on the context. For example, people in a secure, stable relationship tend to make fun of each other a lot.

I associate "making fun of" with things like "irreverent" and "not taking too seriously".

Replies from: hyporational
comment by hyporational · 2013-09-27T20:07:33.609Z · LW(p) · GW(p)

For example, people in a secure, stable relationship tend to make fun of each other a lot.

Yeah. I think a major part of the fun is knowing it would be insulting in most other contexts. I'm Finnish, so I don't really know how English speakers use the expression IRL.

comment by Richard_Kennaway · 2013-09-28T13:19:08.003Z · LW(p) · GW(p)

"making something sound bad" is not at all the same thing as "making fun of"

It's a superset of it.

Replies from: Lumifer
comment by Lumifer · 2013-09-30T16:26:13.345Z · LW(p) · GW(p)

Not in the way I use the English language. "Funny" != "bad".

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-09-30T21:49:54.457Z · LW(p) · GW(p)

"Funny" is only a minor and not actually necessary part of "make fun of". To "make fun of" is to make jokes about someone or something in an unkind way, to mock, tease, ridicule, laugh at, taunt, mimic, parody, deride, send up (British, informal), scoff at, sneer at, lampoon, make a fool of, pour scorn on, take the mickey out of (British, informal), take the piss out of (taboo & slang), satirize, pull someone's leg, hold up to ridicule, make a monkey of, make sport of, make the butt of, ...

All of which are ways of "making something sound bad". That's from a dictionary and thesaurus, but actual use, according to the Google hits that aren't dictionaries or thesauruses, agrees with them. Making something sound bad is the whole purpose of making fun of it. The "fun" part is the method of accomplishing that.

comment by Andreas_Giger · 2013-09-27T22:20:29.305Z · LW(p) · GW(p)

I'm not sure if this post is meant to be taken seriously. It's always "easy" to make fun of X; what's difficult is to spread your opinion about X by making fun of X. Obviously this requires a target audience that doesn't already share your opinion about X, and if you look at people making fun of things (e.g. on the net), usually the audience they're catering to already shares their views. This is because the most common objective of making fun of things is not to convince anyone of anything, but to create a group identity, raise team morale, and so on. There is no point in talking about the difficulty of that, because there is none.

Someone would have to be very susceptible to be influenced by people making fun of things. I suppose rationality doesn't have all that much to do with how easily influenced you are, but this post strikes me as overly naïve about people's intentions. If someone makes fun of X, they're clearly not interested in an objective discussion of X, so why would you be swayed by their arguments?

Whether or not people are making fun of it is not necessarily a good signal as to whether or not it's actually good.

Gee, you think?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-09-28T13:39:36.942Z · LW(p) · GW(p)

I'm not sure if this post is meant to be taken seriously. It's always "easy" to make fun of X; what's difficult is to spread your opinion about X by making fun of X. Obviously this requires a target audience that doesn't already share your opinion about X, and if you look at people making fun of things (e.g. on the net), usually the audience they're catering to already shares their views.

I'd rather argue that this doesn't work to convince people who already like X (although it may make them more inclined to keep their opinions to themselves), but it does lead people who have no opinion of X (possibly because they've never heard of X before) to dislike X, and it causes people who mildly dislike X to strengthen their opinion.

comment by metatroll · 2013-09-27T23:41:23.874Z · LW(p) · GW(p)

All the comments responding to this post deserve to be mocked mercilessly.