The End of Bullshit at the hands of Critical Rationalism

post by Stefan_Schubert · 2014-06-04T18:44:29.801Z · LW · GW · Legacy · 59 comments

The public debate is rife with fallacies, half-lies, evasions of counter-arguments, etc. Many of these are easy to spot for a careful and intelligent reader/viewer - particularly one who is acquainted with the most common logical fallacies and cognitive biases. However, most people arguably often fail to spot them (if they didn't, then these fallacies and half-lies wouldn't be as effective as they are). Blatant lies are often (but not always) recognized as such, but these more subtle forms of argumentative cheating (which I shall use as a catch-all phrase from now on) usually aren't (which is why they are more frequent).

The fact that these forms of argumentative cheating are a) very common and b) usually easy to point out suggests that impartial referees who painstakingly pointed out these errors could do a tremendous amount of good for the standards of the public debate. What I am envisioning is a website like factcheck.org but which would not focus primarily on fact-checking (since, like I said, most politicians are already wary of getting caught out with false statements of fact) but rather on subtler forms of argumentative cheating. 

Ideally, the site would go through election debates, influential opinion pieces, etc, more or less line by line, pointing out fallacies, biases, evasions, etc. For the reader who wouldn't want to read all this detailed criticism, the site would also give an overall rating of the level of argumentative cheating (say from 0 to 10) in a particular article, televised debate, etc. Politicians and others could also be given an overall cheating rating, which would be a function of their cheating ratings in individual articles and debates. Like any rating system, this system would serve both to give citizens reliable information about which arguments, which articles, and which people are to be trusted, and to force politicians and other public figures to argue in a more honest fashion. In other words, it would have both an information-disseminating function and a socializing function.
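To make the aggregation concrete, here is a minimal sketch; the scoring rule and the weighting scheme are only assumptions of mine, since all I've specified above is the 0 to 10 scale:

```python
def item_rating(cheats_found: int, claims_checked: int) -> float:
    """Hypothetical per-item score on the 0-10 scale: the share of
    checked claims that were flagged as argumentative cheating."""
    return 10 * cheats_found / max(claims_checked, 1)

def overall_rating(item_ratings: list[float], weights: list[float]) -> float:
    """One possible aggregate per politician: a weighted average of their
    per-item ratings, weighted e.g. by each item's audience size."""
    return sum(r * w for r, w in zip(item_ratings, weights)) / sum(weights)

# Three rated appearances, the first seen by three times as many viewers:
print(overall_rating([7.5, 2.0, 4.0], weights=[3.0, 1.0, 1.0]))  # 5.7
```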

How would such a website be set up? An obvious suggestion is to run it as a wiki, where anyone could contribute. Of course, this wiki would have to be very heavily moderated - probably more so than Wikipedia - since people are bound to disagree on whether controversial figures' arguments really are fallacious or not. Presumably you would be forced to banish trolls and political activists on a grand scale, but hopefully this wouldn't be an insurmountable problem.

I'm thinking that the website should be strongly devoted to neutrality or objectivity, as is Wikipedia. To further this end, it is probably better to give the arguer under evaluation the benefit of the doubt in borderline cases. This would be a way of avoiding endless edit wars and ensuring objectivity. Also, it's a way of making the contributors to the site concentrate their efforts on the more outrageous cases of cheating (of which there are many in most political debates and articles, in my view).

The hope is that a website like this would make the public debate transparent to an unprecedented degree. Argumentative cheaters thrive because their arguments aren't properly scrutinized. If light is shone on the public debate, it will become clear who cheats and who doesn't, which will give people strong incentives not to cheat. If people respected the site's neutrality, its objectivity and its integrity, and read what it said, it would in effect become impossible for politicians and others to bullshit the way they do today. This could mark the beginning of the realization of an old dream of philosophers: The End of Bullshit at the hands of systematic criticism. Important names in this venerable tradition include David Hume, Rudolf Carnap and the other logical positivists, and, not least, the guy whose statue stands outside my room, the "critical rationalist" (an apt name for this enterprise) Karl Popper.

Even though politics is an area where bullshit is perhaps especially common, and one where it does an exceptional degree of harm (e.g. vicious political movements such as Nazism are usually steeped in bullshit), it is also common and harmful in many other areas, such as science, religion, and advertising. Ideally, critical rationalists should go after bullshit in all areas (as far as possible). My hunch, though, is that it would be a good idea to start off with politics, since it's an area that gets lots of attention and where well-written criticism could have an immediate impact.

Comments sorted by top scores.

comment by Daniel_Burfoot · 2014-06-04T19:32:10.045Z · LW(p) · GW(p)

It's a nice dream, and I would be excited if you could do it, but I don't think it is possible given the reality of the modern sociopolitical situation. What I think you don't appreciate is that, at the end of the day, most people really, really don't care about building a better world. They care about promoting their own status, defeating their enemies, and justifying their hatreds.

To justify this claim, I'll cite a few Arthur Chu Facebook quotes on the subject of LessWrong and rationality:

Arthur Chu For the peanut gallery -- what people are deliberately dredging up is that I hate the Less Wrong/"rationalist" community precisely because of its "We are nerdy white guys, here to tell you why you are wrong" culture and its intense defensiveness and insecurity (sorry, "immune system") at being called on being a haven for bullshit, including stuff like "Stop saying nerdy white guy, that is its OWN FORM OF RACISM"!

Arthur Chu Oh, and if you don't even know what Less Wrong is, it's basically a nerdy white guy religion that started out as a bunch of people gathering donations to freeze themselves until a Computer AI Jesus can be built and create utopia. And if you've ever heard of it but don't give all your money to it Computer AI Jesus will rebuild you in the future and put you in Computer AI Hell. It's spun off into a whole bunch of other shit since but it's not a group of people that really has any business lecturing people on who is and isn't "sane".

Arthur Chu is especially willing to voice his prejudices, but my strong suspicion is that most other people think the same way. So even if you set up a completely objective and rational truth-finding web site, it would simply be attacked and destroyed by political actors for being a racist religion or for being run by nerdy white guys or whatever.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2014-06-04T20:00:05.872Z · LW(p) · GW(p)

Weird. What did we ever do to him? Aside from compliment him on his effectiveness, I mean?

Replies from: Nornagest
comment by Nornagest · 2014-06-04T20:26:16.642Z · LW(p) · GW(p)

I don't feel like digging up the whole sordid backstory (though this would be a good starting point), but I get the impression he's upset that we're not a vector for his politics.

That whole "mindkiller" thing really rubs some people the wrong way; for such a person, politics are so bound up with ideals of rationality that staying away from them looks not just ignorant but willfully and maliciously so. (Compare the "reality-based community" on the left, or Eric Raymond's "anti-idiotarianism" on the right. Not that we're entirely innocent of this sort of thinking ourselves.) Combine that with the absurdity heuristic and our bad habit of parochialism in some areas, and you've got most of the ingredients for a hatchet job.

Replies from: Eugine_Nier, None
comment by Eugine_Nier · 2014-06-05T02:54:09.659Z · LW(p) · GW(p)

I don't feel like digging up the whole sordid backstory (though this would be a good starting point), but I get the impression he's upset that we're not a vector for his politics.

More specifically, he's upset that we're willing to tolerate people who point out that many of his ideology's claims are in fact falsifiable and false.

comment by [deleted] · 2014-06-06T23:22:44.569Z · LW(p) · GW(p)

That whole "mindkiller" thing really rubs some people the wrong way; for such a person, politics are so bound up with ideals of rationality that staying away from them looks not just ignorant but willfully and maliciously so. (Compare the "reality-based community" on the left, or Eric Raymond's "anti-idiotarianism" on the right. Not that we're entirely innocent of this sort of thinking ourselves.) Combine that with the absurdity heuristic and our bad habit of parochialism in some areas, and you've got most of the ingredients for a hatchet job.

I think it's more that the things Arthur is responding to are, in fact, very racist.

Reading just his comments (I control-F'd 'arthur chu' on the second linked post), it seems that he clearly understands the dynamics of racial privilege, the concept of minstrelsy, and how those relate to contemporary social justice struggles in macro and micro (American belly dancers), but seemingly none of the people he is communicating with do.

This is a quote I can identify highly with:

That post [the one debunking false rape statistics] is exactly my problem with Scott. He seems to honestly think that it’s a worthwhile use of his time, energy and mental effort to download evil people’s evil worldviews into his mind and try to analytically debate them with statistics and cost-benefit analyses.

He gets mad at people whom he detachedly intellectually agrees with but who are willing to back up their beliefs with war and fire rather than pussyfooting around with debate-team nonsense.

There is a saying in the anarchist community, dating back to the Spanish Civil War: No Parasan. It means "No platform" and maps to "no platform for fascists." It's why groups like the "national anarchists" and other third-positionist fascist groups get kicked out (in a literal sense, their propaganda destroyed and their bodies thrown out) of anarchist events.

No Parasan actually has a very compelling social and cognitive explanation: The more you detachedly argue with fascists, the more normal and accepted their politics become. The BNP is a great example of this. And as it becomes viewed as an acceptable alternative, fascism gains support (which is why the BNP could succeed as a novel fascist organization in the UK).

When discussing existing oppressive social structures such as patriarchy, white supremacy, colonialism, and capitalism, attempting to dispassionately argue causes this phenomenon at best, and at worst actively silences the people victimized by such structures.

Scott Alexander is not an ally to rape survivors because he has normalized their oppression. He has tried to debate with a monster instead of annihilating it.

I think the real reason why Arthur Chu is so pissed off at LW is because LW has all the mental tools available to recognize that discussing certain things can be harmful when the topic is something rich white men care about: typically AI risk. They get that you only tell wizards strong enough of a particular spell, but then disregard it. In a word, Less Wrong is blind to its privilege.

This isn't surprising, because rich white men in general are blind to their privilege, and even relatively well-off women or people of color can be blind to their oppression if they're able to buy their way out of it, but that doesn't mean those concepts don't exist.

Less Wrong is right that politics is the mind killer. Less Wrong just had its mind killed early on when it decided it would only ever accept the politics of the status quo. If Less Wrong had been around in the 1820's it would have supported slavery. If it was around in the 1940's it would have supported Jim Crow. The fact that Less Wrong can't collectively see this is infuriating to some people, including to an extent myself.

I'm not personally about to say Less Wrong is a religion or a robot cult or whatever, it has hilarious views sometimes that are pretty normal for any internet community, but as someone who identifies problems with the contemporary political order and wants to change them, Less Wrong cannot be my comrade in this fight.

As an aside, unrelated to anything I've said previously, anyone should agree that LW habitually loses even though it should win. CFAR is a good attempt and the most winning thing I think I've seen come out of Less Wrong, but it's far from what I feel LW could produce. Maybe the people who could produce those things aren't on Less Wrong because they're producing them, but if they knew so much of their own success came from knowing basic rationality, why wouldn't they recruit here?

Edit: As another aside:

In other words, if a fight is important to you, fight nasty. If that means lying, lie. If that means insults, insult. If that means silencing people, silence.

Holy shit yes! If you have anything to protect use all of your available strength to protect it! Shut up and multiply, think for at least five minutes about the problem, apply every ounce of your technique and then win. I truly and sincerely hope that every last person working for MIRI would kill and die to bring about friendly AI. I hope that if I had the choice of sacrificing myself so that all of humanity could live forever I would take it. If a fight is so important that you must win it, you must win it. You can win by the long sword, or you can win by the short sword (to quote Musashi) but you must win.

Any rationalist should see this trivially. It is a failure of Scott's that he hasn't, though I suppose he could be appealing publicly to a widely-held principle in order to win this particular debate.

Replies from: Viliam_Bur, Lumifer, None, Eugine_Nier, selylindi
comment by Viliam_Bur · 2014-06-07T10:15:33.221Z · LW(p) · GW(p)

There are some interesting ideas in what you wrote, but unfortunately, the whole comment is written in a mindkilling way. Yeah, that's probably your point, so... uhm...

he clearly understands the dynamics of racial privilege, the concept of minstrelsy, and how those relate to contemporary social justice struggles in macro and micro (American belly dancers), but seemingly none of the people he is communicating with do.

Well, one way to deal with people who don't understand what you are trying to tell them is to explain. It's not the only way -- for example, you could also bully them into submission -- but it is the way that most LW readers probably prefer. So, if this cause is so important to you, why don't you write an article here, explaining what Arthur Chu gets and we don't? And by explaining, I mean... explaining.

If Less Wrong had been around in the 1820's it would have supported slavery.

More likely, it would discourage object-level debates about slavery (both for and against), so that Americans from both North and South could debate about something else: rationality, etc.

By the way, libertarians are not exactly supporters of the status quo. (By which I am not suggesting that libertarians are most frequent here; but this is what LW is frequently accused of.)

When discussing existing oppressive social structures such as patriarchy, white supremacy, colonialism, and capitalism, attempting to dispassionately argue causes this phenomenon at best, and at worst actively silences the people victimized by such structures.

How about other oppressive social structures?

Let me give you an example. Every time I go to a LW meetup in nearby Vienna, I cross a line that 25 years ago would have gotten me killed. And I usually remind myself of that fact, and of how happy I am to be able to go to Vienna like it's no big deal, when so many people got killed for trying.

There is a memorial to all those killed people on the border between Slovakia and Austria. I happen to visit it about once a month; not for political reasons, it just happens to be on my favorite walking path in nature. You know, countries usually protect their borders to prevent other people from getting in. But socialist countries protected their borders to prevent people from running away. Under socialism, people were considered the property of their country. When they tried to escape their masters, that was a similar kind of crime as when a black person tried to run away from their master. And if they succeeded in running away, their families were punished instead. To legally leave a socialist country, e.g. on a vacation, you had to leave hostages at home. It happened when I was a child; it happened in the place where I still live. The second most frequent cause of death on the borders of socialist countries was allegedly the suicides of soldiers who could no longer bear the moral burden of having to kill all those innocent people.

So, according to your arguments, what exactly is it that I am supposed to do about it? How exactly am I supposed to react to you? From my point of view, you are a blind and evil person. Should I scream "Freedom!", try to accuse you of random bad things, say you should be banned from LW, say that LW is a horrible website if it does not ban you immediately? Should I even use lies to support my case, because the most important thing is to win, and to destroy all those murderous socialism-sympathisers? Because otherwise I am dishonoring the memory of the millions who were tortured and murdered in the name of... things you defend, kind of. Is that the right thing to do?

The thing is, I understand this is not how it "feels from inside" to you. Which makes things a lot more complicated. Welcome to the real world, where the good things are not achieved by sorting people into the "good" ones and the "evil" ones, and then attacking the "evil" ones by whatever means available.

Replies from: None
comment by [deleted] · 2014-06-07T18:33:02.462Z · LW(p) · GW(p)

More likely, it would discourage object-level debates about slavery (both for and against), so that Americans from both North and South could debate about something else: rationality, etc.

Notice your confusion. Either your model is false or the data is wrong. You've decided the data (what I told you) was wrong.

But could your model be wrong?

How would such a policy support slavery? Why do I think that? Pretend that I am as intelligent as you and try to determine what would make you believe that.

Should I even use lies to support my case, because the most important thing is to win, and to destroy all those murderous socialism-sympathisers? Because otherwise I am dishonoring the memory of the millions who were tortured and murdered in the name of... things you defend, kind of. Is that the right thing to do?

Yes. You should. You are a rationalist and you should win. Never deceive yourself that losing is appropriate! It is only ever appropriate to win. It is only ever good to win. Losing is never good.

If you find this too complicated, think about it in the simplest possible terms. The truth is the truth and to win is to win.

If you truly oppose me to the same extent Arthur Chu opposes casual racism on the Internet and I oppose the concept of capitalism, you should do whatever you can to win. If you, in your full art as a rationalist, decide that is the path to winning, you must take that path.

But I don't think you do have that level of commitment in you. There's a very large difference between identifying a social suboptimality and truly having something to protect. And I think that even in the face of all the things you said, all the very true and very real horrors of Marxism, you could not even summon the internal strength to protect yourself against that.

This is a sort of resolve that Less Wrong does not teach. It's only found in true adversity, in situations where you have something to protect and you must fight to protect it.

I do not think you have fought that fight. Very few people have.

Yudkowsky says the Art cannot be for itself alone, or it will lapse into a wastefulness. This is what has happened to Less Wrong.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-06-07T18:43:09.582Z · LW(p) · GW(p)

You provided data of your imagination, I provided data of mine... there is no way to determine the outcome experimentally... even if we asked Eliezer, he couldn't know for sure what exactly Eliezer1820 would do... is there a meaningful way to settle this? I don't see any.

Replies from: None
comment by [deleted] · 2014-06-07T18:58:03.952Z · LW(p) · GW(p)

I'm sorry, are you aware of the reasons why I think what I do? Have you thought about this for even one minute?

If you're truly incapable of reconstructing that then maybe there isn't anything we can do. But I don't believe you're incapable.

I think the scenario you describe is exactly what would happen with 1820's LW. I also think that provides material support for slavery. I also think that when slavery was brought up, probably it would be similarly treated to discussions of racism now.

Informed by that, think about it for five minutes, and PM me your answer. We can go from there.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-06-07T22:09:32.885Z · LW(p) · GW(p)

I'm sorry, are you aware of the reasons why I think what I do?

Yes, you're part of a movement that believes lying is justified for its cause and, as a result, has started to believe its own lies.

comment by Lumifer · 2014-06-07T00:22:23.181Z · LW(p) · GW(p)

There is a saying in the anarchist community, dating back to the Spanish Civil War: No Parasan. It means "No platform" and maps to "no platform for fascists."

/facepalm

The saying that goes back to the Spanish Civil War is No Pasaran, which means "They shall not pass". It was used by Dolores Ibarruri in her famous speech and it is still popular in some anti-fascist circles. See e.g. here.

Holy shit yes! If you have anything to protect use all of your available strength to protect it! Shut up and multiply, think for at least five minutes about the problem, apply every ounce of your technique and then win. I truly and sincerely hope that every last person working for MIRI would kill and die to bring about friendly AI.

So, did you get your gun and bullets yet? How goes your list of people who will be the first against the wall?

Replies from: None
comment by [deleted] · 2014-06-07T01:07:59.056Z · LW(p) · GW(p)

72c9d439eaa864ff4f68583cfa6e80f0ee5e60b66596cad18d6e9eb0dbfd6f0aa9e33cc92629e55b5fd5dfa7eeeabbbde95ee383df3175a69ee701d9a45c0117

Replies from: Lumifer
comment by Lumifer · 2014-06-07T01:13:40.616Z · LW(p) · GW(p)

And what am I supposed to do with these 64 bytes?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-06-07T08:12:27.614Z · LW(p) · GW(p)

Verify his authorship of a posthumously published rant after he goes Kaczynski?

Replies from: Lumifer
comment by Lumifer · 2014-06-08T01:11:27.469Z · LW(p) · GW(p)

The list of enemies? X-D 512 bits is too short for a reasonable public key and too long for a symmetric key. Just right for a standard hash, though. Hashes don't verify authorship, of course...
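For the puzzled: 64 bytes is exactly a SHA-512 digest, which reads like a hash commitment, i.e. publish the digest now, reveal the message later. A minimal sketch assuming that reading (a real commitment would add a random salt so a short, guessable message can't be brute-forced; and, as noted, it proves knowledge of the message, not authorship):

```python
import hashlib, os

def commit(message: bytes, salt: bytes) -> str:
    # Publish this digest now; it reveals nothing about the message.
    return hashlib.sha512(salt + message).hexdigest()

def verify(message: bytes, salt: bytes, digest: str) -> bool:
    # Later, reveal message and salt; anyone can recompute and compare.
    return hashlib.sha512(salt + message).hexdigest() == digest

salt = os.urandom(16)
d = commit(b"my posthumously published rant", salt)
print(len(d) // 2)                                          # 64 bytes, as above
print(verify(b"my posthumously published rant", salt, d))   # True
```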

comment by [deleted] · 2014-06-07T09:49:11.114Z · LW(p) · GW(p)

If everyone lies for their preferred cause, those who see through the lies trust no one, and those who don't see through them act on false information.

If everyone believes enemies of their preferred cause should be driven out of society, as many societies as causes arise, and none can so much as trade with another.

If everyone believes their opponents must be purged, everyone purges everyone else.

If everyone decides they must win by the sword, the Hobbesian state of nature results.

(Oh, hell, first I realize Kant was not an idiot, and now I realize Hobbes was not an idiot. Of course the state of nature is ahistorical -- that's part of the point!)

Breaking down the Schelling fence around the social norms built up over the state of nature is an effective way to gain power, but once you gain power, you have to make sure that the fence is restored -- and that's hard to do. It's easier to destroy than to build. You can't weigh winning by the sword against the status quo; you have to weigh one action with p probability of winning hard enough to restore the fence (and q probability of having your burning of the accumulated arrangements / store of knowledge be net-beneficial for whatever definition of 'net-beneficial' you're using, vs. 1-q probability of having them not be) and 1-p probability of just breaking the fence.
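A rough formalization of that weighing, with made-up shorthand: S for the value of the status quo, B and C for the restored-fence outcomes where the burning was and wasn't net-beneficial, and D for a permanently broken fence. On this reading, breaking the fence is only worth it when

```latex
\mathbb{E}[\text{break}] = p\,\bigl(qB + (1-q)\,C\bigr) + (1-p)\,D \;>\; S
```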

In reality, of course, fence-wreckers understand that the opposite side wants to preserve the fence, and use that to their advantage. (The rain it raineth on the just / And also on the unjust fella / But chiefly on the just, because / The unjust hath the just's umbrella.) Alinsky understood this: you appeal to moral principles when you're out of power, but if you get power, you crush the people who appeal to moral principles -- even the ones you espoused before you got power. If you have enough power to crush your opponents, you have enough power to crush your opponents -- but this represents... not quite a burning of capital, but a sick sort of investment which may or may not pay off. You may be able to crush your opponents, but it doesn't necessarily follow that your opponents aren't able to crush you. And if you crush your opponents, you can't use them, their skills, knowledge, etc.

This is the part where I attempt to avoid performing an amygdala hijack by using the phrase 'amygdala hijack', and reference Atlas Shrugged: the moochers crush the capitalists, so the capitalists leave, and the moochers don't have access to the benefits of their talents anymore so their society falls apart. It's not a perfect analogy -- it's been a while since I read it, but I don't think the moochers saw themselves as aligned against the capitalists. But it's close enough; if it helps, imagine they were Communists.

There ought to be a term for the difference between considering an action in and of itself and considering an action along with its game-theoretic effects, potential slippery slopes, and so on. Perhaps there already is. There also ought to be a term for the seemingly-irrational-and-actually-irrational-in-the-context-of-considering-an-action-in-and-of-itself cooperation-norms that you're so strongly arguing for defecting from.

Replies from: None
comment by [deleted] · 2014-06-07T18:47:52.594Z · LW(p) · GW(p)

There ought to be a term for the difference between considering an action in and of itself and considering an action along with its game-theoretic effects, potential slippery slopes, and so on. Perhaps there already is. There also ought to be a term for the seemingly-irrational-and-actually-irrational-in-the-context-of-considering-an-action-in-and-of-itself cooperation-norms that you're so strongly arguing for defecting from.

In the consequentialist ethics family, there's act consequentialism, rule consequentialism, and a concept whose name I cannot recall (linked here, or possibly written here long ago) that I will call winning consequentialism. It dictates that you consider every action according to every possible consequentialism and pick the one with the best consequences.

I think it was called plus-consequentialism in the post, or maybe n-consequentialism, but it seems to capture this.

But your failure lies in assuming that winning consequentialism will always result in this sort of clean outcome. Less Wrong attempts to change the world not by the sword, or by emotional appeals, not even by base electoralism, but by comments on the Internet. Is it really the case that this is always the winning outcome?

An experiment: Suppose you find yourself engaged in a struggle (any struggle) where you correctly apply winning consequentialism considering all contexts and cooperation norms and find that you should crush your enemy. What do you then do?

Your consequentialism sounds suspiciously like the opposite and I wonder how deeply you are committed to it.

comment by Eugine_Nier · 2014-06-07T00:56:35.995Z · LW(p) · GW(p)

I think it's more that the things Arthur is responding to are, in fact, very racist.

Care to taboo what you mean by "racist"? In particular, is it "racist" to believe that traits like intelligence correlate with where someone's ancestors came from? Does it matter if there is evidence for the belief in question? Does it matter if the belief is true?

Also, why is "racism" so uniquely awful? If you look at the history of the 20th century, far more people have been killed in the name of egalitarian ideologies (specifically communism) than in the name of ideologies generally considered "racist".

When discussing existing oppressive social structures such as patriarchy, white supremacy, colonialism, and capitalism,

Taboo "oppressive". Judging by how you're calling capitalism oppressive, it appears that improving the living standard of most of the world is "oppression". If so, we could probably use more of it.

In other words, if a fight is important to you, fight nasty. If that means lying, lie.

If you find yourself needing to lie for your cause, what you're effectively admitting is that the truth doesn't support it. You may want to consider updating on that fact when deciding whether you should really be supporting said cause.

Also, as I explain here, Yvain's reason for not lying for your cause is not the best one he could give. The biggest problem is that it will fill your cause with people who believe said lies.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-06-07T09:26:15.290Z · LW(p) · GW(p)

If you find yourself needing to lie for your cause, what you're effectively admitting is that the truth doesn't support it.

Not necessarily. You may be dealing with irrational people who will not be moved by truth. Or the inferential distances can be long, and you only have a very short time to convince people before something irreversible happens -- although in this case, you are creating problems in the long run.

(I generally agree with what you said. This is just an example of how this generalization is also leaky. And of course, because we run on the corrupted hardware, every situation will likely seem to be the one where the generalization does not apply.)

comment by selylindi · 2014-06-09T00:15:27.556Z · LW(p) · GW(p)

In other words, if a fight is important to you, fight nasty. If that means lying, lie. If that means insults, insult. If that means silencing people, silence.

Holy shit yes! If you have anything to protect use all of your available strength to protect it! Shut up and multiply, think for at least five minutes about the problem, apply every ounce of your technique and then win.

Whatever you happen to believe, the winningest answer would be "No, never lie". Because now that you've claimed your political position is likely to be based on lies, I've updated to consider arguments from that position as having zero evidential weight.

I would have thought that The Boy Who Cried Wolf was an adequate explanation in childhood of the selfish reasons to be honest.

Replies from: None
comment by [deleted] · 2014-06-09T06:16:20.784Z · LW(p) · GW(p)

I don't think I have claimed that.

If the Butlerian Jihad is at your door looking for the FAI researchers in your floorboards, you lie and tell them you're a loyal luddite. If you need a little more funding to finish your FAI and you can get it by pretending to be working on the next Snapchat clone to get VC money, lie to your VCs.

Like literally everything in life, lying has risks. But if you, in your art as a rationalist, decide those risks are acceptable, only base dogmatism dictates that you be honest and turn over your friends to be executed or allow humans to continue to die.

The moral of The Boy Who Cried Wolf is not to be honest; it's to not get caught lying.

(As an entire aside, do you really think that any political position, or even any fact anywhere that has touched human minds, is not at least partially based on lies? You might as well update the world to have zero evidential weight and spend all your time on a webforum arguing about ethics instead of going into the real world and effecting your goals.)

comment by shminux · 2014-06-04T21:18:58.567Z · LW(p) · GW(p)

Argumentative cheaters thrive because their arguments aren't properly scrutinized.

This statement does not pass the "fact check".

People have been repeatedly shown to believe what they want to believe for various reasons including status, affiliation, cognitive dissonance, convenience and many others. They happily overlook and downplay the "fallacies, half-lies, evasions" by the home team while emphasizing those of the opponents/enemies.

The factcheck.org site hardly made a dent in the misrepresentations, and is rarely mentioned as an impartial fact checking site (I do not know whether it is one).

A better question to ask is "how to make people care for accuracy and impartiality?" Eliezer's approach was Hanson's OB-inspired "raising the sanity waterline", eventually evolving into CFAR, with limited success so far. Maybe there are other options, who knows.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-06-04T21:49:44.773Z · LW(p) · GW(p)

Argumentative cheaters thrive because their arguments aren't properly scrutinized.

This statement does not pass the "fact check".

Well if scrutiny didn't do any good then why do we have peer review in science? This is a sort of peer review (but hopefully more effective than the standard scientific peer review) on a massive scale.

It's just obvious that rational criticism generally does improve argumentative standards.

Replies from: shminux
comment by shminux · 2014-06-04T22:49:38.578Z · LW(p) · GW(p)

Well if scrutiny didn't do any good then why do we have peer review in science?

a) scientists are slightly better than average at caring about accuracy
b) there is contradictory evidence on whether peer review improves publication quality

It's just obvious that rational criticism generally does improve argumentative standards.

Eh... "obvious" is not a good criterion for either impartiality or accuracy.

comment by gwern · 2014-06-04T19:06:20.474Z · LW(p) · GW(p)
  1. Deductive fallacies are useful inductive arguments. E.g. ad hominems - as gussied up under terms like 'conflict of interest' and 'risk of bias' - are excellent tools for evaluating studies.
  2. factchecking organizations have been, and still are, being tried; and such criticism forms regular columns in newspapers. Have you noticed it helping?
  3. On Bullshit defines bullshit as making claims without caring whether they're true.
  4. Politics is not about policy.
Replies from: Stefan_Schubert, Gunnar_Zarncke
comment by Stefan_Schubert · 2014-06-04T19:19:12.519Z · LW(p) · GW(p)

1) There are different definitions of a fallacy. What I am talking of here are clear cases of argumentative cheating. 2) I do think that factchecking does help, yes. Politicians would have lied much more if they hadn't known that they could be caught out with those lies.

Replies from: gwern
comment by gwern · 2014-06-04T21:12:59.585Z · LW(p) · GW(p)

What I am talking of here are clear cases of argumentative cheating.

Most people would consider ad hominems cheating if it were pointed out to them.

I do think that factchecking does help, yes.

Based on...?

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-06-04T21:32:13.379Z · LW(p) · GW(p)

I do think that factchecking does help, yes.

Based on...?

Media do now and then reveal that politicians have lied on important topics (Watergate, Clinton on Lewinsky, etc). This a) had negative political consequences for the lying politicians and b) arguably made all other politicians less likely to lie (since these incidents taught them what consequences that could have), though this latter point is harder to prove.

See also my comment above.

Replies from: gwern
comment by gwern · 2014-06-04T22:37:01.312Z · LW(p) · GW(p)

So, your justification for the claim that factchecking improves politics is based on 2 anecdotes: a scandal from 40 years ago; and another scandal from 20 years ago which to many epitomizes the irrational & tribal nature of politics in which partisan hacks look for any excuse to attack an enemy no matter how trivial or unrelated to the job of governing the country it is?

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-06-04T22:48:44.487Z · LW(p) · GW(p)

Obviously not. Please apply the principle of charity. These are some salient examples. Of course there are others.

You're a smart guy. I can't understand why you're being so nit-picky. It's not helpful.

Replies from: shminux, gwern, None
comment by shminux · 2014-06-04T23:12:43.495Z · LW(p) · GW(p)

gwern might be a smart guy, but he is below average at charitably interpreting opposing arguments; at least this is my impression based on my interaction with him here and on IRC. It's not an uncommon failing; Eliezer comes across as uncharitable as well, especially when dealing with those perceived as lower status (he was very very charitable to Karnofsky).

Of course, the impression of uncharitabilty (uncharitableness? is it even a word?) is often given off when the person is a few levels above you and goes through the most charitable interpretations of your argument in their head quickly, realizes that they are all wrong, as well, and rejects the argument without explicitly discussing why the charitable versions are no better than the uncharitable ones. I don't know how to tell the difference.

comment by gwern · 2014-06-04T23:40:15.605Z · LW(p) · GW(p)

Obviously not. Please apply the principle of charity. These are some salient examples. Of course there are others.

Of course there are others, but I am not interested in arguing by anecdote especially when the anecdotes don't seem to support your thesis. (Seriously, of all the scandals you had to pick the Lewinsky scandal?) What exactly am I supposed to be applying charity to here? Do you have any systematic, concrete, empirical data that supports your claim that factchecking improves politics?

comment by [deleted] · 2014-06-07T09:51:31.421Z · LW(p) · GW(p)

How are either of the two examples of something being improved?

comment by Gunnar_Zarncke · 2014-06-04T21:14:33.363Z · LW(p) · GW(p)

Have you noticed it helping?

Maybe these were not well organized enough or didn't reach a critical mass.

There are related organizations like VroniPlag (explained on Wikipedia) which did have a very notable effect - at least in Germany. These are specialized in pointing out very grave errors in doctoral theses - esp. plagiarism - and so can and do have significant consequences for the subject under scrutiny.

I think if you could reach a significant mass this could work.

Replies from: gwern
comment by gwern · 2014-06-04T22:38:13.669Z · LW(p) · GW(p)

Maybe these were not well organized enough or didn't reach a critical mass.

How were they not well-organized? Why do you think this sort of phenomenon has any sort of 'critical mass' effect to it? And why would any future effort not be doomed to fail to reach the critical mass just like all the past ones obviously?

These are specialized in pointing out very grave errors in doctoral theses - esp. plagiarism - and so can and do have significant consequences for the subject under scrutiny.

If that's the best you can point to, that does not fill me with hope. When are political questions ever as clear as copy-paste plagiarization? That is not a success story, that's something that fills me with horror - things are even worse than I thought:

Most of these revocations have held up in court. However, some universities disagreed with VroniPlag finding, even in cases of blatant plagiarism (between 40 and 70% of pages affected with plagiarism). The correct methods for dealing with plagiarism – and its prevention – remains an ongoing discussion in Germany.

And you hope factchecking can make a difference in real politics?!

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-06-05T06:47:27.507Z · LW(p) · GW(p)

Well. Politics is the mind-killer. Surely such a fact-checking site would be prone to all the hacks politics can muster to "limit" its effect. Wikipedia and VroniPlag are good (really: illustrative) examples of this.

Whether I have "hope"? My post wasn't about hope but intended to point out structures with 'critical mass' that did have an effect. One can learn from that: how to build on these, tweak their logic to maybe achieve a better result.

A critical mass is in my opinion always needed to have any noticeable effect, because local uncoordinated efforts are dealt with by the self-stabilizing effects of the existing norms (political powers can use e.g. regression toward the mean, coordinated salami tactics, fogging and noise).

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-06-05T06:51:28.330Z · LW(p) · GW(p)

Politics is the mind-killer. Surely such a fact-checking site would be prone to all the hacks politics can muster to "limit" its effect.

Not to mention the fact-checkers themselves are subject to being mind-killed.

comment by Desrtopa · 2014-06-11T01:59:14.710Z · LW(p) · GW(p)

This is kind of a tangent to the subject, but seeing someone bring up Critical Rationalism on Less Wrong still brings up some pretty massive negative associations for me. By far the majority of all the mentions of Critical Rationalism on this site, due to monomania and prolific rate of posting, come from the author of the most downvoted post ever to appear on Main, and possibly the most disruptive user ever to frequent the site.

comment by ChristianKl · 2014-06-04T19:52:39.321Z · LW(p) · GW(p)

I'm thinking that the website should be strongly devoted to neutrality or objectivity, as is Wikipedia.

In Wikipedia, part of being objective means accurately reporting what's generally known about a topic and not engaging in original research. I think you will have a hard time judging fallacies without people engaging in something like original research. Fox News has its own brand of "fair & balanced", which isn't exactly the same thing that most people think of when they hear the phrase.

What kind of objectivity do you want for your website?

One person's status quo bias is another person's Chesterton's fence.

comment by James_Miller · 2014-06-04T19:55:38.643Z · LW(p) · GW(p)

In a better coordinated world in which more people cared about truth there would exist certification organizations that, for a fee, would read an article and if the article met certain standards would issue an "honest argument" certification that could be displayed on the article. Having such a certification would, ideally, attract more viewers giving the author more advertising revenue which, also ideally, would more than pay for the cost of certification.

Replies from: Nornagest, Gunnar_Zarncke, Stefan_Schubert
comment by Nornagest · 2014-06-04T21:33:16.752Z · LW(p) · GW(p)

PolitiFact is this for statements by US politicians and political pundits. I like it a lot, and it seems to have gotten the Pulitzer committee's attention, but I don't know how widely it's used.

comment by Gunnar_Zarncke · 2014-06-04T21:16:53.607Z · LW(p) · GW(p)

The contributors to other organizations like Wikipedia (which at least has the NPOV policy) and VroniPlag, and basically any public medium which has a voting facility, are contributing for nothing other than the feeling of a) doing some good or b) correcting some wrong.

comment by Stefan_Schubert · 2014-06-04T20:02:45.015Z · LW(p) · GW(p)

I've had the same idea. Such certification organizations could also certify e.g. ads. This could potentially bring in lots of profits if the certification organization had a sufficiently good reputation, since companies have the money to pay for such certificates. (Of course, it would be important that the certification organization weren't more lenient on companies which paid more, since that would ruin its reputation.)

comment by Slider · 2014-06-06T09:49:35.936Z · LW(p) · GW(p)

Monolithic vs subjective: As pointed out, it's hard to gather everyone's input into a single result. Rather than have a single fallacy / not-fallacy rating, have each user be able to express (and own) whether a statement is fallacious. In the usual case this would give results like "95.4% of people think this is a false dichotomy". However, there is valuable information in cross-correlating which arguments pass which evaluator. You could have functionality to "ignore all evaluators that think this is a fair argument". People could also build public profiles as quality evaluators. There is a problem/feature in that the standard of an evaluator need not be rigour. You could for example have a high-profile evaluator for each major political leaning. Or you could aggregate the information by cross-referencing proclaimed political identity, i.e. "65% of self-identified democrats think this argument is fair".
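A minimal sketch of that cross-tabulation (the data and field layout are invented for illustration):

```python
from collections import Counter

# (evaluator, self-identified leaning, verdict on one argument)
verdicts = [
    ("alice", "democrat",   "false dichotomy"),
    ("bob",   "democrat",   "fair"),
    ("carol", "republican", "fair"),
    ("dave",  "democrat",   "false dichotomy"),
]

def breakdown(verdicts, leaning):
    """Verdict shares among evaluators with a given self-identified leaning."""
    group = [v for _, l, v in verdicts if l == leaning]
    return {v: n / len(group) for v, n in Counter(group).items()}

print(breakdown(verdicts, "democrat"))
# roughly {'false dichotomy': 0.67, 'fair': 0.33}, i.e. "67% of
# self-identified democrats think this is a false dichotomy"
```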

Applicability vs context: Being able to target already-produced texts means there would be wide applicability. However, I am a little concerned about selection effects in what makes it a "thing to scrutinize". This kind of thing would be effective for small isolated arguments. However, politicians who fit their arguments to the situation they are presented in could be misrepresented by being judged outside of that speech situation. Maybe they know that there are better / more valid arguments for their position but choose to utter those they know their audience can relate to. Bringing those arguments under close scrutiny would be to partly miss the point. I guess part of the idea would be to apply pressure to always use arguments that could pass harsher standards? However, I can see many downsides to that. I would rather have all the arguments to be processed be explicitly (re)created in the context of the website. Then it would be clear that everybody involved respects the clean-play attitude and that the arguments are meant to be elaborate and precise. This could mean that only the core and essential points would be covered. That is, it would not be a witch hunt to harass other media but an internal matter.

Explicitness vs summary score: I would have each argument input in a special language/notation that forces every argument to be explicit and computer-readable. The arguments would not be prose but collections and networks of semantic tokens. This would provide human-language independence: French and English users would render the tokens in their own languages but would be manipulating the same exact ones, so when one makes a claim in French it would be accessible to the English user too. With the guarantee of computer-readableness you could do things like compare the axioms of two users and point out where they contradict; at such a point a discussion is possible. You could then track how often those discussions shifted opinions and which arguments were effective with which populations / belief bases. (This could easily be turned into a tool for anti-knowledge, testing which manipulations work best.) If such a reduction is not done, the meaning of any end result will be a bit nebulous: its meaning would depend on the process by which it is produced, and it would mask the approval of a group in the guise of numeric, inarguable data. If there were a clear vision of what "clean play" consists of, this could be useful, but I doubt there is a single axis so critically important to track. I would rather have metrics that tell you something but don't give a conclusion than reach a conclusion when I am not sure what it tells me.
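A toy version of the token-network idea (the token names and the negation convention are my own inventions; this is nowhere near a worked-out notation):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Claim:
    # Language-independent semantic tokens; each user's client renders
    # them in that user's own language.
    tokens: tuple

def negation(c: Claim) -> Claim:
    return Claim(("NOT",) + c.tokens)

@dataclass
class User:
    name: str
    axioms: set = field(default_factory=set)

def contradictions(a: User, b: User) -> set:
    """Claims one user asserts whose explicit negation the other asserts;
    the point where, as described above, a discussion becomes possible."""
    return ({c for c in a.axioms if negation(c) in b.axioms}
            | {c for c in b.axioms if negation(c) in a.axioms})

u1 = User("fr", {Claim(("TAX", "RAISE", "GOOD"))})          # rendered in French
u2 = User("en", {Claim(("NOT", "TAX", "RAISE", "GOOD"))})   # rendered in English
print(contradictions(u1, u2))  # the one claim they disagree on
```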

comment by Eugine_Nier · 2014-06-05T03:30:39.179Z · LW(p) · GW(p)

I'm thinking that the website should be strongly devoted to neutrality or objectivity, as is Wikipedia.

The problem is that wikipedia isn't that good at finding the truth about controversial topics.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-06-05T14:26:53.826Z · LW(p) · GW(p)

The problem is that wikipedia isn't that good at finding the truth about controversial topics.

What is?

comment by NancyLebovitz · 2014-06-04T21:35:14.797Z · LW(p) · GW(p)

Such a website would probably be a good idea -- you don't need to have everyone using it, you just need a good-sized audience. Snopes and PolitiFact are doing alright.

To my mind, the interesting question is choosing a manageable task that will make it easy for the site to grow.

comment by Stefan_Schubert · 2014-06-04T19:59:15.266Z · LW(p) · GW(p)

This idea really isn't that original. Philosophers, journalists, and others have done this for quite some time, and this has, I'm confident, had significant positive effects on the public debate. Religious people were forced to improve their arguments as a result of philosophical criticism from Hume and others, for instance. Also, even if you might think that the political debate is bad, imagine how bad it would have been if there were no journalists reporting relatively objectively on politics.

My suggestion is thus just to make existing efforts at objective reporting and criticism more systematic and comprehensive: it's not a qualitative leap, but just a quantitative shift. I don't see why that would be impossible in principle (though it may be hard in practice) or why that would not have any effect, given how much our present institutions for objective criticism actually have achieved (in my view).

comment by Spenser Roberts (spenser-roberts) · 2018-04-27T12:27:35.905Z · LW(p) · GW(p)

I have a very good friend who has taught collegiate-level debate for forty years. Just before he retired, he did an experiment where he and his students would actually do what you are proposing here and point out, list, highlight and rebut the various forms of argumentative cheating on Facebook and Twitter and see what happened. The result? Of the thirty students in one class, seventeen were banned from groups and had their friends lists drop to below 50 people. The remaining students found that their followers dropped, that they received fewer overall views, and that they became targets of more and more abusive comments, with threads that would begin as discussions and quickly descend to that all-time internet favorite, the argumentum ad hominem. That pattern held good across eight classes and 240 students (give or take). I grant that one professor's off-the-cuff experiment does not equal solid research, but it is perhaps a bit disheartening.

And that leaves out both cognitive dissonance and the backfire effect. No one wants to be proven wrong, and many will fight for their worldviews and the facts that fit them, even facts that are not facts.

And given that people also frequently admit to not using the (biased but) existing research tools that are already out there, why would they bother to search for and use a research tool designed for something like this? I can see such a site rapidly being used as a weapon by one side of an argument and being shouted down as "fake news" by the other.

Also, just as an aside, I am a student at Duke University School of Medicine and we are absolutely FORBIDDEN to use Wikipedia or any other crowd-sourced knowledge base as a source in our own research. As one of the administrators pointed out, at least eleven of the professorial staff and hundreds of the students contributed to Wikipedia at any given time, and it was a common hobby to go rewrite a rival's work.

comment by bramflakes · 2014-06-05T09:32:15.681Z · LW(p) · GW(p)

There have been many proposals like this before. My favorite idea (which I cannot recall the name of right now) was a browser plugin that would overlay annotations onto arbitrary webpages. People could make it highlight certain questionable bits of text, link to opposing viewpoints or data, and discuss with each other whether the thing was accurate. Imagine a wiki talk page, but for every conceivable site.
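A minimal sketch of the annotation store such a plugin might talk to, loosely in the spirit of the W3C Web Annotation model's text-quote selectors (all names and fields here are illustrative, not any real plugin's API):

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Annotation:
    url: str       # page being annotated
    exact: str     # the questionable bit of text to highlight
    comment: str   # rebuttal, opposing viewpoint, or link to data
    author: str

class AnnotationStore:
    def __init__(self):
        self._by_url = defaultdict(list)

    def add(self, ann: Annotation) -> None:
        self._by_url[ann.url].append(ann)

    def for_page(self, url: str) -> list:
        # What the plugin would fetch on page load, then anchor by
        # searching the DOM for each annotation's `exact` text.
        return self._by_url[url]

store = AnnotationStore()
store.add(Annotation("https://example.com/op-ed", "crime has doubled",
                     "the cited statistics show the opposite trend", "alice"))
print(len(store.for_page("https://example.com/op-ed")))  # 1
```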

Replies from: David_Gerard, None, pcm, Lumifer
comment by David_Gerard · 2014-06-05T10:21:07.268Z · LW(p) · GW(p)

There have been many of these indeed. rbutr is the latest one.

comment by [deleted] · 2014-06-07T09:54:16.571Z · LW(p) · GW(p)

There have been several of these. Thiblo is one; it no longer exists. I remember using another one once, but I can't remember what it was now.

There is also that thing that allows for arbitrary pictures to be drawn over arbitrary webpages. I can't remember what it was called either but it was mostly used for low-quality Homestuck porn.

comment by pcm · 2014-06-06T14:09:15.107Z · LW(p) · GW(p)

CritLink enabled this before the age of browser plugins.

What would motivate someone to use it when few others were using it?

comment by Lumifer · 2014-06-05T14:51:54.127Z · LW(p) · GW(p)

a browser plugin that would overlay annotations onto arbitrary webpages. People could make it highlight certain questionable bits of text, link to opposing viewpoints or data, and discuss with each other whether the thing was accurate.

And how would it deal with spam?

I doubt that a browser plugin which festoons each page with giant INCREASE YOUR MANHOOD "annotations" is going to be popular.

Replies from: bramflakes
comment by bramflakes · 2014-06-05T16:52:50.193Z · LW(p) · GW(p)

I dunno. CAPTCHAs, plus community policing? I don't remember whether it got off the ground, so for all I know that might have killed it anyway.

comment by RomeoStevens · 2014-06-04T20:48:09.474Z · LW(p) · GW(p)

Pointing out fallacies in others is generally less useful than finding them in yourself. When you see someone committing a fallacy ask if you've made any similar errors.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-06-04T21:06:30.327Z · LW(p) · GW(p)

Yes, but that doesn't help in the given scenario.

comment by NancyLebovitz · 2014-06-04T18:55:17.552Z · LW(p) · GW(p)

The picture covers some text.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-06-04T18:59:27.993Z · LW(p) · GW(p)

Thanks. Fixed.