Only You Can Prevent Your Mind From Getting Killed By Politics

post by ChrisHallquist · 2013-10-26T13:59:59.544Z · LW · GW · Legacy · 144 comments

Follow-up to: "Politics is the mind-killer" is the mind-killerTrusting Expert Consensus

Gratuitous political digs are to be avoided. Indeed, I edited my post on voting to keep it from sounding any more partisan than necessary. But the fact that writers shouldn't gratuitously mind-kill their readers doesn't mean that, when they do, the readers' reaction is rational. The rules for readers are different from the rules for writers. And it especially doesn't mean that when a writer talks about a "political" topic for a reason, readers can use "politics!" as an excuse for attacking a statement of fact that makes them uncomfortable.

Imagine an alternate history where Blue and Green remain important political identities into the early stages of the space age. Blues, for complicated ideological reasons, tend to support trying to put human beings on the moon, while Greens, for complicated ideological reasons, tend to oppose it. But in addition to the ideological reasons, it has become popular for Greens to oppose attempting a moonshot on the grounds that the moon is made of cheese, and any landing vehicle put on the moon would sink into the cheese.

Suppose you're a Green, but you know perfectly well that the claim the moon is made of cheese is ridiculous. You tell yourself that you needn't be too embarrassed by your fellow Greens on this point. On the whole, the Green ideology is vastly superior to the Blue ideology, and furthermore some Blues have begun arguing we should go to the moon because the moon is made of gold and we could get rich mining the gold. That's just as ridiculous as the assertion that the moon is made of cheese.

Now imagine that one day, you're talking with someone who you strongly suspect is a Blue, and they remark on how irrational it is for so many people to believe the moon is made of cheese. When you hear that, you may be inclined to get defensive. Politics is the mind-killer, arguments are soldiers, so the point about the irrationality of the cheese-mooners may suddenly sound like a soldier for the other side that must be defeated.

Except... you know the claim that the moon is made of cheese is ridiculous. So let me suggest that, in that moment, it's your duty as a rationalist to not chastise them for making such a "politically charged" remark, and not demand they refrain from saying such things unless they make it perfectly clear they're not attacking all Greens or saying it's irrational to oppose a moon shot, or anything like that.

Quoth Eliezer:

Robin Hanson recently proposed stores where banned products could be sold.  There are a number of excellent arguments for such a policy—an inherent right of individual liberty, the career incentive of bureaucrats to prohibit everything, legislators being just as biased as individuals.  But even so (I replied), some poor, honest, not overwhelmingly educated mother of 5 children is going to go into these stores and buy a "Dr. Snakeoil's Sulfuric Acid Drink" for her arthritis and die, leaving her orphans to weep on national television.

I was just making a simple factual observation.  Why did some people think it was an argument in favor of regulation?

Just as commenters shouldn't have assumed Eliezer's factual observation was an argument in favor of regulation, you shouldn't assume the suspected Blue's observation is a pro-moon shot or anti-Green argument.

The above parable was inspired by some of the discussion of global warming I've seen on LessWrong. According to the 2012 LessWrong readership survey, the mean confidence of LessWrong readers in human-caused global warming is 79%, and the median confidence is 90%. That's more or less in line with the current scientific consensus.

Yet references to anthropogenic global warming (AGW) in posts on LessWrong often elicit negative reactions. For example, last year Stuart Armstrong wrote a post titled, "Global warming is a better test of irrationality than theism." His thesis was non-obvious, yet on reflection, I think, probably correct. AGW-denialism is a closer analog to creationism than theism is. As bad as theism is, it isn't a rejection of a generally accepted (among scientists) scientific claim with a lot of evidence behind it just because the claim clashes with your ideology. Creationism and AGW-denialism do fall under that category, though.

Stuart's post was massively downvoted; it's currently at -2, but at one point I think it went as low as -7. Why? Judging from the comments, not because people were saying, "yeah, global warming denialism is irrational, but it's not clear it's worse than theism." Here's the most-upvoted comment (currently at +44), which was also cited as "best reaction I've seen to discussion of global warming anywhere" in the comment thread on my post Trusting Expert Consensus:

Here's the main thing that bothers me about this debate. There's a set of many different questions involving the degree of past and current warming, the degree to which such warming should be attributed to humans, the degree to which future emissions would cause more warming, the degree to which future emissions will happen given different assumptions, what good and bad effects future warming can be expected to have at different times and given what assumptions (specifically, what probability we should assign to catastrophic and even existential-risk damage), what policies will mitigate the problem how much and at what cost, how important the problem is relative to other problems, what ethical theory to use when deciding whether a policy is good or bad, and how much trust we should put in different aspects of the process that produced the standard answers to these questions and alternatives to the standard answers. These are questions that empirical evidence, theory, and scientific authority bear on to different degrees, and a LessWronger ought to separate them out as a matter of habit, and yet even here some vague combination of all these questions tends to get mashed together into a vague question of whether to believe "the global warming consensus" or "the pro-global warming side", to the point where when Stuart says some class of people is more irrational than theists, I have no idea if he's talking about me. If the original post had said something like, "everyone whose median estimate of climate sensitivity to doubled CO2 is lower than 2 degrees Celsius is more irrational than theists", I might still complain about it falling afoul of anti-politics norms, but at least it would help create the impression that the debate was about ideas rather than tribes.

If you read Stuart's original post, it's clear this comment is reading ambiguity into the post where none exists. You could argue that Stuart was a little careless in switching between talking about AGW and global warming simpliciter, but I think his meaning is clear: he thinks rejection of AGW is irrational, which entails that he thinks the stronger "no warming for any reason" claim is irrational. And there's no justification whatsoever for suggesting Stuart's post could be read as saying, "if your estimate of future warming is only 50% of the estimate I prefer you're irrational"—or as taking a position on ethical theories, for that matter. 

What's going on here? Well, the LessWrong readership is mostly on board with the scientific view of global warming. But many identify as libertarians, and they're aware that in the US many other conservatives/libertarians reject that scientific consensus (and no, that's not just a stereotype). So hearing someone say AGW denialism is irrational is really uncomfortable for them, even if they agree. This leaves them wanting some kind of excuse to complain; one guy thinks of "this is ambiguous and too political" as that excuse, and a bunch of people upvote it.

(If you still don't find any of this odd, think of the "skeptic" groups that freely mock ufologists or psychics or whatever, but which are reluctant to say anything bad about religion, even though in truth the group is dominated by atheists. Far from a perfect parallel, but it's still worth thinking about.)

When the title for this post popped into my head, I had to stop and ask myself if it was actually true, or just a funny Smokey the Bear reference. But in an important sense it is: the broader society isn't going to stop spontaneously labeling various straightforward empirical questions as Blue or Green issues. If you want to stop your mind from getting killed by whatever issues other people have decided are political, the only way is to control how you react to that.

144 comments

Comments sorted by top scores.

comment by Ishaan · 2013-10-27T20:20:17.517Z · LW(p) · GW(p)

Now imagine that one day, you're talking with someone who you strongly suspect is a Blue, and they remark on how irrational it is for so many people to believe the moon is made of cheese.

I'm a big fan of "Agree Denotationally But Object Connotationally" when this is the case.

Or, when talking to your fellow Greens about the moon, you would "agree connotationally but object denotationally". I find that for me this is actually even more common than the reverse.

think of the "skeptic" groups that freely mock ufologists or psychics or whatever, but which are reluctant to say anything bad about religion, even though in truth the group is dominated by atheists.

Okay, let's run with that example. If someone says something like "Theists are stupid"...I agree denotatively in that I think theism is foolish and I'm aware that holding theistic beliefs is negatively correlated with intelligence. I disagree connotationally with the disdain and patronizing attitude which is implicit in the statement, and I dislike the motivations which the person probably had for making it. If the same person had said "religiosity is negatively correlated with intelligence", then I would have no objections; it's the exact same information, but the tone indicates that they are simply stating a fact. For particularly charged topics, explicit disclaimers voiding the connotations which normally occur are helpful.

I'm not sure it's practical, as a reader, to read writing and extract purely the denotative information, simply because of the sheer volume of useful information which is embedded within the connotations. If language is about communicating mental states and inferring the mental states of others, you can't communicate nearly as effectively if you toss out connotation.

TL;DR for Yvain's post: "Your statement is technically true, but I disagree with the connotations. If you state them explicitly, I will explain why I think they are wrong"

comment by Jack · 2013-10-26T22:13:14.710Z · LW(p) · GW(p)

The whole idea of having a belief as a litmus test for rationality seems totally backward. The whole point is how you change your beliefs in response to new evidence.

Meanwhile, if a lot of people have a belief that isn't true it is almost necessarily politically salient. The existence of God isn't an issue that is debated in the halls of government, but it is still hugely about group identity, which means that people can get mind-killed about it. The only reason it works as any kind of litmus test is that everyone here is/was already a part of the same group when it comes to theism.

I think the true objection to Stuart's post was less about climate change and more about branding Less Wrong with an issue that has ideological salience. And that seems totally fair to me. If you have a one-issue litmus test it's sort of weird to make it one that isn't specific enough to screen out even the most irrational liberals. At the very least add a sub-test asking if a person thinks carbon emissions are responsible for the Hurricane Sandy disaster, their confidence that climate change causes more hurricanes, and what (if any) existential risk they assign to it. Catch the folks who think the moon is made out of gold in the filter.

Replies from: hyporational, ChrisHallquist, Watercressed, Sengachi
comment by hyporational · 2013-10-27T07:27:02.991Z · LW(p) · GW(p)

The whole idea of having a belief as a litmus test for rationality seems totally backward. The whole point is how you change your beliefs in response to new evidence.

I think this is a very uncharitable interpretation of what the post in question is trying to say. First, the post isn't proposing a litmus test, but a test that is better than theism in identifying irrationality. Second, how would you know if someone changes their beliefs in response to new evidence without assessing their beliefs in relation to shared evidence? There's no way Stuart was stupid enough to think evidence shouldn't be shared for this to work.

ETA: I'm not a native speaker, and I'm not sure how people use the word litmus test anymore.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-27T17:34:31.915Z · LW(p) · GW(p)

"Litmus test" in common U.S. usage means a quick and treated-as-reliable proxy indicator for whether a system is in a given state. To treat X as a litmus test for rationality, for example, is to be very confident that a system is rational if the system demonstrates X, and (to a lesser extent) to be very confident that a system is irrational if the system fails to demonstrate X.

Replies from: Jack, hyporational
comment by Jack · 2013-10-28T07:26:11.903Z · LW(p) · GW(p)

This is how I meant it.

comment by hyporational · 2013-10-28T01:56:49.163Z · LW(p) · GW(p)

That's what I thought first too, but it seems to also have a political meaning.

treated-as-reliable

You mean the test can be completely unreliable, like many political litmus tests probably are?

Replies from: TheOtherDave, Nornagest
comment by TheOtherDave · 2013-10-28T02:25:14.286Z · LW(p) · GW(p)

Yes, I do mean that.

Replies from: hyporational
comment by hyporational · 2013-10-28T02:32:13.483Z · LW(p) · GW(p)

What a sadly disfigured figure of speech. Chemists would disapprove :(

I wonder if there are many more like it.

comment by Nornagest · 2013-10-28T01:59:56.603Z · LW(p) · GW(p)

That's pretty much the same meaning; just read "person or policy" for "system", and "ideologically acceptable" for "in a given state".

Replies from: hyporational
comment by hyporational · 2013-10-28T02:08:52.423Z · LW(p) · GW(p)

The main difference is the test doesn't have to be any good.

comment by ChrisHallquist · 2013-10-28T04:34:09.568Z · LW(p) · GW(p)

I kind of want to respond "what hyporational said," but let me see if I can say it more clearly:

  • Yes the point of rationality is how you change your beliefs in response to new evidence, but some beliefs are evidence that the person who holds the belief isn't doing a very good job of that.
  • Admittedly, any single belief is just one bit of information about a person's rationality, and maybe Stuart should have acknowledged that. But it still makes sense to talk about which bits are more informative.
  • I doubt Stuart meant to suggest AGW should be "the" litmus test for LessWrong, or a central part of LessWrong's branding, or anything like that. Again, the question is just which bit is more informative.
Replies from: Jack
comment by Jack · 2013-10-28T08:17:14.712Z · LW(p) · GW(p)

Yes the point of rationality is how you change your beliefs in response to new evidence, but some beliefs are evidence that the person who holds the belief isn't doing a very good job of that.

That's certainly true. I just think you can get a lot more information much faster directly examining how someone's beliefs change in response to new evidence.

Admittedly, any single belief is just one bit of information about a person's rationality, and maybe Stuart should have acknowledged that. But it still makes sense to talk about which bits are more informative.

Well, it's definitely not the bit that isn't specific enough to provide (much) information about the vast number of people in the world who believe in climate change because it is a tribal signifier. The existence of God is pretty unique in being both insanely improbable and widely believed. Incidentally, Stuart's post doesn't actually argue otherwise. His argument actually doesn't even fit his thesis: what he's trying to say is that disbelief in anthropogenic climate change is indicative of a higher degree of irrationality than theism, not that it is more indicative. That might actually be true just based on the average denier of climate change but it's hard to apply that standard universally when the certainty of climate scientists is only at 95%. 5% uncertainty leaves a little room for intelligent, rational skepticism among people who already tend to be suspicious of many established scientific theories. Conversely the median probability assigned to God's existence in these parts is 0.

In other words: yes, the median climate change denier might indeed be less rational than the median theist. But the probability of anthropogenic climate change being wrong is much higher than the probability that God exists -- which makes it unreliable as a test. Also, that's clearly the quote my opponent will discover if I ever decide to run for public office.

I doubt Stuart meant to suggest AGW should be "the" litmus test for LessWrong, or a central part of LessWrong's branding, or anything like that. Again, the question is just which bit is more informative.

Eh. Here was his thesis:

Theism is often a default test of irrationality on Less Wrong, but I propose that global warming denial would make a much better candidate.

I sort of feel like the determination that theism is irrational and its role as the Plimsoll line for participating at Less Wrong is pretty central to the brand. In a lot of ways the community grew out of the atheist blogosphere and we don't even really let theists argue here. I know some Right-leaning posters are already leery of a left-ward tilt to Less Wrong: I can imagine them being annoyed by how his proposal sounds.

But at this point I think we're over-analyzing the post.

Replies from: hyporational, None
comment by hyporational · 2013-10-28T10:29:20.163Z · LW(p) · GW(p)

I don't think Stuart's test is particularly useful by itself, so don't take this as me defending it. His post is also vague and short enough to allow for several interpretations.

That's certainly true. I just think you can get a lot more information much faster directly examining how someone's beliefs change in response to new evidence.

What do you mean by "directly examine"? What if you can't interact with the person but want to determine whether reading their book is worthwhile for example? Using a few belief litmus tests could be a great way to prevent wasting your time. There are other similar situations.

If there's anything good about a belief litmus test, it's that it's simpler to apply than anything else. Probing someone's belief structure might take a lot of time, and might be socially unacceptable in certain situations. It might not be easy to assess why a person fails to update, as they might have other conflicting beliefs you're not aware of. Like any test, there will be false positives and false negatives. I think it's a matter of personal preference how many you're willing to accept, and depends on how much effort you're willing to put into testing.

Theism is often a default test of irrationality on Less Wrong, but I propose that global warming denial would make a much better candidate.

A default test, not the default test. I think we're both nitpicking here and it's pretty pointless.

I sort of feel like the determination that theism is irrational and its role as the Plimsoll line for participating at Less Wrong is pretty central to the brand.

Please define Plimsoll line. Is there a reason you didn't use a more readily understandable word? I've seen theists stepping out of the closet and being upvoted here. It's just when they come here with the default arguments we've seen a million times that they get downvoted to oblivion.

comment by [deleted] · 2013-11-26T19:06:22.191Z · LW(p) · GW(p)

I know some Right-leaning posters are already leery of a left-ward tilt to Less Wrong:

That's truly bizarre, considering that I basically managed to lose 100 karma points for arguing fairly typical social-democratic positions on LessWrong just yesterday.

Now, yes, "politics is the mind-killer", but people get mind-killed in a direction, and the direction here is very definitely neoliberal, ie: economically market-populist proprietarian, culturally liberal.

Replies from: TheOtherDave, Lumifer
comment by TheOtherDave · 2013-11-26T19:56:43.887Z · LW(p) · GW(p)

That's truly bizarre, considering that I basically managed to lose 100 karma points for arguing fairly typical social-democratic positions

Well, one possibility is that fairly typical social-democratic positions are "left" of LW's earlier position according to those "Right-leaning posters," and therefore constitute a left-ward tilt from their perspective.

comment by Lumifer · 2013-11-26T19:26:20.171Z · LW(p) · GW(p)

considering that I basically managed to lose 100 karma points for arguing fairly typical social-democratic positions on LessWrong just yesterday.

Have you considered that you lost your karma not because you argued typical social-democratic positions, but because you argued them badly?

Replies from: None, JoshuaZ
comment by [deleted] · 2013-11-26T20:24:36.918Z · LW(p) · GW(p)

That is entirely possible. However, in that case, I would expect that other people would argue social-democratic positions well (assuming we hold that social-democratic positions have the same prior probability as those of any other ideology of equivalent complexity), and receive upvotes for it. Instead, I just saw an overwhelmingly neoliberal consensus in which I was actually one of the two or three people explaining or advocating left-wing positions at all.

Think of the Talmud's old heuristic for a criminal court: a clear majority ruling is reliable, but a unanimous or nearly unanimous ruling indicates a failure to consider alternatives.

Now, admittedly, neoliberal positions often appear appealingly simple, even when counterintuitive. The problem is that they appear simple because the complexity is hiding in unexamined assumptions, assumptions often concealed in neat little parables like "money, markets, and businesses arise as a larger-scale elaboration of primitive barter relations". These parables are simple and sound plausible, so we give them very large priors. Problem is, they are also completely ahistorical, and only sound simple for anthropic reasons (that is: any theory about history which neatly leads to us will sound simpler than one that leads to some alternative present, even if real history was in fact more complicated and our real present less genuinely probable).

So overall, it seems that for LessWrong, any non-neoliberal position (ie: position based on refuting those parables) is going to have a larger inferential distance and take a nasty complexity penalty compared to simply accepting the parables and not going looking for historical evidence. This may be a fault of anthropic bias, or even possibly a fault of Bayesian thinking itself (ie: large priors lead to very-confident belief even in the absence of definite evidence).

Replies from: Vaniver, Lumifer
comment by Vaniver · 2013-11-26T21:22:30.254Z · LW(p) · GW(p)

Now, admittedly, neoliberal positions often appear appealingly simple, even when counterintuitive. The problem is that they appear simple because the complexity is hiding in unexamined assumptions, assumptions often concealed in neat little parables like "money, markets, and businesses arise as a larger-scale elaboration of primitive barter relations". These parables are simple and sound plausible, so we give them very large priors. Problem is, they are also completely ahistorical, and only sound simple for anthropic reasons (that is: any theory about history which neatly leads to us will sound simpler than one that leads to some alternative present, even if real history was in fact more complicated and our real present less genuinely probable).

This particular example doesn't seem troublesome to me, because I'm comfortable with the idea of bartering for debt. That is, my neighbor gives me a cow, and now I owe him one; then I defend his home from raiders, and give him a chicken, and then we're even. A tinker comes to town, and I trade him a pot of alcohol for a knife because there's no real trust of future exchanges, and so on. Coinage eventually makes it much easier to keep track of these things, because then we don't have my neighbor's subjective estimate of how much I owe him versus my subjective estimate of how much I owe my neighbor; we can count pieces of silver.

Now, suppose I'm explaining to a child how markets work. There are simply fewer moving pieces in telling it as "twenty chickens for a cow" than as "a cow now for something roughly proportional to the value of the cow in the future," and so that's the explanation I'll use, but the theory still works for what actually happened. (Indeed, no doubt you can explain the preference for debt over immediate bartering as having lower frictional costs for transactions.)

In general, it's important to keep "this is an illustrative example" separate from "this is how it happened," which I don't know if various neoliberals have done. Adam Smith, for example, claims that barter would be impractical, and thus people immediately moved to currency, which was sometimes things like cattle but generally something metal.

comment by Lumifer · 2013-11-26T20:40:01.021Z · LW(p) · GW(p)

I would expect that other people would argue social-democratic positions well

In this particular thread or on LW in general?

In the particular thread, it's likely that such people didn't have time or inclination to argue, or maybe just missed this whole thing altogether. On LW in general, I don't know -- I haven't seen enough to form an opinion.

In any case the survey results do not support your thesis that LW is dominated by neoliberals.

but a unanimous or nearly unanimous ruling indicates a failure to consider alternatives.

Haven't seen much unanimity on sociopolitical issues here.

On the other hand there is that guy Bayes... hmm... what did you say about unanimity? :-D

Problem is, they are also completely ahistorical, and only sound simple for anthropic reasons

Graeber's views are not quite mainstream consensus ones. And, as you say, *any* historical narrative will sound simple for anthropic reasons -- it's not something specific to neo-liberalism.

Not sure what you are proposing as an alternative to historical narratives leading to what actually happened. Basing theories of reality on counterfactuals doesn't sound like a good idea to me.

Replies from: None
comment by [deleted] · 2013-11-26T20:48:26.493Z · LW(p) · GW(p)

In any case the survey results do not support your thesis that LW is dominated by neoliberals.

The survey results are out? Neat!

Not sure what you are proposing as an alternative to historical narratives leading to what actually happened. Basing theories of reality on counterfactuals doesn't sound like a good idea to me.

I'm not saying we should base theories on counterfactuals. I'm saying that we should account for anthropic bias when giving out complexity penalties. The real path reality took to produce us is often more complicated than the idealized or imagined path.

Graeber's views are not quite mainstream consensus ones.

The question is: are they non-mainstream in economics, anthropology, or both? I wouldn't trust him to make any economic predictions, but if he tells me that the story of barter is false, I'm going to note that his training, employment, and social proof are as an academic anthropologist working with pre-industrial tribal cultures.

Replies from: Vaniver, Lumifer
comment by Vaniver · 2013-11-26T21:20:20.518Z · LW(p) · GW(p)

The survey results are out? Neat!

Previous years' survey results: 2012, 2011, 2009. The 2013 survey is currently ongoing.

comment by Lumifer · 2013-11-26T21:26:31.809Z · LW(p) · GW(p)

I'm saying that we should account for anthropic bias when giving out complexity penalties.

How would that work?

The question is: are they non-mainstream in economics, anthropology, or both?

I am not sure what the mainstream consensus in anthropology looks like, but I have the impression that Graeber's research is quite controversial.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-11-26T21:30:51.523Z · LW(p) · GW(p)

At minimum, it does seem like many anthropologists see Graeber's work as much more tied into his politics than is usual even in that field, and that's a field that has serious issues with that as a whole.

comment by JoshuaZ · 2013-11-26T19:47:03.143Z · LW(p) · GW(p)

Considering how many of their comments have been downvoted, including inquiries like this one, and other recent events, such as those discussed by Ialdabaoth and others here, my guess is that's not what is going on here.

Replies from: None, Lumifer
comment by [deleted] · 2013-11-26T20:25:48.833Z · LW(p) · GW(p)

To be clear, I don't think someone's net-stalking me. That would be ridiculous. But I do think there's a certain... tone and voice that's preferred in a LessWrong post, and I haven't learned it yet. There's a way to "sound more rational", and votes are following that.

comment by Lumifer · 2013-11-26T19:58:41.981Z · LW(p) · GW(p)

I hope you realize the epistemic dangers of automatically considering all negative feedback as malicious machinations of your dastardly enemies...

Replies from: Nornagest, JoshuaZ
comment by Nornagest · 2013-11-26T20:04:38.306Z · LW(p) · GW(p)

While I take your point, it seems unlikely that that's what's motivating the response here. eli_sennesh and Eugine_Nier are about as far apart from each other politically as you can get without going into seriously fringe positions, with ialdabaoth in the middle, but there's evidence of block downvoting for all of them. You'd need a pretty dastardly enemy to explain all of that.

(I don't think block downvoting's responsible for most of eli's recent karma loss, though.)

Replies from: None, Lumifer
comment by [deleted] · 2013-11-26T20:26:21.131Z · LW(p) · GW(p)

(I don't think block downvoting's responsible for most of eli's recent karma loss, though.)

Block, meaning organized effort? Definitely not. But I definitely find a -100 karma hit surprising, considering that even very hiveminded places like Reddit are very slow to accumulate comment votes in one direction or the other.

EDIT: And now I'm at +13 karma, which from -48 is simply absurd again. Is the system intended to produce dramatic swings like that? Have I invoked the "complain about downvoting, get upvoted like mad" effect seen normally on Reddit?

Replies from: TheOtherDave, Nornagest, Lumifer, JoshuaZ
comment by TheOtherDave · 2013-11-26T20:37:48.982Z · LW(p) · GW(p)

There's a fairly common pattern where someone says something that a small handful of folks downvote, then other folks come along and upvote the comment back to zero because they don't feel it deserves to be negative, even though they would not have upvoted it otherwise. You've been posting a lot lately, so getting shifts of several dozen karma back and forth due to this kind of dynamic is not unheard of, though it's certainly extreme.

comment by Nornagest · 2013-11-26T20:30:41.506Z · LW(p) · GW(p)

Concerted, not necessarily organized. It's possible for one person to put a pretty big dent in someone else's karma if they're tolerant of boredom and have a reasonable amount of karma of their own; you get four possible downvotes to each upvote of your own (upvotes aren't capped), which is only rate-limiting if you're new, downvoting everything you see, or heavily downvoted yourself.

This just happens to have been a sensitive issue recently, as the links in JoshuaZ's ancestor comment might imply.

Replies from: None
comment by [deleted] · 2013-11-26T20:32:49.199Z · LW(p) · GW(p)

This just happens to have been a sensitive issue recently, as the links in JoshuaZ's ancestor comment might imply.

Well, I'm sorry for kvetching, then.

comment by Lumifer · 2013-11-26T20:54:02.794Z · LW(p) · GW(p)

Block, meaning organized effort?

I understand block downvoting as a user (one, but possibly more) just going through each and every post by a certain poster and downvoting each one without caring about what it says.

It is not an "organized effort" in the sense of a conspiracy.

comment by JoshuaZ · 2013-11-26T20:29:29.672Z · LW(p) · GW(p)

Blockvoting may or may not be going on in this case, but at this point, I also assign a high probability that there are people here who downvote essentially all posts that potentially seem to be arguing for positions that are generally seen as being on the left end of the political spectrum. That seems to include posts which are purely giving data and statistics.

Replies from: None
comment by [deleted] · 2013-11-26T20:35:05.560Z · LW(p) · GW(p)

Ah, well. I blame Clippy, then.

comment by Lumifer · 2013-11-26T20:24:56.838Z · LW(p) · GW(p)

As I mentioned, I accept the block downvoting exists, it's pretty obvious. However the question is what remains after you filter it out. And as you yourself point out, in this case the remainder is still negative.

comment by JoshuaZ · 2013-11-26T20:03:42.838Z · LW(p) · GW(p)

I hope you realize the epistemic dangers of automatically considering all negative feedback as malicious machinations of your dastardly enemies...

Of course that would be epistemically dangerous. Dare I say it, as dangerous as assuming that all language used by people one doesn't like is adversarial?

More to the point, I haven't made any such assumption. There are contexts where negative feedback and discussion is genuine and useful, and some of eli's comments have been unproductive, and I've actually downvoted some of them. That doesn't alter the fact that there's nothing automatic going on: in the here and now, we have a problem involving at least one person, and likely more, downvoting primarily out of disagreement rather than anything substantive, and that this is coming from a specific end of the political spectrum. That doesn't say anything about "dastardly enemies"; it simply means that karma results on these specific issues are highly likely in this context to be unrepresentative, especially when people are apparently downvoting Eli's comments that are literal answers to questions they don't like, such as here.

Replies from: Lumifer
comment by Lumifer · 2013-11-26T20:15:20.473Z · LW(p) · GW(p)

The possibilities that Eli's comments were downvoted "politically" and that they were downvoted "on merits" are not mutually exclusive. It's likely that both things happened.

Block down- and up-voting certainly exists. However, as has been pointed out, you should treat this as noise (or, rather, the zero-information "I don't like you" message) and filter it out to the degree that you can.

Frankly, I haven't looked carefully at votes in that thread, but some of Eli's posts were silly enough to downvote on their merits, IMHO. I have a habit of not voting on posts in threads that I participate in, but if I were just an observer, I would have probably downvoted a couple.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-11-26T20:16:26.818Z · LW(p) · GW(p)

The possibilities that Eli's comments were downvoted "politically" and that they were downvoted "on merits" are not mutually exclusive. It's likely that both things happened.

I agree that both likely happened. But if a substantial fraction of it was the former, what does that suggest?

However, as has been pointed out, you should treat this as noise (or, rather, the zero-information "I don't like you" message) and filter it out to the degree that you can.

And how do you suggest one do so in this context?

Replies from: Lumifer
comment by Lumifer · 2013-11-26T20:28:40.032Z · LW(p) · GW(p)

And how do you suggest one do so in this context?

Look at short neutral "utility" posts and add back the missing karma to all the rest.

For example if somewhere in the thread there were a post "Could you clarify?" and that post got -2 karma, you would just assume that two people block-downvoted everything and add 2 karma to every post in the thread.

If you want to be more precise about it, you can look at the "% positive" number which will help you figure out how much karma to add back.

I am not sure it's worth the bother, though.
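To make the suggested correction concrete, here is a minimal sketch (my own illustration, with made-up post names and scores, not anything from the actual thread) of the adjustment heuristic Lumifer describes:

```python
# A rough sketch of the adjustment heuristic described above, using made-up data.
# Assumption: downvotes on a short, neutral "utility" post in the thread come only
# from block-downvoters, so that count is added back to every post in the thread.

posts = {
    "could_you_clarify": -2,   # short neutral post, presumed only block-downvoted
    "main_argument": -5,
    "followup_example": -1,
    "data_and_statistics": 0,
}

neutral_post = "could_you_clarify"
block_downvotes = max(0, -posts[neutral_post])  # here: 2 presumed block-downvoters

adjusted = {name: score + block_downvotes for name, score in posts.items()}
print(adjusted)
# {'could_you_clarify': 0, 'main_argument': -3, 'followup_example': 1, 'data_and_statistics': 2}
```

The "% positive" refinement Lumifer mentions would replace this flat correction with a per-post estimate, but the flat version is presumably the basic idea.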

Replies from: JoshuaZ
comment by JoshuaZ · 2013-11-26T20:34:21.329Z · LW(p) · GW(p)

For example if somewhere in the thread there were a post "Could you clarify?" and that post got -2 karma, you would just assume that two people block-downvoted everything and add 2 karma to every post in the thread.

So, that seems like a plausible method, and it suggests there's a -2 to -3 adjustment needed on Eli's stuff. But that's a lot of effort, and it means that people reading the thread are going to get a false feeling of a consensus on LW unless they are aware enough to do that, and moreover, it is, simply put, highly discouraging. Daenerys and TimS have both stated that due to this sort of thing (and to be clear, it is coming disproportionately from a specific end of the political spectrum) they are posting less frequently on LW. That means that people are actively using the karma system to force a political narrative. Aside from the obvious reasons why that's bad, it's also unhelpful if one is actually trying to have a discussion that has any decent chance of finding out information about reality, rather than simply seeing which "side" has won in any given context. I'd rather LW not turn into the political equivalent of /r/politics on reddit, where despite the nominal goals, certain political opinions drown out almost all dissent. The fact that it would be occurring on the other end of the political spectrum doesn't help matters, and it can be particularly damaging given that LW's long-term goals are about rationality, not politics.

Replies from: None
comment by [deleted] · 2013-11-26T20:43:44.340Z · LW(p) · GW(p)

For my own trying-to-shut-up part, I do find one thing about "politics is the mind-killer" distinctly weird: the notion that we can seriously discuss morality, ethics, meta-ethics, and Taking Over The World thereby, and somehow expect never to arrive at a matter of political controversy.

For one example, an FAI would likely severely reduce the resource-income and social status of every single currently-active politician, left or right, up or down.

For another, more difficult, example, I can't actually think of how you would do, say, CEV without some kind of voting and weighting system over the particular varieties of human values. Once you've got some notion of having to measure the values of at least a representative sample of everyone in the world and extrapolate those, you are innately in "political" territory. Once you need to talk of resource tradeoffs between values, you are innately in "economic" territory. Waving your arms and saying, "Friendly Superintelligence!" won't actually tell us anything about what algorithm that thing is actually running.

Replies from: Nornagest, TheOtherDave, JoshuaZ
comment by Nornagest · 2013-11-26T21:02:38.089Z · LW(p) · GW(p)

If I may don my Evil Hansonian hat for a moment, conventional politics isn't so much about charting the future of our society as about negotiating the power relationships between tribal alignments. Values and ethical preferences and vague feelings of ickiness go into those alignments (and then proceed to feed back out of them), but it's far rarer for people to support political factions out of de-novo ethical reasoning than you'd guess from talking to them about it. The mind-killer meme is fundamentally an encouragement to be mindful of that, especially of the nasty ideological feedback loops that it tends to imply, and a suggestion to focus on object-level issues where the feedback isn't quite so intense.

One consequence of this is that political shifts happen at or above human timescales, as their subjects become things that established tribes notice they can fight over. If you happen to be a singularitarian, then, you probably believe that the kinds of technological and social changes that LW talks about will at some point -- probably soon, possibly already -- be moving faster than politics can keep up with. Speaking for myself, I expect anything that conventional legislatures or political parties say about AI to matter about as much as the RIAA did when they went after Napster, and still less once we're in a position to be talking seriously about strong, friendly artificial intelligence.

More importantly from our perspective, though, anything conventional politics doesn't care about yet is also something that we have a considerably better chance of talking about sanely. We may be -- in fact, we're certainly -- in the territory of politics in the sense of subjects relevant to the future of the polis, but as long as identity considerations and politics-specific "conventional wisdom" stay relatively distant from our reasoning, we can expect our minds to remain relatively happy and unkilled.

comment by TheOtherDave · 2013-11-26T20:55:01.835Z · LW(p) · GW(p)

Yeah, this comes up from time to time. My own approach to it is to (attempt as best as I can to) address the underlying policy question while avoiding language that gets associated with particular partisan groups.

For example, I might discuss how a Blue politician might oppose FAI because they value their social status, or how a Green politician might expect a Blue politician to oppose FAI for such a reason even though the Green politician is not driven purely by such motives, or whatever... rather than using other word-pairs like (Republican/Democrat), (liberal/conservative), (reactionary/progressive), or whatever.

If I get to a point in that conversation where the general points are clearly understood, and to make further progress I need to actually get into specifics about specific politicians and political parties, well, OK, I decide what to do when that happens. But that's not where I start.

And I agree that the CEV version of that conversation is more difficult, and needs to be approached with more care to avoid being derailed by largely irrelevant partisan considerations, and that the same is true more generally about specific questions related to value tradeoffs and, even more generally, questions about where human values conflict with one another in the first place.

comment by JoshuaZ · 2013-11-26T20:50:45.941Z · LW(p) · GW(p)

I don't think the usual mantra of "politics is the mind-killer" is meant to avoid all political issues, although that would be one interpretation. Rather, there are two distinct observations: one is purely descriptive, that politics can be a mind-killer. The second is prescriptive: to refrain, when possible, from discussing politics until our general level of rationality improves. Unfortunately, that's fairly difficult, because many of these issues matter. Moreover, it connects with certain problems where counter-intuitive or contrarian ideas are seen as somehow less political than more mainstream ones.

Replies from: None
comment by [deleted] · 2013-11-26T20:54:59.356Z · LW(p) · GW(p)

Moreover, it connects with certain problems where counter-intuitive or contrarian ideas are seen as somehow less political than more mainstream ones.

That's... an excellent way of putting it. Non-mainstream political "tribes" are considered "less political" precisely because they don't stand any chance of actually winning elections in the real world, so they get a Meta-Contrarian Boost on the internet. The usual ones I see are anarchists, libertarians, and neo-reactionaries.

Replies from: Nornagest, Richard_Kennaway
comment by Nornagest · 2013-11-26T23:31:13.933Z · LW(p) · GW(p)

Empirically I don't think this is true. Minority political tribes sometimes get a pass for organizing themselves around things that aren't partisan issues, or are only minor partisan issues, in the mainstream -- the Greens sometimes benefit from this in US discourse, although they're a complicated and very regionally dependent case -- but as soon as you stake out a position on a mainstream claim, even if your reasoning is very different from the norm, you should expect to be attacked as viciously as any mainstream wonk. I expect neoreaction, for example, would have met with a much less heated reception if it weren't for its views on race.

Minority views do get a boost on the Internet, but I think that has more to do with the echo-chamber effects that it encourages. It's far easier to find or collect a group of people that all agree with you on Reddit or Tumblr than it is out there in the slow, short-range world of blood and bone.

comment by Richard_Kennaway · 2013-11-27T10:14:06.176Z · LW(p) · GW(p)

That's... an excellent way of putting it. Non-mainstream political "tribes" are considered "less political" precisely because they don't stand any chance of actually winning elections in the real world, so they get a Meta-Contrarian Boost on the internet. The usual ones I see are anarchists, libertarians, and neo-reactionaries.

Considered where, and by whom? Because that is completely unlike my experience. On the Usenet groups rec.arts.sf.*, it was (I have not read Usenet for many years) absolutely standard that Progressive ideas were seen as non-political, while the merest hint of disagreement would immediately be piled on as "introducing politics to the discussion". And the reactosphere is intensely aware that what they are talking is politics.

comment by Watercressed · 2013-10-27T00:04:51.619Z · LW(p) · GW(p)

I generally agree with this post, but since people's beliefs are evidence for how they change their beliefs in response to evidence, I would call it bias-inducing and usually tribal cheering instead of totally backwards.

Replies from: Jack, ChristianKl
comment by Jack · 2013-10-27T00:11:56.969Z · LW(p) · GW(p)

If not "totally backwards" surely "orthogonal". Why not a test that supplies it's own evidence and asks the one being tested to come to a conclusion? Like the Amanda Knox case was for people here who hadn't heard of it before reading about it here.

Replies from: hyporational, Watercressed
comment by hyporational · 2013-10-28T10:31:30.542Z · LW(p) · GW(p)

There are several situations where that's not possible. Also it takes effort to test someone like that.

comment by Watercressed · 2013-10-27T00:27:24.254Z · LW(p) · GW(p)

I wouldn't call it orthogonal either. Rationality is about having correct beliefs, and I would label a belief-based litmus test rational to the extent it's correct.

Writing a post about how $political_belief is a litmus test is probably a bad idea because of the reasons you mentioned.

Replies from: Jack
comment by Jack · 2013-10-27T01:09:34.497Z · LW(p) · GW(p)

Rationality is about having correct beliefs. But a single belief that has only two possible answers is never going to stand in for the entirety of a person's belief structure. That's why you have to look at the process by which a person forms beliefs to have any idea if they are rational.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-10-28T11:12:55.967Z · LW(p) · GW(p)

a single belief that has only two possible answers is never going to stand in for the entirety of a person's belief structure.

Exactly. If there is any hope in using a list of beliefs as a test of rationality, it will need multiple items.

You know, IQ tests also don't have a single question. Neither do any other personality tests.

Replies from: army1987
comment by A1987dM (army1987) · 2013-10-28T19:39:34.588Z · LW(p) · GW(p)

OTOH the Cognitive Reflection Test has a shockingly low three questions and I've been told it's surprisingly accurate.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-10-29T09:44:04.192Z · LW(p) · GW(p)

I'd call it the "Paying-Good-Attention-While-Doing-Simple-Math Test". :D

But yeah... I can imagine that something similarly simple could be an important part of rationality. Some simple task that predicts the ability to do more complex tasks of a similar type.

However, in that case the test will resemble a kind of puzzle, instead of pattern-matching "Do you agree with Greens?"

Specifically for updating, I can imagine a test where the person is gradually given more and more information; the initial information is evidence for an outcome "A", but most of the later information is evidence for an outcome "B". The person is informally asked to make a guess soon after the beginning (when the reasonable answer is "A"), and at the end they are asked to provide a final answer. Some people would probably get stuck at "A", and some would update to "B". But the test would involve some small numbers, shapes, coins, etc.; not real-life examples.

Replies from: Vaniver, army1987
comment by Vaniver · 2013-11-03T18:33:57.046Z · LW(p) · GW(p)

Specifically for updating, I can imagine a test where the person is gradually given more and more information; the initial information is evidence for an outcome "A", but most of the later information is evidence for an outcome "B". The person is informally asked to make a guess soon after the beginning (when the reasonable answer is "A"), and at the end they are asked to provide a final answer. Some people would probably get stuck at "A", and some would update to "B". But the test would involve some small numbers, shapes, coins, etc.; not real-life examples.

I've seen experiments that tested this; I thought they were mentioned in Thinking and Deciding or Thinking Fast and Slow, but I didn't see it in a quick check of either of those. If I recall the experimental setup correctly (I doubt I got the numbers right), they began with a sequence that was 80% red and 20% blue, which switched to being 80% blue and 20% red after n draws. The subjects' estimate that the next draw would be red stayed above 50% for significantly longer than n draws from the second distribution, and some took until 2n or 3n draws from the second distribution to assign 50% chance to each, at which point almost two thirds of the examples they had seen were blue!
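For what it's worth, here is a toy simulation (a sketch of my own, not the actual experimental protocol; the numbers and the two-hypothesis observer are illustrative assumptions) of how an idealized updater handles the kind of switching sequence described above:

```python
import random

random.seed(0)

# Toy version of the switching-urn setup: draws are 80% red for the first n
# draws, then 80% blue afterwards. The observer entertains exactly two
# hypotheses -- H_red: p(red) = 0.8 and H_blue: p(red) = 0.2 -- and multiplies
# the odds by a likelihood ratio of 4 (red draw) or 1/4 (blue draw) each time.

n = 25
draws = [random.random() < 0.8 for _ in range(n)]        # True = red
draws += [random.random() < 0.2 for _ in range(3 * n)]   # distribution switches here

odds_red = 1.0  # prior odds H_red : H_blue = 1 : 1
for i, is_red in enumerate(draws, start=1):
    odds_red *= 4.0 if is_red else 0.25
    p_h_red = odds_red / (1.0 + odds_red)
    # Predictive probability that the next draw is red, under the posterior mixture:
    p_next_red = 0.8 * p_h_red + 0.2 * (1.0 - p_h_red)
    if i % n == 0:
        print(f"after {i:3d} draws: P(next draw is red) = {p_next_red:.2f}")
```

Even this idealized observer needs roughly n post-switch draws before blue draws outnumber red ones and its prediction crosses 50%, so subjects who take 2n or 3n draws are slower still, though not absurdly so.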

comment by A1987dM (army1987) · 2013-11-02T20:10:40.647Z · LW(p) · GW(p)

But the test would involve some small numbers, shapes, coins, etc.; not real-life examples.

I dunno... people who do fine at the Wason selection task with ages and drinks get it wrong with numbers and colours. (I'm not sure whether that's a bug or a feature.)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-11-03T16:32:16.267Z · LW(p) · GW(p)

That seems to me like a reason not to test the skill on real-life examples.

We wouldn't want a rationality test that a person can pass with original wording, but will fail if we replace "Republicans" by "Democrats"... or by Green aliens. We wouldn't want the person to merely recognize logical fallacies when spoken by Republicans. This is in my opinion a risk with real-life examples. Is the example with drinking age easier because it is easier to imagine, or because it is something we already agree with?

Okay, I am curious here... what exactly would happen if we replaced the Wason selection task with something that uses words from real life (is less abstract), but is not an actual rule (therefore it cannot be answered using only previous experience)? For example: "Only dogs are allowed at jumping competitions, cats are not allowed. We have a) a dog going to an unknown competition; b) a cat going to an unknown competition; c) an unknown animal going to a swimming competition, and d) an unknown animal going to a jumping competition -- which of these cases do you have to check thoroughly to make sure the rule is not broken?"

comment by ChristianKl · 2013-10-27T02:39:00.416Z · LW(p) · GW(p)

I generally agree with this post, but since people's beliefs are evidence for how they change their beliefs in response to evidence, I would call it bias-inducing and usually tribal cheering instead of totally backwards.

If I wanted to estimate people's rationality from their beliefs, I would look at whether the beliefs are nuanced. There are a lot of people who say irrational things, such as that the evidence we have for global warming is comparable to the evidence we have for evolution. In reality the p value doesn't even approach the 5 sigma level that you need to validate a new result in particle physics.

It's just as irrational as being a global warming denier who thinks that p(global warming)<0.5.
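As a quick reference for the sigma language above (an aside of my own; the arithmetic is just the standard normal tail, not anything from the comment), here is what those thresholds correspond to as one-sided p-values:

```python
# One-sided tail p-values for common "sigma" thresholds under a normal approximation.
# The 5 sigma particle-physics convention corresponds to roughly p = 3e-7, several
# orders of magnitude beyond the 2-3 sigma range typical of many other fields.
from math import erf, sqrt

def one_sided_p(sigma: float) -> float:
    """P(Z > sigma) for a standard normal Z."""
    return 0.5 * (1.0 - erf(sigma / sqrt(2.0)))

for sigma in (2, 3, 5):
    print(f"{sigma} sigma -> p = {one_sided_p(sigma):.1e}")
# 2 sigma -> p = 2.3e-02
# 3 sigma -> p = 1.3e-03
# 5 sigma -> p = 2.9e-07
```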

Yet we do see smart people making both mistakes. You have smart people who claim that the evidence for global warming is comparable to evolution and you have smart people who are global warming deniers.

People don't get mind killed by political issues because they are dumb. It might be completely rational for them because signaling is more important for them. If you want a useful metric to judge someone's rationality, don't take something where group identities matter a good deal.

The metric is just too noisy because the person might get something from signaling group identity. I think the only reason to choose such a metric is because you get yourself mindkilled and want to label people who don't belong to your tribe as irrational and seek some rationalisation for it.

As far as empirics go, college-educated Republicans have a higher rate of climate change denial than Republicans who didn't go to college.

While we can discuss whether college causes people to be more rational, it certainly correlates with it.

If you want to use beliefs to judge people's rationality, calibrate the test. Give people rationality quizzes and quiz them on their beliefs. If you get strong correlations, you have something you can use. Don't intellectually analyse the content of the beliefs and think about what rational people should believe if you want an effective metric.

Replies from: hyporational
comment by hyporational · 2013-10-27T08:07:05.907Z · LW(p) · GW(p)

RETRACTED: It wasn't my intention to start another global warming debate.

If I wanted to estimate people's rationality from their beliefs, I would look at whether the beliefs are nuanced.

Lots of insane beliefs are nuanced.

In reality the p value doesn't even approach the 5 sigma level that you need to validate a new result in particle physics.

Requiring the same strength of evidence from climate science as from particle physics would be insane.

There are a lot of people who say irrational things, such as that the evidence we have for global warming is comparable to the evidence we have for evolution.

From Stuart's post: "Of course, reverse stupidity isn't intelligence: simply because one accepts AGW, doesn't make one more rational."

People don't get mind killed by political issues because they are dumb. It might be completely rational for them because signaling is more important for them.

Choosing to signal wouldn't be mindkill as it's understood here.

I think the only reason to choose such a metric is because you get yourself mindkilled and want to label people who don't belong to your tribe as irrational and seek some rationalisation for it.

Labeling people seems to be exactly what you're doing yourself here. I can think of at least three more reasons.

I think Stuart simply underestimated the local mindkill caused by global warming debate in other people, or failed to understand that local mindkill isn't necessarily a good metric for irrationality. Neither of those require him to be mindkilled about the topic himself. One possibility is he failed to evaluate evidence of global warming himself and overestimated the probability of the relevant propositions.

You seem to be conflating intelligence and rationality in this comment. You probably know they're not the same thing.

All this being said, I don't agree with what Stuart was saying in his post. I have no opinion of global warming and haven't read about it much.

Replies from: ChristianKl
comment by ChristianKl · 2013-10-27T14:31:56.192Z · LW(p) · GW(p)

Requiring the same strength of evidence from climate science as from particle physics would be insane.

What do you mean by "require"? If I say that climate science has the same strength of evidence as evolution, then we can debate whether climate change fulfills the 5 sigma criterion.

I don't think climate change does, so the strength of evidence for climate change is not the same as the strength of evidence for evolution.

Why does it matter? It is an X-risk that global warming doesn't really exist and we do geoengineering that seriously wrecks our planet. That risk might be something like p = 0.001, but it does exist. It's greater than the risk of an asteroid destroying our civilisation in the next 100 years.

To the extent that one cares about X-risks, it's important to distinguish claims backed by 2-3 sigma from those that pass 5 sigma. It's just not the same level of evidence.

If we want to stay alive over the next hundred years, it's important that decision makers in our society don't maneuver us into an X-risk because they treat 2-3 sigma the same way they treat 5 sigma.
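
To make the gap concrete, here is a rough conversion of those sigma levels into one-sided tail probabilities (a minimal sketch under a standard normal assumption, not anything from the census or from climate science):

```python
# Rough one-sided tail probabilities for different sigma levels, to show how
# different "2-3 sigma" and "5 sigma" evidence really is (standard normal assumption).
from scipy import stats

for sigma in (2, 3, 5):
    print(f"{sigma} sigma -> p ~ {stats.norm.sf(sigma):.2e}")
# 2 sigma -> ~2.3e-02, 3 sigma -> ~1.3e-03, 5 sigma -> ~2.9e-07
```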

You seem to be conflating intelligence and rationality in this comment.

I don't use the word intelligence in the comment you quote. I use it in another post as a proxy variable. I equate rationality with the ability to update your beliefs in order to win.

Replies from: hyporational
comment by hyporational · 2013-10-27T15:19:21.478Z · LW(p) · GW(p)

You used the words smart and dumb; I suppose that counts. I failed to understand most of your reply.

What do you mean by "require"?

I mean you don't need to be even nearly that certain for the findings to be actionable.

It is an X-risk that global warming doesn't really exist and we do geoengineering

What's the expected utility of that compared to the expected utility of AGW? If you're too uncertain, why not just try to drastically reduce emissions instead of doing major geoengineering? What's the expected utility of reducing emissions?

Replies from: ChristianKl, Moss_Piglet
comment by ChristianKl · 2013-10-27T16:37:50.959Z · LW(p) · GW(p)

I mean you don't need to be even nearly that certain for the findings to be actionable.

If I ask "What's the evidence for global warming being real?", I'm searching for an accurate description of the world. Having accurate maps of the world is useful.

In the above example, saying that the evidence for global warming is like that for evolution is like claiming the moon is made of cheese.

The belief might help you to convince people to reduce emissions. Believing that the moon is made of cheese might help you to discourage people from going to the moon.

If the reason someone advocates the ridiculous claim that the evidence for global warming is comparable to that for evolution is that it helps him convince people to lower emissions, that person is mind-killed by his politics.

What's the expected utility of that compared to the expected utility of AGW? If you're too uncertain, why not just try to drastically reduce emissions instead of doing major geoengineering? What's the expected utility of reducing emissions?

Right, because our political leaders excel at making good, rational expected-utility comparisons... Memes exist in the real world. They have effects. Promoting false beliefs about the certainty of science has dangers.

I'm not in a position to choose whether the world drastically reduces emissions or does major geoengineering, and scientists aren't either. Scientists do have a social responsibility to promote accurate beliefs about the world.

Whether or not we should reduce emissions is a different question. If you can't mentally separate "Should we reduce emissions?" from "What's the evidence for global warming?", you're likely mind-killed about the second question and hold beliefs that aren't accurate descriptions of reality.

comment by Moss_Piglet · 2013-10-27T18:28:37.509Z · LW(p) · GW(p)

What's the expected utility of that compared to the expected utility of AGW? If you're too uncertain, why not just try to drastically reduce emissions instead of doing major geoengineering? What's the expected utility of reducing emissions?

The current understanding of climate sensitivity is that since Carbon Dioxide gas will remain in the upper atmosphere for decades (and possibly centuries) even a complete halt on emissions will not avert warming predicted for the next century or so. And the models currently favored have pretty dire predictions for that level of warming, even if they're less severe than the alternative.

The only realistic solution, and naturally the one most strongly opposed by environmental groups, is solar radiation management. This would be very expensive, about $700M a year according to David Keith, and has potential risks which should be tested before any implementation plan. So not a silver bullet, but still much cheaper and safer in the long run than the standard environmental agenda even according to their own data.

(Note: I am assuming for the sake of argument that current climate models are accurate, but that is an assumption which should be questioned. Climate modeling is still in its infancy and most existing models have difficulty with predictions even as close as a decade out. Warming is probably happening, but that does not mean that any given prediction of warming is accurate, for reasons which should be obvious.)

Replies from: army1987, Jack
comment by A1987dM (army1987) · 2013-10-28T09:07:16.390Z · LW(p) · GW(p)

The current understanding of climate sensitivity is that since Carbon Dioxide gas will remain in the upper atmosphere for decades (and possibly centuries) even a complete halt on emissions will not avert warming predicted for the next century or so.

Methane has a shorter lifetime, though (although my five minutes' research tells me we've already stopped increasing methane emissions).

comment by Jack · 2013-10-27T20:20:02.851Z · LW(p) · GW(p)

Are you saying that solar radiation management is an alternative to long-term emissions reduction? Or that, in addition to eventually tapering off greenhouse gas emissions, we're going to have to do something to keep temperatures down, and the best option is solar radiation management?

(edit: apparently I wrote social radiation management)

Replies from: Moss_Piglet
comment by Moss_Piglet · 2013-10-27T22:53:41.963Z · LW(p) · GW(p)

Reducing emissions is a good goal, but energy needs will continue to increase even as we decrease the number of tons of carbon dioxide per kWh. As the population increases and becomes more wealthy there's not much we can do but put out more carbon dioxide; that's one of the reasons people bent on lowering world population and wealth have attached themselves to the environmental movement.

If the stigma against nuclear power goes away, or the technological issues which make speculative energy sources like wind/solar/fusion unprofitable are resolved, we could see a bigger dip but even then the century-long trend will probably be one of increase. SRM is the most realistic way I can think of to head off serious disasters until then.

comment by Sengachi · 2013-10-28T10:58:26.930Z · LW(p) · GW(p)

The whole point is how you change your beliefs in response to new evidence.

Of course the general concept of using a belief as a litmus test for rationality is foolish. But frankly, it's not possible at this point to have not been introduced to evidence about human-caused global warming. The people to whom this test would be applied have been introduced to this new evidence and already failed to update.

And if someone lives in such a secluded bubble of information that they are truly getting information that would lead a rationalist to deny AGW, I think it's safe to say that that person is probably not a rationalist. Someone in such a bubble would have no impetus to become a rationalist in the first place.

comment by blacktrance · 2013-10-30T03:26:46.891Z · LW(p) · GW(p)

To contribute a "trick" that, in my experience, makes this easier, when you hear a political point, disentangle the empirical claims from the normative claims, and think to yourself, "Even if their empirical claims are correct, that doesn't necessarily mean I should accept their normative claims. I should examine the two separately."

Replies from: None, Lumifer
comment by [deleted] · 2013-11-26T17:47:14.779Z · LW(p) · GW(p)

In general, your internal type-checker should reject any and all mixing of descriptive and normative claims. It doesn't matter if the domain is politics or chess.

comment by Lumifer · 2013-10-30T14:41:57.929Z · LW(p) · GW(p)

Yep, good advice. Disentangling descriptive from normative is a useful habit in general, not only in politics.

comment by TheOtherDave · 2013-10-26T17:32:24.839Z · LW(p) · GW(p)

Just as commenters shouldn't have assumed Eliezer's factual observation was an argument in favor of regulation,

But did they assume it?
Or did they conclude it based on inferences from Eliezer's comment and the broader context?

To recast that in more local-jargon, Bayesian terms... how high was their prior probability that Eliezer was making an argument in favor of regulation, and how much evidence in favor of that proposition was the comment itself, and did they over-weight that evidence?

Beats me, I wasn't there.
I might not be able to tell, even if I had been there.
But saying they "assumed" it in this context connotes that their priors were inappropriately high.

I'm not sure that connotation is justified, either in the specific case you quote Eliezer as discussing, or in the general case you and he treat it as illustrative of.

Maybe, instead, they were overweighting the evidence provided by the comment itself.

Or maybe they were weighting the evidence properly and arriving at, say, a .7 confidence that Eliezer was making an argument in favor of regulation, and (quite properly) made their bet as though that was the case... and turned out, in this particular case, to be wrong, as they should expect in 3 out of 10 cases.
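
For concreteness, here is a toy version of that update (all numbers are invented for illustration; nothing here comes from the actual exchange):

```python
# A minimal sketch of the Bayesian framing above: a prior that the speaker is
# arguing for regulation, a likelihood ratio for how strongly the comment
# itself points that way, and the resulting posterior.
prior = 0.4                 # hypothetical prior P(arguing for regulation)
likelihood_ratio = 3.5      # hypothetical strength of the comment as evidence

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(round(posterior, 2))  # ~0.7, the example confidence used above
```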

you shouldn't assume the suspected Blue's observation is a pro-moon shot or anti-Green argument.

Sure, agreed. But here again, not assuming it doesn't preclude me from concluding it.

When I choose to make an utterance, I am not only providing you with the utterance's propositional content. I am also providing you with the information entailed by the fact that I chose to utter it.

When you make inferences about my motives from that information, you might of course be mistaken. But that doesn't mean you shouldn't make such inferences.

The same goes for your hypothetical Blue.

Replies from: fubarobfusco, Douglas_Knight, hyporational
comment by fubarobfusco · 2013-10-26T17:46:15.262Z · LW(p) · GW(p)

But did they assume it?
Or did they conclude it based on inferences from Eliezer's comment and the broader context?

People often say "assume" when they mean "jump to a conclusion" or "invalidly or incorrectly infer". That seems to be what's meant here.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-26T18:08:22.132Z · LW(p) · GW(p)

Agreed. But as I said, it's not clear to me that inferring the propositions under discussion is invalid or incorrect, so to the extent that "invalidly or incorrectly infer" is what's meant, I'm skeptical of the claim. Ditto for "jump to a conclusion" for the most common connotations of that phrase.

When I wrote the comment it seemed more charitable to give the claim the reading under which I agree with it, and then point out the more complicated reality of which it is a narrow slice, than to give the claim the reading under which I simply doubt that it's true. In retrospect, though, I'm not sure it was.

Either way, though, my main point is that inferring that someone is making a covert argument while seeking to maintain the social cover of just making a factual observation is not necessarily unjustified in cases like these.

comment by Douglas_Knight · 2013-10-26T18:55:06.154Z · LW(p) · GW(p)

You weren't there. You can't reconstruct what it was like to be there. But you can read his comment. It contains the word "tradeoff" four times. Can you suggest what disclaimers he should have used instead?

(but the comments responding to Eliezer seem pretty reasonable to me.)

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-26T19:30:43.051Z · LW(p) · GW(p)

Can you suggest what disclaimers he should have used instead?

Let's assume for the sake of comity that I can't.
What follows?

To address your broader question, though: it seems likely to me that there is no wording which reliably causes observers to believe that I'm genuinely just making a factual observation and that I'm not covertly implying any arguments, since I can't think of any way of preventing people who are covertly implying arguments from using the same wording, which will shortly thereafter cause clever observers to stop trusting that wording.

This certainly includes bald assertions like "Hey, guys, I'm genuinely just making a factual observation here and totally NOT covertly implying any arguments, OK?" which even unsophisticated deceivers know enough to use, but it also covers more sophisticated variations.

That said, it also seems likely to me that for any given audience there exists wording that will manipulate that audience into believing I'm genuinely just making a factual observation, and a sufficiently skilled manipulator can find that wording. I don't claim to be such a manipulator. (Of course, if I were, it would probably be in my best interests not to claim to be.)

Then again, such a manipulator could presumably do this even when that belief is false.

The approach I usually endorse in such cases is to not worry about it and concentrate on more generally behaving in a trustworthy way, counting on observant members of the community to recognize that and to consequently trust me to not be playing rhetorical games. (That's not to say I always succeed, nor that I never play rhetorical games.) In other words, I count on the cultivation of personal reputation over iterated trials.

Of course, deceivers of all stripes similarly count on the cultivation of personal reputation over iterated trials.

Expensive signaling helps here, of course, but isn't always an option.

comment by hyporational · 2013-10-27T09:41:25.418Z · LW(p) · GW(p)

The more important question is whether people should state hostile inferences based on usually flimsy evidence. I think vocally pointing out intentions behind factual claims is a very effective way to discourage rational discussion and cause mindkill because the rate of false positives is so high. Manufacturing plausible deniability by just stating facts works precisely because deniability in such a case should be plausible to have any relevant discussion at all.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-27T17:29:09.725Z · LW(p) · GW(p)

I don't think I agree.

To take your comment as an example... on one level, it's a series of claims. "X is the more important question." "Y is an effective way to discourage rational discussion." "The rate of false positives in Y is very high." Etc. And I could respond to it on that level, discussing whether those claims are accurate or not. And that seems to be the kind of discussion you're encouraging.

Had you instead responded by saying "The average rainfall in Missouri is 3.5 inches per year" I could similarly discuss whether that claim is accurate or not.

But that would be an utterly bizarre response. Why would it be bizarre? Because I would have no idea what the intention behind citing that fact could possibly be. Your comment, by contrast, seems to have a fairly clear intention behind it, so it's not bizarre at all.

So far, I don't think I've said anything in the least bit controversial. (If you disagree with any of the above, probably best to pause here and resolve that disagreement before continuing.)

Continuing... so, OK. You have certain intentions in making the comment you made... call those intentions I1. I have inferred certain intentions on your part... call those I2. And, as above, were I to lack a plausible I2, I would be utterly bewildered by the whole conversation, as in the Missouri rainfall example... which I'm not.

Now... if I understand your view correctly, you believe that if I articulate I2 I will effectively discourage rational discussion and cause mindkill, because I'm likely to be mistaken... that is, I2 is not likely to equal I1. It's better, on your view, for me to continue holding I2 without articulating it.

Yes? Or have I misunderstood your view?

If I've understood your view correctly, I disagree with it completely.

Replies from: hyporational
comment by hyporational · 2013-10-28T11:10:38.591Z · LW(p) · GW(p)

I tried to focus on people attacking negative intentions/connotations. I was expressing myself poorly and my comment had a lot of hidden assumptions. My comment was not even wrong. Your response is clear and helpful, thanks. I'm not sure I can improve upon my original comment, but here are some thoughts on the matter:

I think it would be useful to categorize intentions/connotations further. I see no problem in articulating hostile intentions behind a comment rudely stating that someone is fat for example. I think the reason for this is that the connotations of that kind of a statement are common knowledge and high probability. If you disapprovingly point out such connotations, nobody can claim that you're trying to sneak them into the other person's comment to dismiss it unfairly.

Then again I think there's this category of statements where it seems to me that connotations can vary wildly. Even if you have a good reason to think that some particular connotation is the most probable, it's just one option among many. Here the rate of false positives will be high. I feel in such situations attacking one connotation over another seems like a dishonest way to dismiss a statement.

I acknowledge that situational factors complicate matters further.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-28T13:46:05.642Z · LW(p) · GW(p)

Even if you have a good reason to think that some particular connotation is the most probable, it's just one option among many. Here the rate of false positives will be high.

Sure, that's true. We might disagree about how high my confidence in a particular most-probable-interpretation of the motives behind a particular statement can legitimately be, but it's clear that for some statements that confidence will be fairly low.

I feel in such situations attacking one connotation over another seems like a dishonest way to dismiss a statement.

Do you have any sense of why you feel this way?

For example, do you believe it is a dishonest way to dismiss a statement? Or just that it seems that way? (Seems that way to whom?)

comment by Douglas_Knight · 2013-10-26T16:47:48.076Z · LW(p) · GW(p)

a minor typo:

median confidence ... is 79%, and the mean confidence is 90%.

That is impossible with confidence bounded by 100%. Take an extreme case: just over half the population puts 79% and the rest put 100%. Then the mean is just under 89.5%. I checked, and you switched the mean and median.
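
A quick numerical check of that bound (a minimal sketch with made-up respondents, not the actual survey data):

```python
# With answers capped at 100% and a median of 79%, the mean is maximized by
# putting just over half of the respondents at 79% and the rest at 100%,
# which keeps the mean just under 89.5% -- so a mean of 90% is impossible.
import numpy as np

n = 1001                           # odd number of hypothetical respondents
responses = np.full(n, 100.0)      # start everyone at the 100% cap
responses[: n // 2 + 1] = 79.0     # just over half answer 79%, fixing the median

print(np.median(responses))        # 79.0
print(round(responses.mean(), 2))  # 89.49, just under 89.5
```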

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-10-26T16:51:18.110Z · LW(p) · GW(p)

Fixed.

comment by ChristianKl · 2013-10-26T23:23:00.710Z · LW(p) · GW(p)

He thinks Stuart is factually wrong and that the global warming question isn't a good predictor. Fortunately, that's something we can test.

Before we run the numbers, what's your confidence interval for the IQ difference in the LessWrong poll of 2012 between the people who believe that p(global warming) > 0.9 and the people with p(global warming) < 0.5?

If you just correlate p values with IQ, what's your confidence interval for the resulting correlation coefficient?

As IQ might not be rationality, how do you think the global warming answer will predict whether someone gives rational answers to the CFAR questions?
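
If someone wanted to actually run those numbers on a census export, a sketch might look like the following (the file name and column names are hypothetical placeholders, not the real census fields):

```python
# Illustrative only: compare IQ between high- and low-probability answers on the
# global warming question, and correlate the raw probability answers with IQ.
import pandas as pd
from scipy import stats

df = pd.read_csv("lw_census_2012.csv")            # hypothetical file name
df = df.dropna(subset=["IQ", "PGlobalWarming"])   # hypothetical column names

high = df[df["PGlobalWarming"] > 90]["IQ"]        # p(global warming) > 0.9
low = df[df["PGlobalWarming"] < 50]["IQ"]         # p(global warming) < 0.5
print("IQ difference:", high.mean() - low.mean())

r, p = stats.pearsonr(df["PGlobalWarming"], df["IQ"])
print(f"correlation r = {r:.2f} (p = {p:.3f})")
```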

Replies from: Nornagest, Lumifer
comment by Nornagest · 2013-10-27T03:07:33.833Z · LW(p) · GW(p)

I'll bite.

My 90% confidence interval for the correlation between IQ and p(global warming) is orgjrra ebhtuyl artngvir mreb cbvag bar naq cbfvgvir mreb cbvag gjb, jvgu n crnx pybfr gb mreb. V'q or yrff fhecevfrq gb frr n pbeeryngvba orgjrra c(tybony jnezvat) naq gur PSNE dhrfgvbaf (gubhtu V'q whfg hfr 5-7, nf gur bguref frrz gb unir zber cbgragvny pbasbhaqref), ohg V'q fgvyy rkcrpg dhvgr n ybj bar.

(ROT13ed to avoid anchoring future readers.)

comment by Lumifer · 2013-10-27T03:52:11.515Z · LW(p) · GW(p)

You need to specify the "global warming" part better. "The global climate has warmed since the beginning of the XX century" is a different claim from "Human emissions of CO2 caused the warming of the global climate" which is a different claim from "The current warming is unprecedented in known history" which is a different claim from "We need to reduce the CO2 emissions".

Replies from: ChristianKl, roystgnr, None
comment by ChristianKl · 2013-10-27T05:02:54.431Z · LW(p) · GW(p)

In this post I intend to reference the LessWrong census. In it, the question was worded:

P(Warming)
What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions?

Hopefully we will have another census this year. If you think there is a better question to get at the hard core of the global warming issue, I also invite you to make a prediction about how such a question would correlate. The question could be added to the next poll and we could then see how the results of the question correlate.

Replies from: Lumifer
comment by Lumifer · 2013-10-27T05:18:09.002Z · LW(p) · GW(p)

The way the question was worded it asked two different questions (maybe even three) and I'm not sure the respondents treated it as a logical expression along the lines of is.true((A OR B) AND C)...

I don't know what you mean by the "hard core of the global warming issue".

Replies from: army1987, ChristianKl
comment by A1987dM (army1987) · 2013-10-27T16:31:05.295Z · LW(p) · GW(p)

The way the question was worded it asked two different questions (maybe even three) and I'm not sure the respondents treated it as a logical expression along the lines of is.true((A OR B) AND C)...

That would probably correlate with rationality too.

comment by ChristianKl · 2013-10-28T14:22:19.106Z · LW(p) · GW(p)

I'm not responsible for the question being worded the way it is. I don't think the wording is optimal.

If you think the question gets interpreted differently by different people, propose a better question to measure global warming beliefs for the next census.

Replies from: Lumifer, JoshuaZ
comment by Lumifer · 2013-10-28T15:51:07.116Z · LW(p) · GW(p)

propose a better question to measure global warming beliefs

The first question is what is it that you want to measure.

comment by JoshuaZ · 2013-10-28T14:28:42.733Z · LW(p) · GW(p)

Whether you are responsible or not is distinct from whether it will do a good job measuring what you want it to measure.

Replies from: ChristianKl
comment by ChristianKl · 2013-10-28T16:07:19.950Z · LW(p) · GW(p)

Whether you are responsible or not is distinct from whether it will do a good job measuring what you want it to measure.

Responsibility changes the meaning of the word 'good'. If I design something to measure Y I have a higher standard for 'good' than when I search for an already existing measure of Y.

If people who read the post say, "I don't think IQ correlates with the answer to that question," that's an answer that moves the discussion forward.

If they say: "I think IQ correlates with the answer to a differently worded question about global warming" that also moves the discussion forward. We can test that hypothesis in the next census.

If you don't like IQ as a proxy, then we had the CFAR questions in the last census to measure rationality. They are also not perfect and we can think up a better metric for the next census.

comment by roystgnr · 2013-10-27T16:40:48.848Z · LW(p) · GW(p)

For that matter, "I estimate human emissions of CO2 caused 49% of the warming of the global climate" is a different question from "I estimate human emissions of CO2 caused 51% of the warming of the global climate". Is it really a fantastic expression of rationality to say that people making the first claim are basically creationists, but people making the second claim are upstanding rationalists whose numbers help to demonstrate how much popular support I have?

If you try to lump people into discrete categories over a continuously varying question then you are inherently introducing ambiguity; the first step toward setting up a Worst Argument in the World is the creation of overly-broad categories, after all. If you demand that Turquoise people self-identify as Blues or Greens, you shouldn't be surprised when you get suspected of having motives other than the pure refinement of rational thought.

Replies from: None
comment by [deleted] · 2013-10-27T17:26:30.288Z · LW(p) · GW(p)

Well, you can probably say that anyone who thinks humans are entirely responsible, or not responsible at all, is irrational on that question.

comment by [deleted] · 2013-11-26T19:13:32.762Z · LW(p) · GW(p)

Scientists have already found p(null hypothesis) < 0.05 on AGW. It's time we stopped quibbling over probability estimates for nuanced versions of the possible positions and accepted the proposition supported by statistically significant evidence and a consensus of experts behind that evidence.

(Side note: Yes, I know I just blasphemed against the Great God Bayes by invoking frequentist statistics. Too bad.)

comment by Vladimir_Nesov · 2013-10-26T18:00:21.163Z · LW(p) · GW(p)

You shouldn't assume the suspected Blue's observation is a pro-moon shot or anti-Green argument.

("Shouldn't assume", taken literally, sounds like an endorsement of forming beliefs for reasons other than their correctness. I think I agree with the intended point, but I'd put it somewhat differently.)

Rather than focusing on the factual question of whether a remark is motivated by identity signaling, it's sufficient to disapprove of participation in any moves that are clearly motivated by signaling or engage with the question of whether other moves are motivated by signaling (when that's not clear). It's the same principle as with not engaging with attention-seeking trolling: there is no "assuming" that someone isn't acting in bad faith, but engagement in that mode is discouraged.

comment by Vaniver · 2013-10-27T21:22:19.721Z · LW(p) · GW(p)

Just as commenters shouldn't have assumed Eliezer's factual observation was an argument in favor of regulation

Eliezer's response there always struck me as odd. Was he making a simple factual observation? When you read the comment in question, it reads to me as the summary of an argument that regulation is necessary. Eliezer doesn't endorse that argument- he doesn't think that regulation should be necessary- but he's making the claim "society will require regulation because of argument X." Unsurprisingly, people respond to X as an argument for regulation, but a cursory glance doesn't show me any comments where people attribute to Eliezer endorsement of that argument.

Replies from: falenas108
comment by falenas108 · 2013-10-28T03:51:00.693Z · LW(p) · GW(p)

That isn't how it read to me. He says, "Some poor, honest, well-intentioned, stupid mother of 5 kids will shop at a banned store and buy a Snake's Sulfuric Acid Drink for her arthritis and die, leaving her orphaned children to cry on national television. Afterward the banned stores will be immediately closed down, based on that single case, regardless of their net benefit."

That sounds to me like he's saying this will happen regardless, and it still might be a net plus but it's something proponents will have to address.

Replies from: Vaniver
comment by Vaniver · 2013-10-28T14:39:23.848Z · LW(p) · GW(p)

That sounds to me like he's saying this will happen regardless

The bolded section means that Eliezer doesn't endorse the argument, not that it is not an argument.

it still might be a net plus but it's something proponents will have to address.

Why would the proponents have to address it, unless it was an argument against their position? Otherwise it would be a non sequitur.

[Edit] To be clear, I agree that policy debates should not be one-sided. But the way I interpret that is that there are both positive and negative consequences for any policy, and the positive consequences are arguments for and the negative consequences are arguments against.

Replies from: falenas108
comment by falenas108 · 2013-10-28T19:17:05.504Z · LW(p) · GW(p)

Okay, seems like it was mostly a semantics disagreement then.

Though I am a bit caught up on your saying Eliezer doesn't endorse the argument. Using your terminology, I think he does endorse the argument, meaning he thinks that's a legitimate point against having "banned stores." But, he also endorses other arguments for them, and to him, those weigh more.

Replies from: Vaniver
comment by Vaniver · 2013-10-28T19:58:39.452Z · LW(p) · GW(p)

I believe Eliezer endorses the decision principle "choose the option with largest net benefit," but predicts that democratic societies will operate under the decision principle "choose the option which can be best defended publicly."

That is, his comment as a whole makes three related points: first, a consequence of having stores where banned products are sold is that unintelligent customers will kill or seriously injure themselves with the products sold therein, second, this consequence is sad, and third, democratic societies are unwilling to allow consequences that are visibly that sad. For me to say he endorses the argument, I would require that he say or imply "and those societies are right," when I think he heavily implies that he understands but disagrees with their argument.

comment by somervta · 2013-10-27T00:32:41.844Z · LW(p) · GW(p)

typo:

where Blue and Green remain important remain important political identities

comment by JoshuaZ · 2013-10-28T02:06:56.381Z · LW(p) · GW(p)

and no, that's no just a stereotype)

Typo- "no" should be "not".

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-10-28T04:19:09.530Z · LW(p) · GW(p)

Thanks. Fixed.

comment by A1987dM (army1987) · 2013-10-27T10:34:26.960Z · LW(p) · GW(p)

Typo: “a rejection of and” should be “a rejection of a”.

comment by NoSignalNoNoise (AspiringRationalist) · 2013-10-26T16:16:24.447Z · LW(p) · GW(p)

If you still don't find any of this odd, think of the "skeptic" groups that freely ufologists or psychics or whatever

Is that statement missing a word?

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-10-26T16:34:33.532Z · LW(p) · GW(p)

Yup. Fixed.

comment by katydee · 2013-11-27T18:53:48.830Z · LW(p) · GW(p)

You know, when I first read this post I thought "You have some interesting points, but this is obviously just a clever argument that's going to be used to justify posting stupid bullshit to LessWrong," so I downvoted. I didn't make that remark in public, though, because it would be rude and maybe I would end up being wrong.

Now that I see what this post is being used to justify, it seems clear that my prediction was correct.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-11-28T03:41:52.214Z · LW(p) · GW(p)

Why are people upvoting a comment that doesn't actually object to anything in this post, and just refers to another post I wrote as "stupid bullshit?"

Replies from: katydee
comment by katydee · 2013-11-28T22:45:14.674Z · LW(p) · GW(p)

Why are people upvoting a comment that doesn't actually object to anything in this post, and just refers to another post I wrote as "stupid bullshit?"

Perhaps they agree with me?

I honestly didn't want to post any of this, and indeed withheld my objection at first, because I (like Eliezer) think "that's just a clever argument" can quickly become a fully general debating tactic. But it's striking to me how quickly my prediction was, in my view, proven correct, so I thought it was worth drawing attention to.

comment by BaconServ · 2013-10-27T07:52:29.609Z · LW(p) · GW(p)

Christ is it hard to stop constantly refreshing here and ignore what I know will be a hot thread.

I've voted on the article, I've read a few comments, cast a few votes, made a few replies myself. I'm precommitting to never returning to this thread and going to bed immediately. If anyone catches me commenting here after the day of this comment, please downvote it.

Damn I hope nobody replies to my comments...

comment by ChristianKl · 2013-10-27T00:25:01.436Z · LW(p) · GW(p)

If you read Stuart's original post, it's clear this comment is reading ambiguity into the post where none exists. You could argue that Stuart was a little careless in switching between talking about AGW and global warming simpliciter, but I think his meaning is clear: he thinks rejection of AGW is irrational, which entails that he thinks the stronger "no warming for any reason" claim is irrational. And there's no justification whatsoever for suggesting Stuart's post could be read as saying, "if your estimate of future warming is only 50% of the estimate I prefer you're irrational"—or as taking a position on ethical theories, for that matter.

No. It's not about the binary choice of whether or not global warming is real. Someone who thinks that there is a great amount of uncertainty in the science of global warming wouldn't be labeled irrational by steven0461's criteria as long as he admits to a certain median estimate.

There are various rational arguments that indicate that climate scientists are overconfident in their own knowledge. But even if you believe them, the conclusion that the median estimate of climate sensitivity to doubled CO2 is lower than 2 degrees Celsius is irrational.

Replies from: None
comment by [deleted] · 2013-11-26T19:15:40.032Z · LW(p) · GW(p)

There are various rational arguments that indicate that climate scientists are overconfident in their own knowledge.

Really? It's always possible to make plausible-sounding, rational-sounding arguments for almost any proposition, especially when you can formulate them as conditional probabilities. It's much harder to actually gather the statistics to back those up. I'd like to see these, please.

Replies from: ChristianKl, Lumifer
comment by ChristianKl · 2013-11-26T21:27:34.671Z · LW(p) · GW(p)

As a start, there are plenty of studies that show that most humans are overconfident most of the time.

Long-Term Capital Management sank because of what its creators considered to be a 10-sigma event. I would guess that climate models quite often use normal distributions as a proxy for things that in 99% of cases behave the same way as normal distributions.
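
A rough illustration of how much the tail assumption matters (a sketch, not anything from an actual climate model; the Student-t with 3 degrees of freedom is an arbitrary stand-in for a fat-tailed distribution):

```python
# A normal distribution treats a 10-sigma deviation as essentially impossible,
# while a fat-tailed distribution does not.
from scipy import stats

print(stats.norm.sf(10))       # ~7.6e-24: "never happens" under normality
print(stats.t.sf(10, df=3))    # ~1.1e-03: rare but entirely possible with fat tails
```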

A second issue is that climate scientists generally validate their models through "hindcasts". They think making accurate hindcasts is nearly the same as making accurate forecasts.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-11-26T21:57:53.468Z · LW(p) · GW(p)

As a start, there are plenty of studies that show that most humans are overconfident most of the time.

Beware fully general counterarguments. In this case, the issue of overconfidence applies just as well to professional climate scientists as to people who aren't, and then other cognitive biases, such as Dunning-Kruger, start becoming relevant.

Replies from: ChristianKl
comment by ChristianKl · 2013-11-27T03:23:31.528Z · LW(p) · GW(p)

Overconfidence shouldn't lead us to believe that p = 0.5. It would, however, make sense to deduct a few percentage points from the result.

If a climate scientist tells you something is 0.99 likely to be true, maybe it makes sense to treat the event as 0.95 likely to be true.

You don't need to fully understand how something works to know that someone doesn't have 0.99 certainty for a claim.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-11-27T03:46:39.605Z · LW(p) · GW(p)

Ok. That seems like a reasonable argument. So how much of a reduction is warranted may be up in the air then. There's also a serious denotative v. connotative issue here, since one needs to carefully distinguish the actual statement "Climate scientists are likely overconfident just as almost everyone is" from all the statements made doubting climate science, anthropogenic global warming, etc. If you are only talking about a drop from .99 to .95 (or even from say .99 to .9) that isn't going to impact policy considerations much.

Replies from: ChristianKl
comment by ChristianKl · 2013-11-27T04:01:27.160Z · LW(p) · GW(p)

If you are only talking about a drop from .99 to .95 (or even from say .99 to .9) that isn't going to impact policy considerations much.

I think it matters when it comes to geoengineering policy making. If the policy community thinks that climate scientists are really good at predicting climate, I think there's a good chance that they will sooner or later go for geoengineering.

If we want to stay alive in the next century, it would be good if policy makers could distinguish events with 0.9, 0.99 and 0.999 certainty.

Even a 0.001 chance that a given asteroid will extinguish humanity is too high. It's valuable to keep in mind that low-probability events happen from time to time and that you have to do scenario planning that integrates them.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-11-27T04:28:03.709Z · LW(p) · GW(p)

Sure, but right now, almost no one is talking about geoengineering as a serious solution. The policy focus right now is much more on reducing carbon dioxide production. So in the context where these discussions are occurring, these differences will matter less. Right now, I'd focus much more on getting policy makers to be able to reliably distinguish something like 0.9 from something like 0.1. In this particular issue, even that is apparently difficult. Getting an ability to appreciate an extremely rough estimate is a much higher policy priority.

Replies from: ChristianKl
comment by ChristianKl · 2013-11-27T13:07:54.917Z · LW(p) · GW(p)

Sure, but right now, almost no one is talking about geoengineering as a serious solution.

That's a very poor perspective when you care about existential risk. Memes do have effects 10 or 20 years down the road.

It's bad to say things that are clearly false, such as that the evidence for climate change is comparable to that for evolution. Evolution being true is something with much better evidence than p = 0.999.

The point of LessWrong isn't to focus on short-term considerations. It's rather to focus on finding methods to think rationally about issues. It's about letting a lot of people in their twenties learn those methods. Then, when those smart people are in positions of authority in their thirties or forties, you get a payoff.

If scientists lie to the world to get policy makers to make good short-term policy decisions, that's expensive over the long term. Scientists shouldn't orient themselves towards short-term decision making but should focus on staying with the truth.

comment by Lumifer · 2013-11-26T19:28:11.372Z · LW(p) · GW(p)

I'd like to see these, please.

Go, start reading

http://wattsupwiththat.com/
http://climateaudit.org/

comment by Douglas_Knight · 2013-10-26T16:44:28.585Z · LW(p) · GW(p)

If you read Stuart's original post, it's clear

I hate this rhetoric. I did read Stuart's post.

If you'd read Vaniver's comment, you'd agree that Stuart was acting in bad faith. So you didn't read it, but then you responded to it! It is extremely rude to respond to a comment you haven't read.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-10-26T16:51:58.356Z · LW(p) · GW(p)

Do you have an actual argument that there was ambiguity in Stuart's post?

Replies from: Vaniver, Douglas_Knight
comment by Vaniver · 2013-10-27T21:49:33.007Z · LW(p) · GW(p)

How about Stuart_Armstrong's response to satt's comment? It looks to me like Stuart agrees there was ambiguity there.

(And, to be clear, by "ambiguity there" I am using ambiguity as a one-place word by choosing the maximum of the two-place ambiguity among the actual readers of the post. Stuart has no ambiguity about what Stuart meant, but Steven does, and so the one-place ambiguity is Steven's ambiguity.)

comment by Douglas_Knight · 2013-10-27T14:04:20.233Z · LW(p) · GW(p)

If you'd read my comment, it's clear that I am objecting to your rhetoric. Only you can prevent the jump to the assumption that I have a dog in the fight.

comment by [deleted] · 2013-10-27T01:36:11.081Z · LW(p) · GW(p)

the broader society isn't going to stop spontaneously labeling various straightforward empirical questions as Blue or Green issues. If you want to stop your mind from getting killed by whatever issues other people have decided are political, the only way is to control how you react to that.

This is true and embodies the quality of Tsuyoku naritai.

comment by Sophronius · 2013-10-27T22:23:54.726Z · LW(p) · GW(p)

Thank you Chris for writing this. Your article covers about 70% of what I was trying to get across in my recent article. However, your post is much better optimized for persuasion than mine, I have to admit. It's just as obvious in your post which party is the one holding the crazy view in your example, and you are presenting the same viewpoint that people here should stop pretending that crazy viewpoints are any more valid for being considered "political", yet somehow it comes across better. Maybe because you avoid using the word "crazy" and instead compare these ideas to the moon being made of gold, though it amounts to the same thing. You have my gratitude either way.

I wish I could turn all my downvotes into upvotes so I could give them to you, but just the one will have to do.

Edit: On second reading, I notice that you are specifically addressing republicans/libertarians (greens) and writing from their perspective ("Green ideology is vastly superior to the Blue ideology"). This is probably a large part of why it is better received.

comment by Dagon · 2013-10-26T16:31:43.996Z · LW(p) · GW(p)

That's a LOT of text, without a clear thesis or recommendation. Can you summarize your point and then outline the evidence rather than going purely on detailed examples?

Are you just trying to say that it's difficult to separate your beliefs and values, difficult to discuss only a segment of a popular belief cluster, and still more difficult to signal to others that you're doing so?

Replies from: Ben_LandauTaylor, peter_hurford, TheOtherDave
comment by Ben_LandauTaylor · 2013-10-26T17:47:08.493Z · LW(p) · GW(p)

A lot of text? It's 1400 words. An average adult can read this in five minutes. That is not too much time to invest in a top-level post.

Replies from: Viliam_Bur, Vladimir_Nesov
comment by Viliam_Bur · 2013-10-26T18:29:12.498Z · LW(p) · GW(p)

On the internet no one behaves like an adult, and no one has the patience to spend five minutes reading an article without pictures. This is why Science invented abstracts. :D

tl;dr: use abstracts

Replies from: Ben_LandauTaylor
comment by Ben_LandauTaylor · 2013-10-26T19:03:24.157Z · LW(p) · GW(p)

I agree that this phenomenon occurs, but I respond to it differently. Part of what I love about Less Wrong is that it's less tolerant than most places of the "tl;dr lol" approach to skimming content that you describe. I want to maintain or even increase the force of that social norm.

I'm in favor of summaries and abstracts on long pieces. This is not a long piece. The first paragraph is the summary. A separate "abstract" section would only encourage people to skip the body of the essay, and that would be bad.

Replies from: Dagon
comment by Dagon · 2013-10-27T07:12:31.168Z · LW(p) · GW(p)

Fair enough - it's not all that long if it was necessary for a novel or interesting point. It's too long for something relatively simple that I already have in my toolbox, and there was no way to figure out if that's all it was without reading the whole thing.

Replies from: BaconServ
comment by BaconServ · 2013-10-27T07:18:39.545Z · LW(p) · GW(p)

So because you already have the tool, nobody else needs to be told about it? I feel like I'm strawmanning here, but I'm not sure what your point is if not, "I didn't need to read this."

Replies from: Dagon
comment by Dagon · 2013-10-27T07:32:14.229Z · LW(p) · GW(p)

"I didn't need to read this" is probably close to what prompted my comment. Along with "and I suspect most readers also won't get much out of it",

I should have just said "this should have gone in Discussion first, then (if it was popular) been rewritten as a top-level post with a clearer summary". Since it's gotten a reasonable number of comments and upvotes, I think I was incorrect in my assessment that most readers would be like me.

Replies from: BaconServ
comment by BaconServ · 2013-10-27T07:45:19.739Z · LW(p) · GW(p)

Thank you. I no longer suspect you of being mind-killed by "politics is the mind-killer." Retracted.

Maybe I'm being too hasty trying to pinpoint people being mind-killed here, but it's hard to ignore that it's happening. I think I probably need to take my own advice right about now if I'm trying to justify my jumping to conclusions with statements like, "It's hard to ignore that it's happening."

I was planning to make a top-level comment here to the effect of "INB4 obvious mind-kill," but I think I just realized why the thoughts that thought that up were flawed at a basic level. Still, I think someone should point out that the comments here are barely touching the content of this article, which is odd for LessWrong.

comment by Vladimir_Nesov · 2013-10-26T18:04:34.590Z · LW(p) · GW(p)

That is not too much time to invest in a top-level post.

It's not a lot of time, but it can simultaneously be too much, if the value of the post is even smaller than the small cost.

Replies from: Ben_LandauTaylor
comment by Ben_LandauTaylor · 2013-10-26T18:25:06.288Z · LW(p) · GW(p)

True! But (1) that's not the case for this post, and (2) if it were the case, then this post would not belong in Main, and adding a summary would not fix the fundamental problem of low value.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-10-27T00:49:50.869Z · LW(p) · GW(p)

The fact that the author puts a piece in main, or that the community votes it highly, or that the administrators do not remove it from main, is only very weak evidence that I want to read it.

Replies from: BaconServ
comment by BaconServ · 2013-10-27T07:15:21.538Z · LW(p) · GW(p)

Do you have an actual complaint here, or are you disagreeing for the sake of disagreeing?

Because it sounds a damn lot like you're upset about something but know better than to say what you actually think, so you're opting to make sophomoric objections instead.

comment by Peter Wildeford (peter_hurford) · 2013-10-27T03:44:19.968Z · LW(p) · GW(p)

The thesis / recommendation seems pretty clear to me from the opening paragraph -- if you see a mind-killing political example, just calm down as a reader and refuse to be mind-killed.

Replies from: Dagon
comment by Dagon · 2013-10-27T07:24:47.221Z · LW(p) · GW(p)

IMO, that's not helpful advice. It provides very few tools for diagnosing when you're overreacting, and no techniques for actually implementing this refusal.

More importantly, it ignores the fact that you need mutual knowledge, not just calm, that you AND ALL READERS are interpreting this as only a value-free fact estimate, and not the overwhelmingly more common cluster of topics that includes how to act on it.

Replies from: BaconServ
comment by BaconServ · 2013-10-27T07:35:19.349Z · LW(p) · GW(p)

We can only go a step at a time. The other recent post about politics in Discussion was rife with obvious mind-kill. I'm seeing this thread filling up with it too. I'd advocate downvoting of obvious mind-kill, but it's probably not very obvious at all and would just result in mind-killed people voting politically without giving the slightest measure of useful feedback. I'm really at a loss for how to get over the mind-kill of politics and the highly paired autocontrarian mind-kill of "politics is the mind-killer" other than just telling people to shut the fuck up, stop reading comments, stop voting, go lie down, and shut the fuck up.

comment by TheOtherDave · 2013-10-26T17:37:54.789Z · LW(p) · GW(p)

FWIW, I would summarize the substantive point as "Given behavior B from agent A where B is differentially characteristic of group G and trait T is also differentially characteristic of group G, don't infer that A has T."

To which I would respond "Nah, go ahead and infer it, but be aware that you might be wrong and keep your confidence levels as well-calibrated as you can."

That said, I might be missing the OP's intended point.