Some Thoughts Are Too Dangerous For Brains to Think

post by WrongBot · 2010-07-13T04:44:12.287Z · LW · GW · Legacy · 318 comments

Contents

  A few examples (in approximately increasing order of controversy):
  If you proceed anyway...
[EDIT - While I still support the general premise argued for in this post, the examples provided were fairly terrible. I won't delete this post because the comments contain some interesting and valuable discussions, but please bear in mind that this is not even close to the most convincing argument for my point.]
A great deal of the theory involved in improving computer and network security involves the definition and creation of "trusted systems", pieces of hardware or software that can be relied upon because the input they receive is entirely under the control of the user. (In some cases, this may instead be the system administrator, manufacturer, programmer, or any other single entity with an interest in the system.) The only way to protect a system from being compromised by untrusted input is to ensure that no possible input can cause harm, which requires either a robust filtering system or strict limits on what kinds of input are accepted: a blacklist or a whitelist, roughly.
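As a loose illustration of that distinction (a minimal sketch with invented rules, not any real system's filter): a blacklist rejects only what it recognizes as harmful, while a whitelist rejects everything it cannot positively verify.
```python
import re

# Invented example patterns; a real filter would be far more thorough.
BLACKLISTED_PATTERNS = [r"<script\b", r"\bDROP\s+TABLE\b"]

def blacklist_filter(user_input: str) -> bool:
    """Accept anything that doesn't match a known-bad pattern (fails open)."""
    return not any(re.search(p, user_input, re.IGNORECASE)
                   for p in BLACKLISTED_PATTERNS)

def whitelist_filter(user_input: str) -> bool:
    """Accept only input matching an explicitly allowed form (fails closed)."""
    return re.fullmatch(r"[A-Za-z0-9 _.-]{1,64}", user_input) is not None

# A novel attack the blacklist never anticipated slips through;
# the whitelist rejects it because it was never explicitly allowed.
print(blacklist_filter("hello; rm -rf /"))  # True
print(whitelist_filter("hello; rm -rf /"))  # False
```
The blacklist fails open on anything its designers never anticipated; the whitelist fails closed.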
One of the downsides of having a brain designed by a blind idiot is that said idiot hasn’t done a terribly good job with limiting input or anything resembling “robust filtering”. Hence that whole bias thing. A consequence of this is that your brain is not a trusted system, which itself has consequences that go much, much deeper than a bunch of misapplied heuristics. (And those are bad enough on their own!)
In discussions of the AI-Box Experiment I’ve seen, there has been plenty of outrage, dismay, and incredulity directed towards the underlying claim: that a sufficiently intelligent being can hack a human via a text-only channel. But whether or not this is the case (and it seems likely), the vulnerability is trivial in the face of a machine that is completely integrated with your consciousness and can manipulate it, at will, towards its own ends and without your awareness.
Your brain cannot be trusted. It is not safe. You must be careful with what you put into it, because it will decide the output, not you. We have been warned, here on Less Wrong, that there is dangerous knowledge; Eliezer has told us that knowing about biases can cause us harm. Nick Bostrom has written a paper describing dozens of ways in which information can hurt us, but he missed (at least) one.
The acquisition of some thoughts, discoveries, and pieces of evidence can lower our expected outcomes, even when they are true. This can be accounted for; we can debias. But some thoughts and discoveries and pieces of evidence can be used by our underhanded, untrustworthy brains to change our utility functions, a fate that is undesirable for the same reason that being forced to take a murder pill is undesirable.
(I am making a distinction here between the parts of your brain that you have access to and can introspect about, which for lack of better terms I call “you” or “your consciousness”, and the vast majority of your brain, to which you have no such access or awareness, which I call “your brain.” This is an emotional manipulation, which you are now explicitly aware of. Does that negate its effect? Can it?)

A few examples (in approximately increasing order of controversy):

Identity Politics: Paul Graham and Kaj Sotala have covered this ground, so I will not rehash their arguments. I will only add that, in the absence of a stronger aspect of your identity, truly identifying as something new is an irreversible operation. It might be overwritten again in time, but your brain will not permit an undo.
Power Corrupts: History is littered with examples of idealists seizing power only to find themselves betraying the values they once held dear. No human who values anything more than power itself should seek it; your brain will betray you. There has not yet been a truly benevolent dictator and it would be delusional at best to believe that you will be the first. You are not a mutant. (EDIT: Michael Vassar has pointed out that there have been benevolent dictators by any reasonable definition of the word.)
Opening the Door to Bigotry: I place a high value on not discriminating against sentient beings on the basis of artifacts of the birth lottery. I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.
One specific and relatively common version of this involves people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman. There may be exceptions, but I haven’t met them. Based on all the evidence I have, I’ve made a conscious decision to avoid seeking out information on sex differences in intelligence and other, similar kinds of research. I might be able to resist my brain’s attempts to change what I value, but I’m not willing to take that risk; not yet, not with the brain I have right now.
If you know of other ways in which a person’s brain might stealthily alter their utility function, please describe them in the comments.

If you proceed anyway...

If the big red button labelled “DO NOT TOUCH!” is still irresistible, if your desire to know demands you endure any danger and accept any consequences, then you should still think really, really hard before continuing. But I’m quite confident that a sizable chunk of the Less Wrong crowd will not be deterred, and so I have a final few pieces of advice.
  • Identify knowledge that may be dangerous. Forewarned is forearmed.
  • Try to cut dangerous knowledge out of your decision network. Don’t let it influence other beliefs or your actions without your conscious awareness. You can’t succeed completely at this, but it might help.
  • Deliberately lower dangerous priors, by acknowledging the possibility that your brain is contaminating your reasoning and then overcompensating, because you know that you’re still too overconfident.
  • Spend a disproportionate amount of time seeking contradictory evidence. If believing something could have a great cost to your values, make a commensurately great effort to be right.
  • Just don’t do it. It’s not worth it. And if I found out, I’d have to figure out where you live, track you down, and kill you.
Just kidding! That would be impossibly ridiculous.

318 comments

Comments sorted by top scores.

comment by simplicio · 2010-07-14T02:39:39.248Z · LW(p) · GW(p)

I upvoted this post because it's a fascinating topic. But I think a trip down memory lane might be in order. This 'dangerous knowledge' idea isn't new, and examples of what was once considered dangerous knowledge should leap into the minds of anybody familiar with the Coles Notes of the history of science and philosophy (Galileo anyone?). Most dangerous knowledge seems to turn out not to be (kids know about contraception, and lo, the sky has not fallen).

I share your distrust of the compromised hardware we run on, and blindly collecting facts is a bad idea. But I'm not so sure introducing a big intentional meta-bias is a great idea. If I get myopia, my vision is not improved by tearing my eyes out.

Replies from: simplicio
comment by simplicio · 2010-07-14T03:11:28.274Z · LW(p) · GW(p)

On reflection, I think I have an obligation to stick my neck out and address some issue of potentially dangerous knowledge that really matters, rather than the triviality (to us anyway) of heliocentrism.

Suppose (worst case) that race IQ differences are real, and not explained by the Flynn effect or anything like that. I think it's beyond dispute that that would be a big boost for the racists (at least short-term), but would it be an insuperable obstacle for those of us who think ontological differences don't translate smoothly into differences in ethical worth?

The question of sex makes me fairly optimistic. Men and women are definitely distinct psychologically. And yet, as this fact has become more and more clear, I do not think sexual equality has declined. Probably the opposite - a softening of attitudes on all sides. So maybe people would actually come to grips with race IQ differences, assuming they exist.

More importantly, withholding that knowledge could be much more disastrous.

(1) If the knowledge does come out, the racists get to yell "I told you so," "Conspiracy of silence" etc. Then the IQ difference gets magnified 1000x in the public imagination.

(2) If the knowledge does not come out, then underrepresentation of certain races in e.g., higher learning stands as an ugly fact sans explanation. Society beats its head against a problem of supposed endemic racism for eternity, when the real culprit is statistical differences in mean IQ. Even though - public perceptions be damned - statistical IQ differences should have all the moral weight of "pygmies are underrepresented in basketball."

Knowing about (potential) racial IQ differences is dangerous; so is a perpetual false presumption of racism resulting from ignoring those differences if they exist. Which one generates the most angst, long-term? I don't know. But the truth is probably more sustainable than a well-intentioned fib.

Replies from: WrongBot, None, MichaelVassar
comment by WrongBot · 2010-07-14T03:50:58.543Z · LW(p) · GW(p)

I'm inclined to agree with you.

I certainly don't think that avoiding dangerous knowledge is a good group strategy, due to (at least) difficulties with enforcement and unintended side-effects of the sort you've described here.

The question of sex makes me fairly optimistic. Men and women are definitely distinct psychologically. And yet, as this fact has become more and more clear, I do not think sexual equality has declined. Probably the opposite - a softening of attitudes on all sides. So maybe people would actually come to grips with race IQ differences, assuming they exist.

While the scientific consensus has become more clear, I'm not sure that it's reflected in popular or even intellectual opinion. Note the continuing popularity of Judith Butler in non-science academic circles, for example. Or the media's general tendency to discuss sex differences entirely outside of any scientific context. This may not be the best example.

Replies from: simplicio
comment by simplicio · 2010-07-14T04:08:02.496Z · LW(p) · GW(p)

This may not be the best example.

Perhaps not for society at large, but what about empirically-based intellectuals themselves? Do you think knowledge of innate sex differences leads to more or less sexism among them? I think it leads to less, although my evidence is wholly anecdotal.

There is another problem with avoiding dangerous knowledge. Remember the dragon in the garage? In order to make excuses ahead of time for missing evidence, the dragon proponent needs to have an accurate representation of reality somewhere in their heart-of-hearts. This leads to cognitive dissonance.

Return to the race/IQ example. Would you rather

  • know group X has an average IQ 10 points lower than group Y's, and just deal with it by trying your best to correct for confirmation bias etc., OR

  • intentionally keep yourself ignorant, while feeling deep down that something is not right.

?

I suspect the second option is worse for your behaviour towards group X. It would still be difficult for a human to do, but I'd personally rather swallow the hard pill of a 10-point average IQ difference and consciously correct for my brain's crappy heuristics, than feel queasy around group X in perpetuity because I know I'm lying to myself about them.
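
(To put the size of such a difference in perspective, here is a minimal sketch with purely hypothetical numbers, assuming normal distributions with the conventional standard deviation of 15: within-group variation dwarfs a 10-point gap in group means, which is what the conscious correction has to preserve.)

```python
from statistics import NormalDist

# Hypothetical illustration only: two IQ distributions whose means differ
# by 10 points, both with the conventional standard deviation of 15.
group_x = NormalDist(mu=95, sigma=15)
group_y = NormalDist(mu=105, sigma=15)

# Fraction of probability density the two distributions share.
print(f"overlap: {group_x.overlap(group_y):.2f}")  # ~0.74

# Probability that a randomly chosen member of X outscores a randomly
# chosen member of Y: the difference of the two normals is N(-10, 15*sqrt(2)).
diff = NormalDist(mu=group_x.mean - group_y.mean,
                  sigma=(group_x.variance + group_y.variance) ** 0.5)
print(f"P(X > Y): {1 - diff.cdf(0):.2f}")  # ~0.32, versus 0.50 with no gap
```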

comment by [deleted] · 2011-02-22T02:41:22.895Z · LW(p) · GW(p)

(1) If the knowledge does come out, the racists get to yell "I told you so," "Conspiracy of silence" etc. Then the IQ difference gets magnified 1000x in the public imagination.

I think we are seeing this among the (for now, fortunately) small group of relatively intelligent nonconformist people who change their opinion on this subject after looking at the data.

It biases them towards unduly sympathetic judgements of everyone else who happens to hold the same opinion.

"Oh that guy is just a bit rough around the edges, he's just ranting in that post. I kind of understand him, its so hard to keep seeing the same lies and falsehoods behind repeated over and over and over again."

or

"I'm mean sure he's wrong on that, but we can't be picky in choosing allies in our fight for truth when our opposition has the full weight of the state on their side. "

or eventually

"Yes the guy is a douche and bigot but if we are ever to stop wasting 4% of our GDP on policies based on this falsehood we need to be political realists and just work with them."

Leaking nonconformist, driven, principled (as in truth-seeking even when it costs them status) intelligent people to otherwise unworthy causes? This may prove to be dangerous in the long term.

One can't overestimate the propaganda value of calling a well-intentioned lie out as a lie and then proving that it actually is, you know, a lie. Our biases make us very vulnerable to being overly suspicious of someone who has been shown to be a liar. This is doubly true of our tendency to question their motives.

comment by MichaelVassar · 2010-07-15T16:57:25.302Z · LW(p) · GW(p)

Possibly, but faith in the truth winning out also looks like faith to me. Also, publicly at least, people have to pick their battles.

comment by MichaelVassar · 2010-07-15T17:17:33.665Z · LW(p) · GW(p)

I flat-out disagree that power corrupts as the phrase is usually understood, but that's a topic worthy of rational discussion (just not now with me).

The claim that there has never been a truly benevolent dictator though, that's simply a religious assertion, a key point of faith in the American democratic religion and no more worthy of discussion than whether the Earth is old, at least for usual meanings of the word 'benevolent' and for meanings of 'dictator' which avoid the no true Scotsman fallacy. There have been benevolent democratically elected leaders in the usual sense too. How confident do you think you should be that the latter are more common than the former though? Why?

I'm seriously inclined to down-vote the whole comment community on this one except for Peter, though I won't, for their failure to challenge such an overt assertion of such an absurd claim. How many people would have jumped in against the claim that without belief in god there can be no morality or public order, that the moral behavior of secular people is just a habit or hold-over from Christian times, and that thus that all secular societies are doomed? To me it's about equally credible.

BTW, just from the 20th century there are people from Ataturk to FDR to Lee Kuan Yew to Deng Xiaoping. More generally, more or less The Entire History of the World, especially East Asia, offers counter-examples.

Replies from: Mass_Driver, Vladimir_M, JanetK, Aurini, WrongBot, satt, Carinthium
comment by Mass_Driver · 2010-07-15T18:53:19.418Z · LW(p) · GW(p)

that's a topic worthy of rational discussion (just not now with me).

If this is a plea to be let alone on the topic, then, feel free to ignore my comment below -- I'm posting in case third parties want to respond.

The claim that there has never been a truly benevolent dictator though, that's simply a religious assertion,

Perhaps it's phrased poorly. There have certainly been plenty of dictators who often meant well and who often, on balance, did more good than harm for their country -- but such dictators are rare exceptions, and even these well-meaning, useful dictators may not have been "truly" benevolent in the sense that they presided over hideous atrocities. Obviously a certain amount of illiberal behavior is implicit in what it means to be a dictator -- to argue that FDR was non-benevolent because he served four terms or managed the economy with a heavy hand would indeed involve a "no true Scotsman" fallacy. But a well-intentioned, useful, illiberal ruler may nevertheless be surprisingly bloody, and this is a warning that should be widely and frequently promulgated, because it is true and important and people tend to forget it.

BTW, just from the 20th century there are people from Ataturk to FDR to Lee Kuan Yew to Deng Xiaoping. More generally, more or less The Entire History of the World, especially East Asia, offers counter-examples.

  • Ataturk is often accused of playing a leading role in the Armenian genocide, and at the very least seems to have been involved in dismissing courts that were trying war criminals without providing replacement courts, and in conquering territories where Armenians were massacred shortly after the conquest.

  • Deng Xiaoping was probably the most powerful person in China at the time of the Tiananmen Square massacres, and it is not clear that he exerted any influence to attempt to disperse the protesters peacefully or even with a minimum of violence: tanks were used in urban areas and secret police hunted down thousands of dissidents even after the protests had ended. One might have hoped that a benevolent illiberal ruler, when confronted with peaceful demands for democracy, would simply say "No." and ignore the protesters except in so far as they were creating a public nuisance.

  • FDR presided over the internment of over a hundred thousand Americans of Japanese descent in concentration camps solely on the basis of race, as well as the firebombing of Dresden, Hamburg, and Tokyo. The first conflagration of a residential area could have been an accident, but there is no evidence of which I am aware that the Allies ever took steps to prevent tens of thousands of civilians from being burnt alive, such as, e.g., taking care to only bomb non-urban industrial targets on hot, dry, summer days. Although Hitler is surely far more responsible than FDR for the Holocaust, a truly benevolent ruler would probably have spared an air raid or two to cut the railroad tracks that led from Jewish ghettos to German death camps. Whatever you might think about FDR's leadership (I would not presume to judge him or to say that I could have done better in his place), it was surprisingly bloody for a benevolent person.

  • Lee Kuan Yew seems to have been a fairly good dictator, but in his autobiography, he claims to have directly benefited from the US's war efforts in Vietnam, and he says that he would not have remained in power but for the US efforts. For its part, the US State Department explicitly claimed that the Vietnam war was intended to prevent countries like Lee Kuan Yew's Singapore from falling like dominoes after a possible Communization of Vietnam. Although it would probably be unfair to lay moral culpability for, e.g., My Lai or Agent Orange on Lee Kuan Yew (and thus I do not say he is in any way to blame), it is still worth noting that Yew's dictatorship was indirectly maintained by years of surprisingly bloody violence. Thus, Yew may be an exception that proves the rule -- even when you yourself, as an aspiring dictator, do not get your hands bloody as power corrupts you, it is possible that you are saved from bloody hands only by a friend who gets his hands bloody for you.

Replies from: MichaelVassar, Kevin
comment by MichaelVassar · 2010-07-15T21:04:33.884Z · LW(p) · GW(p)

I simply deny the assertion that dictators who wanted good results and got them were rare exceptions. Citation needed.

Admittedly, dictators have frequently presided over atrocities, unlike democratic rulers who have never presided over atrocities such as slavery, genocide, or more recently, say the Iraq war, Vietnam, or in an ongoing sense, the drug war or factory farming.

Human life is bloody. Power pushes the perceived responsibility for that brute fact onto the powerful. People are often scum, but avoiding power doesn't actually remove their responsibility. Practically every American can save lives for amounts of money which are fairly minor to them. What are the relevant differences between them and French aristocrats who could have done the same? I see one difference. The French aristocrats lived in a Malthusian world where they couldn't really have impacted total global suffering with the local efforts available.

How is G.W. Bush more corrupt than the people who elected him? He seems to care more for the third world poor than they do, and not obviously less for rule of law or the welfare of the US.

Playing fast and loose with geopolitical realities (Iraq is only slightly about oil, for instance), I'd like to conclude with the observation that even when you yourself, as a middle-class American, don't get your hands bloody as cheap oil etc. corrupts you, it is possible that you are saved from bloody hands by an elected representative who you hired to do the job.

Replies from: prase, Mass_Driver, kodos96
comment by prase · 2010-07-16T09:07:14.583Z · LW(p) · GW(p)

I simply deny the assertion that dictators who wanted good results and got them were rare exceptions. Citation needed.

The standards of evaluation of goodness should be specified in greater detail first. Else it is quite difficult to tell whether e.g. Atatürk was really benevolent or not, even if we agree on the goodness of his individual actions. Some of the questions:

  • are the points scored by getting desired good results cancelled by the atrocities, and to what extent?
  • could a non-dictatorial regime do better (given the conditions in the specific country and historical period), and if not, can the dictator bear full responsibility for his deeds?
  • what amount of goodness makes a dictator benevolent?

Unless we first specify the criteria, the risk of widespread rationalisation in this discussion is high.

Replies from: Blueberry
comment by Blueberry · 2010-07-16T17:22:53.018Z · LW(p) · GW(p)

Upvoted for the umlaut!

Replies from: prase
comment by prase · 2010-07-16T18:19:37.319Z · LW(p) · GW(p)

That was perhaps the cheapest upvote I ever got. Thanks. (Unfortunately Ceauşescu was anything but benevolent, else he would be mentioned and I could gather additional upvotes for the comma.)

comment by Mass_Driver · 2010-07-16T05:49:52.261Z · LW(p) · GW(p)

Citation needed.

It's hard to find proof of what most people consider obvious, unless it's part of the Canon of Great Moments in Science (tm) and the textbook industry can make a bundle off it. Tell you what -- if you like, I'll trade you a promise to look for the citation you want for a promise to look for primary science on anthropogenic global warming. I suspect we're making the climate warmer, but I don't know where to read a peer-reviewed article documenting the evidence that we are. I'll spend any reasonable amount of time that you do looking -- 5 minutes, 15 minutes, 90 minutes -- and if I can't find anything, I'll admit to being wrong.

unlike democratic rulers who have never presided over atrocities such as slavery, genocide, or more recently, say the Iraq war, Vietnam, or in an ongoing sense, the drug war or factory farming.

Slavery, genocide, and factory farming are examples of imperfect democracy -- the definition of "citizen" simply isn't extended widely enough yet. Fortunately, people (slowly) tend to notice the inconsistency in times of relative peace and prosperity, and extend additional rights. Hence the order-of-magnitude decrease in the fraction of the global population that is enslaved, and, if you believe Steven Pinker, in the frequency of ethnic killings. As for factory farming, I sincerely hope the day will come when animals are treated as citizens where appropriate, and the quicker it comes the better pleased I'll be. On the other hand, if you glorify dictatorship, or if you give dictatorship an opening to glorify itself, it tends to pretty effectively suppress talk about widening the circle of compassion. Better to have a hypocritical system of liberties than to let vice walk the streets without paying any tribute to virtue at all; such tributes can collect compound interest over the centuries.

The Vietnam war is generally recognized as a failure of democracy; the two most popular opponents of the war were assassinated, and the papers providing the policy rationale for the war were illegally hidden, ultimately causing the downfall of President Nixon. The drug war seems to be winding down as the high cost of prisons sinks in. The war on Iraq is probably democracy's fault.

Human life is bloody. Power pushes the perceived responsibility for that brute fact onto the powerful.

True enough, but it also pushes some of the real responsibility onto the powerful. I would much rather kill one person than stand by and let ten die, but I would much rather let one person die than kill one person -- responsibility counts for something.

it is possible that you are saved from bloody hands by an elected representative who you hired to do the job.

God forbid, if you'll excuse the expression. I'm not paying anybody to butcher for me, although sometimes, despite my best efforts, they take my tax dollars for the purpose. So far as I can manage it without being thrown in jail, it's not in my name; I vote against any incumbent that commits atrocities, and campaign for people who promise not to, and buy renewable energy from the power company and fair-trade imports from the third world and humanely-labeled meat from the supermarket. I'm sure that I still benefit from all kinds of bloody shenanigans, but it's not because I want to.

Finally, are you any relation to Michael Vassar, the political philosopher and scholar of just war theory? You seem to have a mind that is open like his, and a similarly agile debating style, but you also seem considerably bitterer than his published works.

Replies from: MichaelVassar, Emile, rela
comment by MichaelVassar · 2010-07-16T08:42:32.445Z · LW(p) · GW(p)

Good writing style!

I don't think I glorify dictatorship, but I do think that terrible dictatorships, like Stalinist Russia, have sometimes spoken of widening circles of compassion.

I do think you are glorifying democracy. Do you have examples of perfect democracy to contrast with imperfect democracy? Slaves frequently aren't citizens, but on other occasions, such as in the immense and enslaving US prison system (with its huge rates of false conviction and of conviction for absurd crimes), or the military draft, they are. The reduction in slavery may be due to philosophical progress trickling down to the masses, or it may simply be that slavery has become less economically competitive as markets have matured.

Responsibility counts for something, but for far less among the powerful. As power increases, custom weakens, and situations become more unique, acts/omissions distinctions become less useful. As a result, rapid rises in power do frequently leave people without a moral compass, leading to terrible actions.

I appreciate your efforts to avoid indirectly causing harms.

I didn't know about the other Michael Vassar. It's an uncommon name, so I'm surprised to hear it.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-07-18T04:30:13.959Z · LW(p) · GW(p)

Good writing style!

By which you mean, I suppose, that my skill as a rhetorician has exceeded my skill as a rationalist. Well, you may be right. Supposing you are, what do you suggest I do about it?

I do think you are glorifying democracy.

Well, yes, I am. Not our democracy, not any narrow technique for promoting democracy, but democracy as the broad principle that people should have a decisive say in the decisions that affect them strikes me as pretty awesome. I guess I might be claiming benefits for democracy in excess of what I have evidence to support, and that if I were an excellent rationalist, I would simply say, "I do not know what the effects of attempting democracy are."

I am not an excellent rationalist. What I do is to look hard for the answers to important questions, and then, if after long searching I cannot find the answers and I have no hope of finding the answers, but the questions still seem important, I choose an answer that appeals to my intuition.

I spent the better part of my undergraduate years trying to understand what democracy is, what violence is, and whether the two have any systematic relation to each other. Scientifically speaking, my answer is that we do not know, and will not know, in all likelihood, for quite some time. Violence happens in places where researchers find it difficult or impossible to record it; death tolls are so biased by partisans of various stripes, by the credulity of an entertainment-based media, and by the fog of war that one can almost never tell which of two similarly-sized conflicts was more violent. Democracy is, at best, a correlation among several variables, each of which can only be specified with 2 or 3 bits of meaningful information, and each of which might have different effects on violence. Given the confusion, to scientifically state a relationship between democracy and violence would be ridiculous.

And, yet, I find that I very much want to know what the relationship is between democracy and violence. I can oppose all offensive wars designed to change another country's regime type on the grounds that science supports no prediction that the certain deaths from war will be outweighed by bloodiness removed in an allegedly safer regime. What about defensive wars? I find that I cannot bring myself to say, "I would not fight to preserve my region's measure of democracy against an outside autocratic invader, because I do not know, scientifically, that such a fight would reduce total bloodiness." I would fight, believing without scientific evidence that such a war would be better than surrender.

Am I simply deluding myself? Most people on Less Wrong will think so. I do not particularly care. I am far more concerned about the danger of reasoning myself into a narcissistic, quietist corner where I never take political action than I am about the danger of backing an ideal that turns out to be empty.

Using the Hansonian "far-view reference class," the odds that an ideal chosen based on "things I believe in because I was taught to believe in them" is worth killing for are near zero. Using the same method, the odds that an ideal chosen based on "things that I believe in after carefully examining all available evidence and finding that I cannot think of a good reason to overturn my culture's traditions, despite having actively questioned them" is worth killing for are high enough that I can sleep at night. If you believe I should be awake, I look forward to your reply.

Replies from: MichaelVassar, MichaelVassar
comment by MichaelVassar · 2010-07-18T16:39:43.990Z · LW(p) · GW(p)

Not at all. Rhetorical skill IS a good thing, and properly contributes to logic. Your argument seems rational to me, in the non-Spock sense that we generally encourage here. What to do? Keep on thinking AND caring!

If the search you use is as fair and unbiased as you can make it, this looking hard for answers is the core of what being a good rationalist is. Possibly, you should look harder for the causes of systematic differences between people's intuitions, to see whether those causes are entangled with truth, but analysis has to stop at some point.

In practice, rationalists may back themselves into permanent inaction due to uncertainty, but the theory of rationality we endorse here says we should be doing what you claim to be doing. I find it extremely disturbing that we aren't communicating this effectively, though it's clearly our fault since we aren't communicating it effectively enough to ourselves for it to motivate us to be more dynamic either.

comment by MichaelVassar · 2010-07-18T16:27:01.220Z · LW(p) · GW(p)

When you say you glorify Democracy though, I think you mean something much closer to what I would call Coherent Extrapolated Volition than to what I would call Democracy. Something radically novel that hasn't ever been tried, or even specified in enough detail to call it a proposal without some charity.

As a factual matter, I would suggest that the systems of government that we call Democracies in the US may typically be a bit further in the CEV direction than those we typically call dictatorships, but if they are, it's a weak tendency, like the tendency of good painters to be good at basketball or something. You might detect it statistically, if you had properly operationalized it first, or vaguely suspect it's there based on intuitive perception, but you couldn't ever be very confident it was there.

It's obviously wrong to overturn cultural traditions which have been questioned but not refuted. Such traditions have some information value, if only for anthropic reasons, and more importantly, they are somewhat correlated with your values. In this particular case, if you limit your options under consideration to 'fight against invaders or do nothing' I have no objections. Real life situations usually present more options, but those weren't specified.

As an off-the-cuff example, I think it's obvious that a person who fought against the Nazis in WWII was doing something better than they would by staying home, even though the Nazis didn't invade the US and even valuing their lives moderately more highly than those of others. OTOH, the marginal expected impact of a soldier on the expected outcome of the war was surely SO MUCH less than the marginal expected impact of an independent person who put in serious effort to be an assassin, while the risk was probably not an order of magnitude smaller, so I think it's fair to say that they were still being irrational, judged as altruists, and were in most cases, well, only following orders. If they valued victory enough to be a soldier they should have done something more effective instead. (Have I just refuted Yossarian or confirmed him?)

I think that they should definitely sleep at night. Should feel happy and proud even... but in their shoes I wouldn't.

Whole thread voted up BTW.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-07-18T20:43:30.534Z · LW(p) · GW(p)

Thanks! Wholeheartedly agree, btw.

comment by Emile · 2010-07-16T10:26:58.878Z · LW(p) · GW(p)

Finally, are you any relation to Michael Vassar, the political philosopher and scholar of just war theory? You seem to have a mind that is open like his, and a similarly agile debating style, but you also seem considerably bitterer than his published works.

I think you're referring to Michael Walzer.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-07-18T04:05:07.401Z · LW(p) · GW(p)

Right! Thank you.

comment by rela · 2010-08-13T18:57:22.565Z · LW(p) · GW(p)

Tell you what -- if you like, I'll trade you a promise to look for the citation you want for a promise to look for primary science on anthropogenic global warming. I suspect we're making the climate warmer, but I don't know where to read a peer-reviewed article documenting the evidence that we are.

I don't know if you're still looking for this, and if this would be an appropriate place to post links. But:

Primary Evidence:

Other Supporting evidence:

Contradicting evidence:

  • extremes of monthly average temperatures in Central England do not appear to match either a "high extremes after 1780s/1850s only" or "low extremes before 1780s/1850s only" hypothesis. Manley, "Central England temperatures: Monthly means 1659 to 1973" (http://onlinelibrary.wiley.com/doi/10.1002/qj.49710042511/abstract), Quarterly Journal of the Royal Meteorological Society, Volume 100, Issue 425, pages 389–405, July 1974.

Hope that's helpful.

comment by kodos96 · 2010-08-13T21:51:41.869Z · LW(p) · GW(p)

atrocities such as slavery, genocide, or more recently, say the Iraq war, Vietnam, or in an ongoing sense, the drug war or factory farming

factory farming? huh?

comment by Kevin · 2010-07-16T08:25:45.314Z · LW(p) · GW(p)

One might have hoped that a benevolent illiberal ruler, when confronted with peaceful demands for democracy, would simply say "No." and ignore the protesters except in so far as they were creating a public nuisance.

In America, we have grown jaded towards protests because they don't ever accomplish anything. But at their most powerful, protests become revolutions. If Deng had just ignored the protesters indefinitely, the CCP would have fallen. Perhaps the protest could have been dispersed without loss of life, but it's only very recently that police tactics have advanced to the point of being able to disperse large groups of defensively-militarized protesters without killing people. See http://en.wikipedia.org/wiki/Miami_model and compare to the failure of the police at the Seattle WTO protests of 1999.

This is a recent story about Deng's supposed backing of Tiananmen violence. http://www.nytimes.com/2010/06/05/world/asia/05china.html?_r=1

comment by Vladimir_M · 2010-07-15T18:20:54.594Z · LW(p) · GW(p)

MichaelVassar:

I'm seriously inclined to down-vote the whole comment community on this one except for Peter, though I won't, for their failure to challenge such an overt assertion of such an absurd claim.

I was tempted to challenge it, but I decided that it's not worth opening such an emotionally charged can of worms.

The claim that there has never been a truly benevolent dictator though, that's simply a religious assertion, a key point of faith in the American democratic religion and no more worthy of discussion than whether the Earth is old, at least for usual meanings of the word 'benevolent' and for meanings of 'dictator' which avoid the no true Scotsman fallacy. There have been benevolent democratically elected leaders in the usual sense too. How confident do you think you should be that the latter are more common than the former though? Why?

These are some good remarks and questions, but I'd say you're committing a fallacy when you contrast dictators with democratically elected leaders as if it were some sort of dichotomy, or even a typically occurring contrast. There have been many non-democratic political arrangements in human history other than dictatorships. Moreover, it's not at all clear that dictatorships and democracies should be viewed as disjoint phenomena. Unless we insist on a No-True-Scotsman definition of democracy, many dictatorships, including quite nasty ones, have been fundamentally democratic in the sense of basing their power on majority popular support.

Replies from: rhollerith_dot_com, MichaelVassar
comment by RHollerith (rhollerith_dot_com) · 2010-07-16T18:03:35.306Z · LW(p) · GW(p)

There have been many non-democratic political arrangements in human history other than dictatorships.

Good point. For example, if you squint hard enough, the choosing of a council or legislature through lots, as was done for a time in the Venetian state, is "democratic" in that everyone in some broad class (the people eligible to be chosen at random) had an equal chance to participate in the government, but would not meet with the approval of most modern advocates of democracy, even though IMHO it is worth trying again.

The Venetians understood that some of the people chosen by lot would be obviously incompetent at governing, so their procedure alternated phases in which a group was chosen by lot with phases in which the group that was the output of the previous phase voted to determine the makeup of the input to the next phase, the idea being that the voting phases would weed out those who were obviously incompetent. So, though there was voting, it was done only by the relatively tiny number of people who had been selected by lot -- and (if we ignore information about specific individuals) they had the same chance of becoming a legislator as the people they were voting on.
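
(A rough sketch of that alternation, with invented parameters and a crude model of "voting", purely to illustrate the lot/vote structure rather than the actual Venetian procedure:)

```python
import random

def venetian_selection(citizens, competence, rounds=3,
                       lot_size=30, elect_size=40, council_size=9):
    """Alternate sortition and voting (all parameters invented): a group drawn
    by lot 'votes' to choose the pool from which the next lot is drawn, so each
    voting phase can weed out candidates judged obviously incompetent."""
    pool = list(citizens)
    for _ in range(rounds):
        drawn = random.sample(pool, min(lot_size, len(pool)))   # phase chosen by lot
        # Voting phase, crudely modelled: each elector's judgement is the true
        # competence plus noise; the electors' average ranking picks the next pool.
        perceived = {c: sum(competence[c] + random.gauss(0, 0.3) for _ in drawn) / len(drawn)
                     for c in citizens}
        pool = sorted(citizens, key=perceived.get, reverse=True)[:elect_size]
    return random.sample(pool, council_size)                    # final council by lot

# Usage sketch: 200 eligible citizens with latent ability unknown to the lot.
random.seed(0)
citizens = [f"citizen_{i}" for i in range(200)]
competence = {c: random.random() for c in citizens}
print(venetian_selection(citizens, competence))
```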

IMHO probably the worst effect of Western civilization's current overoptimism about democracy will be to inhibit experiments in forms of non-democratic government that would not have been possible before information technology (including the internet) became broadly disseminated. (Of course such experiments should be small in scale till they have built up a substantial track record.)

Replies from: Vladimir_M
comment by Vladimir_M · 2010-07-16T18:51:47.122Z · LW(p) · GW(p)

rhollerith_dot_com:

IMHO probably the worst effect of Western civilization's current overoptimism about democracy will be to inhibit experiments in forms of non-democratic government that would not have been possible before information technology (including the internet) became broadly disseminated.

I beg to differ. The worst effect is that throughout recent history, democratic ideas have regularly been foisted upon peoples and places where the introduction of democratic politics was a perfect recipe for utter disaster. I won't even try to quantify the total amount of carnage, destruction, and misery caused this way, but it's certainly well above the scale of those political mass crimes and atrocities that serve as the usual benchmarks of awfulness nowadays. Of course, all this normally gets explained away with frantic no-true-Scotsman responses whenever unpleasant questions are raised along these lines.

For full disclosure, I should add that I care particularly strongly about this because I was personally affected by one historical disaster that was brought about this way, namely the events in former Yugoslavia. Regardless of what one thinks about who bears what part of the blame for what happened there, one thing that's absolutely impossible to deny is that all the key players enjoyed democratic support confirmed by free elections.

Replies from: cousin_it, rhollerith_dot_com
comment by cousin_it · 2010-07-16T19:31:58.472Z · LW(p) · GW(p)

Seconded. I live in Russia, and if you compare the well-being of citizens in Putin's epoch against Yeltsin's, Putin wins so thoroughly that it's not even funny.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-19T10:24:03.619Z · LW(p) · GW(p)

You could attribute the difference to many correlated features, such as the year beginning with "20" instead of "19".

Replies from: LucasSloan
comment by LucasSloan · 2010-07-19T19:36:44.995Z · LW(p) · GW(p)

Also: The economy in Yeltsin's day was unusually bad, in deep recession due to pre-collapse economic problems, combined with the difficulties of switching over. In addition, today's economy benefits from a relatively high price for oil.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-19T19:51:24.842Z · LW(p) · GW(p)

That would be a less absurdist version of my point.

Replies from: LucasSloan
comment by LucasSloan · 2010-07-20T01:17:31.148Z · LW(p) · GW(p)

I assumed you meant that economic growth (in general) meant that the wellbeing of people is generally going to be greater when the year count is greater. I was providing specific reasons why the economy at the time would have been worse than regressing economic growth would suggest, other than political leadership.

comment by RHollerith (rhollerith_dot_com) · 2010-07-16T21:47:39.720Z · LW(p) · GW(p)

Yes, that is a very bad effect of the overoptimism about democracy.

Another example: even the vast majority of those (the non-whites) who could not vote in Rhodesia were significantly better off than they came to be after the Jimmy Carter administration forced the country (now called Zimbabwe) to give them the vote.

comment by MichaelVassar · 2010-07-16T08:22:19.731Z · LW(p) · GW(p)

I agree with everything in your paragraph. The important distinction between states as I see it is more between totalitarian and non-totalitarian than between democratic and non-democratic, as the latter tends to be a fairly smooth continuum. I was working within the local parlance for an American audience.

comment by JanetK · 2010-07-15T17:40:47.076Z · LW(p) · GW(p)

I agree that statements like "all As are Bs" are likely to be only approximately true and if you look you will find counterexamples. But... 'power corrupts' is a fairly reliable rule of thumb as rules of thumb go. I include a couple of refs that took all of 3 minutes to find, although I couldn't find the really good one that I noticed a year or so ago.

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1298606 abstract: We investigate the effect of power differences and associated expectations in social decision-making. Using a modified ultimatum game, we show that allocators lower their offers to recipients when the power difference shifts in favor of the allocator. Remarkably, however, when recipients are completely powerless, offers increase. This effect is mediated by a change in framing of the situation: when the opponent is without power, feelings of social responsibility are evoked. On the recipient side, we show that recipients do not anticipate these higher outcomes resulting from powerlessness. They prefer more power over less, expecting higher outcomes when they are more powerful, especially when less power entails powerlessness. Results are discussed in relation to empathy gaps and social responsibility.

http://scienceblogs.com/cortex/2010/01/power.php from J Lehrer's comments: The scientists argue that power is corrupting because it leads to moral hypocrisy. Although we almost always know what the right thing to do is - cheating at dice is a sin - power makes it easier to justify the wrongdoing, as we rationalize away our moral mistake.

Replies from: gwern, MichaelVassar
comment by gwern · 2010-11-23T18:47:21.431Z · LW(p) · GW(p)

Somewhat relevant:

"Monarchs, more so than other autocrats, tend to develop norms that help elites solve their collective action problem. Such a “political culture” makes monarchs’ commitments credible. Therefore, monarchs should exhibit longer tenures and faster growth than non-monarchs. Time-series cross-sectional analyses corroborate these hypotheses for the Middle East and North Africa between 1950 and 2004. Monarchs are less likely to suffer coups, revolutions, or government crises. Additionally, as oil rents increase in monarchies, they generate higher economic growth - which does not happen in non-monarchies. A case study of Qatar’s political history puts flesh on a theory of monarchical political culture."

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1548222

Replies from: JanetK
comment by JanetK · 2010-12-07T10:05:47.256Z · LW(p) · GW(p)

I can think of a number of reasons why monarchs may suffer somewhat less from the 'power corrupts' tendency: (1) often educated from childhood to use power wisely (2) often feel their power is legit and therefore less fearful of overthrow (3) tend to get better 'press' than other autocrats so that abuse of power is less noticeable (4) often have continuity and structure in their advisors inherited from the previous monarch.

Despite this, there have been some pretty nasty monarchs through history - even ones that are thought of as great, like Good Queen Bess. However, if I had to live in an autocratic state I would prefer an established monarchy, all other things being equal.

comment by MichaelVassar · 2010-07-15T18:04:32.504Z · LW(p) · GW(p)

Voted up for using data, though I'm very far from convinced by the specific data. The first seems irrelevant or at best very weakly suggestive. Regarding the second, I'm pretty confident that scientists profoundly misunderstand what sort of thing hypocrisy is as a consequence of the same profound misunderstanding of what sort of thing mind is which led to the failures of GOFAI. I guess I also think they misunderstand what corruption is, though I'm less clear on that.

It's really critical that we distinguish power corrupting from fear and weakness producing pro-social submission, and from fearful people invoking morality to cover over cowardice. In the usual sense of the former concept, corruption is something that should be expected, for instance, to be much more gradual. One should really notice that heroes in stories for adults are not generally rule-abiding, and frequently aren't even typically selfless. Acting more antisocial, like the people you actually admire (except when you are busy resenting their affronts to you) do, because like them you are no longer afraid, is totally different from acting like people you detest.

I don't think that "power corrupts" is a helpful approximation at the level of critical thinking ability common here. (what models are useful depends on what other models you have).

comment by Aurini · 2010-07-16T06:13:36.595Z · LW(p) · GW(p)

Perhaps it would be more accurate to state "The structural dynamics of dictatorial regimes demand coercion be used, while decentralized power systems allow dissent"; even the Philosopher King must murder upstarts who would take the throne. Mass Driver's comments (below) support this, with Lee Kuan Yew's power requiring violent coercion performed on his behalf, and the examples of Democratic Despotism largely boil down to a lack of accountability and transparency in the elected leaders - essentially they became (have become) too powerful.

"Power corrupts" is just the colloquial form.

(It is possible that I am in a Death Spiral with this idea, but this analysis occurred to me spontaneously - I didn't go seeking out an explanation that fit my theory)

Replies from: MichaelVassar
comment by MichaelVassar · 2010-07-16T08:19:34.564Z · LW(p) · GW(p)

Voted up for precision.
I see decentralization of power as less relevant than regime stability as an enabler of non-violence. Kings in long-standing monarchies, philosophical or not, need use little violence. New dictators (classically called tyrants) need use much violence. In addition, they have the advantage of having been selected for ability and the disadvantage of having been poorly educated for their position.

Of course, power ALWAYS scales up the impact of your actions. Let's say that I'm significantly more careful than average. In that case, my worst actions include doing things that have a .1% chance of killing someone every decade. Scale that up by ten million and it's roughly equivalent to killing ten thousand people once during a decade-long reign over a mid-sized country. I'd call that much better than Lincoln (who declared martial law and was an elected dictator if Hitler was one) or FDR but MUCH worse than Deng. OTOH, Lincoln and FDR lived in an anarchy, the international community, and I don't. I couldn't be as careful/scrupulous as I am if I lived in an anarchy.
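
(For what it's worth, a quick check of that scaling, just restating the comment's own numbers:)

```python
# The comment's own numbers: a 0.1% chance of causing a death per decade,
# scaled up ten-million-fold by ruling a mid-sized country.
p_death_per_decade = 0.001
power_multiplier = 10_000_000
print(p_death_per_decade * power_multiplier)  # 10000.0 expected deaths per decade
```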

comment by WrongBot · 2010-07-15T18:22:25.167Z · LW(p) · GW(p)

While I'd disagree with your description of FDR as a dictator, you're quite right about Ataturk, and your other examples expose my woefully insufficient knowledge of non-Western history. My belief has been updated, and the post will be as well, in a moment.

Thanks.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-07-16T08:24:11.561Z · LW(p) · GW(p)

Thank you! I'm so happy to have a community where things like this happen. Are you in agreement with my description of Lincoln as a dictator below? He's less benevolent than FDR but I'd still call him benevolent and he's a more clear dictator.

Replies from: WrongBot
comment by WrongBot · 2010-07-16T17:14:49.911Z · LW(p) · GW(p)

Lincoln's a little more borderline, but so far as I'm aware, he didn't do anything to mess with the 1864 elections; I think most people would think that that keeps him on the non-dictator end of the spectrum.

Of course, the validity of that election was based on a document that he was actively violating at the time, so there definitely seems to be room for debate.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-07-17T01:30:50.619Z · LW(p) · GW(p)

In addition, there's the fact that most of the Southern States couldn't vote at the time. It was basically unthinkable that he could have lost the elections. Democratic and dictatorial aren't natural types, but I'd say Lincoln is at least as far in the dictatorial direction as Putin, Nazarbayev, or almost any other basically sane ex-Soviet leader.

comment by satt · 2010-07-16T03:14:52.029Z · LW(p) · GW(p)

I'm seriously inclined to down-vote the whole comment community on this one except for Peter, though I won't, for their failure to challenge such an overt assertion of such an absurd claim.

I didn't challenge it because I didn't find it absurd. I've asked myself in the past whether I could think of heads of state whose orders & actions were untarnished enough that I could go ahead and call them "benevolent" without caveats, and I drew a blank.

I'd guess my definition of a benevolent leader is less inclusive than yours; judging by your child comment it seems as if you're interpreting "benevolent dictator" as meaning simply "dictators who wanted good results and got them". To me "benevolent" connotes not only good motives & good policies/behaviour but also a lack of very bad policies/behaviour. Other posters in this discussion might've interpreted it like I did.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-07-16T08:05:37.547Z · LW(p) · GW(p)

Possibly. OTOH, the poster seems to have been convinced. I draw a blank on people, dictators or not, who don't engage in very bad policies/behavior on whatever scale they are able to act on. No points for inaction in my book.

comment by Carinthium · 2010-11-23T09:04:42.034Z · LW(p) · GW(p)

I know somebody who used to work for Lee Kuan Yew, who has testified that in quite a few ways he at least has been corrupted (things such as creating a slush fund, giving a man who saved his life a public house he didn't qualify for, etc.).

Replies from: gwern
comment by gwern · 2010-11-23T18:44:25.611Z · LW(p) · GW(p)

That doesn't sound very corrupted to me.

If your standard of corruption is that stringent, you could probably make a case for Barack Obama being corrupted - the Rezko below-market-price business, his aunt getting asylum and public housing, etc.

(And someone like George W. Bush is even easier; Harken Energy, anyone?)

Replies from: Vaniver
comment by Vaniver · 2010-11-23T20:34:11.450Z · LW(p) · GW(p)

Um, you're going to have a hard time claiming Obama isn't corrupted, or that he was uncorrupt to begin with. (As you mention, such a claim is even harder for Bush.)

Replies from: MichaelVassar, gwern
comment by MichaelVassar · 2010-11-24T09:41:51.144Z · LW(p) · GW(p)

If the standard makes ALL leaders corrupt it doesn't favor democratic over dictatorial ones, nor is it a very useful standard. Relative to their power, are the benefits Obama, Lee Kuan Yew or even Bush skim greater than those typical Americans seek in an antisocial manner? Even comparable?

Replies from: Vaniver
comment by Vaniver · 2010-11-24T20:08:39.986Z · LW(p) · GW(p)

If the standard makes ALL leaders corrupt it doesn't favor democratic over dictatorial ones, nor is it a very useful standard.

Useful for what? I agree it's not terribly useful for choosing whether person A or person B should hold role X, but I feel that question is a distraction - your design of role X is more important than your selection of a person to fill that role. And so the question of how someone acquired power is less interesting to me than the power that person has, and I think the link between the two is a lot weaker than people expect.

comment by gwern · 2010-11-23T20:37:45.043Z · LW(p) · GW(p)

I'm presenting a dilemma. Either your standards for corruption are so high that you have to call both Yew & Obama corrupt, or your standards are loose enough that neither fits according to listed examples.

I prefer to bite the latter bullet, but if you want to bite the former, that's your choice.

Replies from: Carinthium
comment by Carinthium · 2010-11-23T23:02:01.441Z · LW(p) · GW(p)

Isn't the intelligent solution to talk about degrees of corruption and minimisation? Measures to increase transparency over this sort of thing are almost certainly the solution to Obama-level corruption.

Replies from: gwern
comment by gwern · 2010-11-23T23:40:34.339Z · LW(p) · GW(p)

No, because that's a much more complex argument. Start with the simplest thing that could possibly work. If you don't reach any resolution or make any progress, then one can look into more sophisticated approaches.

Replies from: Carinthium
comment by Carinthium · 2010-11-24T00:07:10.530Z · LW(p) · GW(p)

The reason to look at it that way is that it sidesteps the problem of what is or isn't "corrupt" in general: instead, acceptable levels can be set (assuming one is in a position to suppress corruption in the first place) and corruption above the maximum level dealt with.

comment by Blueberry · 2010-07-13T20:42:06.880Z · LW(p) · GW(p)

If knowing the truth makes me a bigot, then I want to be a bigot. If my values are based on not knowing certain facts, or getting certain facts incorrect, then I want my values to change.

It may help to taboo "bigot" for a minute. You seem to be lumping a number of things under a label and calling them bad.

There's the question of how we treat people who are less intelligent (regardless of group membership). I'm fine with discriminating in some ways based on intelligence of the individual, and if it does turn out that Group X is statistically less intelligent, then maybe Group X should be underrepresented in important positions. This has consequences for policy decisions. Of course, there may be a way of increasing the intelligence of Group X:

Based on all the evidence I have, I’ve made a conscious decision to avoid seeking out information on sex differences in intelligence and other, similar kinds of research.

How are you going to help a disadvantaged group if you're blinding yourself to the details of how they're disadvantaged?

Replies from: WrongBot
comment by WrongBot · 2010-07-13T20:57:02.579Z · LW(p) · GW(p)

I'm fine with discriminating in some ways based on intelligence of the individual, and if it does turn out that Group X is statistically less intelligent, then maybe Group X should be underrepresented in important positions. This has consequences for policy decisions.

Agreed. But I should not make decisions about individual members of Group X based on the statistical trend associated with Group X, and I doubt my (or anyone's) ability to actually not do so in cases where I have integrated the belief that the statistical trend is true.

How are you going to help a disadvantaged group if you're blinding yourself to the details of how they're disadvantaged?

The short answer is that I'm not going to. I'm not doing research on human intelligence, and I doubt I ever will. The best I can hope to do is not further disadvantage individual members of Group X by discriminating against them on the basis of statistical trends that they may not embody.

People who are doing research that relates to human intelligence in some way should probably not follow this exact line of reasoning.

Replies from: Vladimir_M, Simplicius
comment by Vladimir_M · 2010-07-14T05:30:17.398Z · LW(p) · GW(p)

WrongBot:

But I should not make decisions about individual members of Group X based on the statistical trend associated with Group X [...]

Really? I don't think it's possible to function in any realistic human society without constantly making decisions about individuals based on the statistical trends associated with various groups to which they happen to belong (a.k.a. "statistical discrimination"). Acquiring perfectly detailed information about every individual you ever interact with is simply not possible given the basic constraints faced by humans.

Of course, certain forms of statistical discrimination are viewed as an immensely important moral issue nowadays, while others are seen simply as normal common sense. It's a fascinating question how and why exactly various forms of it happen (or fail) to acquire a deep moral dimension. But in any case, a blanket condemnation of all forms of statistical discrimination is an attitude incompatible with any realistic human way of life.

Replies from: WrongBot
comment by WrongBot · 2010-07-14T06:29:35.827Z · LW(p) · GW(p)

The "deep moral dimension" generally applies to group memberships that aren't (perceived to be) chosen: sex, gender, race, class, sexual orientation, religion to a lesser extent.

These are the kinds of "Group X" to which I was referring. Discriminating against someone because they majored in Drama in college or believe in homeopathy are not even remotely equivalent to racism, sexism, and the like.

Replies from: mattnewport, Vladimir_M, Matt_Simpson, None
comment by mattnewport · 2010-07-14T07:21:35.430Z · LW(p) · GW(p)

The well-documented discrimination against short men and ugly people, and the (more debatable) discrimination against the socially inept and those whose behaviour and learning style does not fit the compliant-worker mold that schools are largely structured to produce, are examples of discrimination that appear to receive less attention and concern.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-14T12:04:51.996Z · LW(p) · GW(p)

Opposition to discrimination doesn't just happen. It has to be organized and promoted for an extended period before there's an effect.

Afaik, that promotion typically has to include convincing people in the discriminated group that things can be different and that opposing discrimination is worth the risks and effort. In some cases, it also includes convincing them that they don't deserve to be mistreated.

comment by Vladimir_M · 2010-07-14T17:42:55.616Z · LW(p) · GW(p)

WrongBot:

The "deep moral dimension" generally applies to group memberships that aren't (perceived to be) chosen: sex, gender, race, class, sexual orientation, religion to a lesser extent.

This is not an accurate description of the present situation. To take the most blatant example, every country discriminates between its own citizens and foreigners, and also between foreigners from different countries (some can visit freely, while others need hard to get visas). This state of affairs is considered completely normal and uncontroversial, even though it involves a tremendous amount of discrimination based on group memberships that are a mere accident of birth.

Thus, there are clearly some additional factors involved in the moralization of other forms of discrimination, and the fascinating question is what exactly they are. The question is especially puzzling considering that religion is, in most cases, much easier to change than nationality, and yet the former makes your above list, while the latter doesn't -- so the story about choice vs. accident of birth definitely doesn't hold water.

I'm also puzzled by your mention of class. Discrimination by class is definitely not a morally sensitive issue nowadays the way sex or race is. On the contrary, success in life is nowadays measured mostly by one's ability to distance and insulate oneself from the lower classes by being able to afford living in low-class-free neighborhoods and joining higher social circles. Even when it comes to you personally, I can't imagine that you would have exactly the same reaction when approached by a homeless panhandler and by someone decent-looking.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-07-14T18:19:10.950Z · LW(p) · GW(p)

Discrimination by class is definitely not a morally sensitive issue nowadays the way sex or race is. On the contrary, success in life is nowadays measured mostly by one's ability to distance and insulate oneself from the lower classes

Without disagreeing much with your comment, I have to point out that this is a non sequitur. Moral sensitivity has nothing to do with (ordinary) actions. Among countries where the second sentence is true, there are both ones where the first is true and ones where the first is false. I don't know so much about countries where the second sentence is false.

As to religion, in places where people care about it enough to discriminate, changing it will probably alienate one's family, so it is very costly to change, although technically possible. Also, in many places, religion is a codeword for ethnic groups, so it can't be changed (eg, Catholics in US 1850-1950).

Replies from: Vladimir_M
comment by Vladimir_M · 2010-07-15T06:00:05.558Z · LW(p) · GW(p)

You're right that my comment was imprecise, in that I didn't specify to which societies it applies. I had in mind the modern Western societies, and especially the English-speaking countries. In other places, things can indeed be very different with regards to all the mentioned issues.

However, regarding your comment:

Moral sensitivity has nothing to do with (ordinary) actions.

That's not really true. People are indeed apt to enthusiastically extol moral principles in the abstract while at the same time violating them whenever compliance would be too costly. However, even when such violations are rampant, these acts are still different from those that don't involve any such hypocritical violations, or those that violate only weaker and less significant principles.

And in practice, when we observe people's acts and attitudes that involve their feeling of superiority over lower classes and their desire to distance themselves from them, it looks quite different from analogous behaviors with respect to e.g. race or sex. The latter sorts of statements and acts normally involve far more caution, evasion, obfuscation, and rationalization. To take a concrete example, few people would see any problem with recommending a house by saying that it's located in "a nice middle-class neighborhood" -- but imagine the shocked reactions if someone praised it by talking about the ethnic/racial composition of the neighborhood loudly and explicitly, even if the former description might in practice serve as (among other things) a codeword for the latter.

comment by Matt_Simpson · 2010-07-14T06:42:15.095Z · LW(p) · GW(p)

The "deep moral dimension" generally applies to group memberships that aren't (perceived to be) chosen: sex, gender, race, class, sexual orientation, religion to a lesser extent.

But you still discriminate based on sex, gender, race, class, sexual orientation and religion every day. You don't try to talk about sports with every girl you meet; you safely assume that they probably aren't interested until you receive evidence to the contrary. But if you meet a guy, then talking about sports moves higher on the list of conversation topics just because he's a guy.

Replies from: WrongBot
comment by WrongBot · 2010-07-14T16:49:42.426Z · LW(p) · GW(p)

Well, I actually try to avoid talking about sports entirely, because I find the topic totally uninteresting.

But! That is mere nitpicking, and the thrust of your argument is correct. I can only say that like all human beings I regularly fail to adhere to my own moral standards, and that this does not make those standards worthless.

Replies from: Matt_Simpson, HughRistik
comment by Matt_Simpson · 2010-07-14T16:55:15.658Z · LW(p) · GW(p)

Well, I actually try to avoid talking about sports entirely, because I find the topic totally uninteresting.

For some reason I expected that answer. ;)

I can only say that like all human beings I regularly fail to adhere to my own moral standards, and that this does not make those standards worthless.

I find it odd that you still hold on to "not statistically discriminating" as a value. What about it do you think is immoral? (I'm not trying to be condescending here, I'm genuinely curious)

Replies from: WrongBot
comment by WrongBot · 2010-07-14T18:44:03.937Z · LW(p) · GW(p)

I value not statistically discriminating (on the basis of unchosen characteristics or group memberships) because it is an incredibly unpleasant phenomenon to experience. As a white American man I suffer proportionally much less from the phenomenon than do most people, and even the small piece of it that I pick up from being bisexual sucks.

It's not a terminal value, necessarily, but in practice it tends to act like one.

comment by HughRistik · 2010-07-14T17:42:26.616Z · LW(p) · GW(p)

I can only say that like all human beings I regularly fail to adhere to my own moral standards, and that this does not make those standards worthless.

If following your moral standards is impractical, maybe those standards aren't quite right in the first place.

It is a common mistake for idealists to choose their morality without reference to practical realities. A better search plan would be to find all the practical options, and then pick whichever of those is the most moral.

If you spare a woman you meet from discussion of sports (or insert whatever interest you have that exhibits average sex differences) until she expresses interest in the subject, you have not failed any reasonable moral standards.

Replies from: WrongBot, SilasBarta
comment by WrongBot · 2010-07-14T18:48:22.478Z · LW(p) · GW(p)

It is a common mistake for idealists to choose their morality without reference to practical realities. A better search plan would be to find all the practical options, and then pick whichever of those is the most moral.

Most moral by what standard? You're just passing the buck here.

Replies from: HughRistik
comment by HughRistik · 2010-07-14T18:51:04.365Z · LW(p) · GW(p)

Moral according to your standards. I'm just suggesting a different order of operation: understanding the practicalities first, and then trying to find which of the practical options you judge most moral.

Replies from: WrongBot
comment by WrongBot · 2010-07-14T19:01:26.297Z · LW(p) · GW(p)

But those standards are moral standards. If you're suggesting that one should just choose the most moral practical option, how is that any different from consequentialism?

Your first comment sounded like you were suggesting that people should choose the most moral practical standard.

comment by SilasBarta · 2010-07-14T17:47:47.305Z · LW(p) · GW(p)

If you spare a woman you meet from discussion of sports (or insert whatever interest you have that exhibits average sex differences) until she expresses interest in the subject, you have not failed any reasonable moral standards.

Well, until you factor in the unfortunate tendency of women to be attracted to men who are indifferent to their interests :-P

comment by [deleted] · 2011-02-22T02:04:29.975Z · LW(p) · GW(p)

People don't get to choose how intelligent they are.

comment by Simplicius · 2011-02-23T21:14:27.867Z · LW(p) · GW(p)

People who are doing research that relates to human intelligence in some way should probably not follow this exact line of reasoning.

Those people depend upon funding that is contingent on public opinion of how valid their research is.

Also, if a research question is made disreputable, talented people might avoid it and those with ulterior motives might flock to it.

Currently the only people who dare to touch this field in any meaningful way are those who are already tenured. While that is the whole purpose of tenure, the fact remains that even if these people, thanks to their age (the topic wasn't always taboo), aren't really showing the negative effects described in the previous paragraph, they are still old. And old brains just don't work that well when it comes to coming up with new stuff.

Deciding that a piece of knowledge should be considered dangerous will necessarily lead to the deception of others and of oneself on many different levels and in many different ways. I agree with the estimation made by some others that this will produce dragon-in-the-garage dynamics, which will induce many of the same bad results and biases you seem to wish to ameliorate.

comment by HughRistik · 2010-07-14T07:29:04.439Z · LW(p) · GW(p)

I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.

One specific and relatively common version of this are people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman. There may be exceptions, but I haven’t met them.

The rest of the post was good, but these claims seem far too anecdotal and availability heuristicky to justify blocking yourself out of an entire area of inquiry.

When well-meaning, intelligent people like yourself refuse to examine certain areas of controversy, you consign those discourses to people with less-enlightened social attitudes. When certain beliefs are outlawed, only outlaws will hold those beliefs.

SarahC has raised some alternative ideas about how people may respond to dangerous knowledge.

As for:

Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.

Why are you so comfortable with such a hasty generalization? I'm not extremely widely-read on the subject of group differences, but I've run into some writing on the subject by people who don't seem to be bigots. See Gender, Nature, and Nurture by Richard Lippa, for instance.

Why would you make a hasty generalization and then shut yourself off to evidence that could disconfirm it?

A consequence of this is that your brain is not a trusted system, which itself has consequences that go much, much deeper than a bunch of misapplied heuristics. (And those are bad enough on their own!)

Your post itself demonstrates this. You are accepting certain empirical and moral beliefs that have not been justified, such as the notion of cognitive equality between groups. Regardless of whether this hypothesis is true or not, it seems to get inordinately privileged for ideological reasons. (In my view, suspended judgment on group differences is a more rational initial attitude.)

Privileging certain hypotheses for mainly ideological reasons is not rationality, even when your ideology is really warm and fuzzy.

If you are comfortable freezing your belief system in certain areas, that's a strong symptom that your mind got hacked somewhere, and the virus is so bad that it has disabled your own epistemic immune system.

Personally, like simplicio, I'm not comfortable pulling an ostrich maneuver and basing my values on empirical notions that could turn out to be lies. What a great way to destroy my own conviction in my values! I would prefer to investigate these subjects, even at risk of shaking up my values. So far, like SarahC, I haven't found my values to be shaken up all that much (though maybe I'm biased in that perception).

Replies from: WrongBot
comment by WrongBot · 2010-07-14T17:52:17.695Z · LW(p) · GW(p)

I think it may be helpful to clearly distinguish between epistemic and instrumental rationality. The idea proposed in this post is actively detrimental to the pursuit of epistemic rationality; I should have acknowledged that more clearly up front.

But if one is more concerned with instrumental rationality ("winning"), then perhaps there is more value here. If you've designated a particular goal state as a winning one and then, after playing for a while, unconsciously decided to change which goal state counts as a win, then from the perspective of the you that began the game, you've lost.

I do agree that my last example was massively under-justified, especially considering the breadth of the claim.

comment by cousin_it · 2010-07-14T09:31:22.991Z · LW(p) · GW(p)

In the comments here we see how LW is segmenting into "pro-truth" and "pro-equality" camps, just as it happened before with pro-PUA and anti-PUA, pro-status and anti-status, etc. I believe all these divisions are correlated and indicate a deeper underlying division within our community. Also I observe that discussions about topics that lie on the "dividing line" generate much more heat than light, and that people who participate in them tend to write their bottom lines in advance.

I'm generally reluctant to shut people up, but here's a suggestion: if you find yourself touching the "dividing line" topics in a post or comment, think twice whether it's really necessary. We may wish ourselves to be rational, but it seems we still lack the abstract machinery required to actually update our opinions when talking about these topics. Nothing is to be gained from discussing them until we have the more abstract stuff firmly in place.

Replies from: WrongBot, None, whpearson, Emile, CarlShulman
comment by WrongBot · 2010-07-14T19:27:24.770Z · LW(p) · GW(p)

My hypothesis is that this is a "realist"/"idealist" divide. Or, to put it another way, one camp is more concerned with being right and the other is more concerned with doing the right thing. ("Right" means two totally different things, here.)

Quality of my post aside (and it really wasn't very good), I think that's where the dividing line has been in the comments.

Similarly, I think most people who value PUA here value it because it works, and most people who oppose it do so on ethical or idealistic grounds. Ditto discussions of status.

The reason the arguments between these camps are so unfruitful, then, is that we're sort of arguing past each other. We're using different heuristics to evaluate desirability, and then we're surprised when we get different results; I'm as guilty of this as anyone.

Replies from: HughRistik, MichaelVassar, HughRistik, wedrifid, ChristianKl
comment by HughRistik · 2010-07-15T08:03:26.743Z · LW(p) · GW(p)

Here is another example of the way that pragmatism and idealism interact for me, from the world of pickup:

I was brought up with the value of gender equality, and with a proscription against dominating women or being a "jerk."

When I got into pickup and seduction, I encountered the theory that certain masculine behaviors, including social dominance, are a factor in female attraction to men. This theory matched my observation of many women's behavior.

While I was uncomfortable with the notion of displaying stereotypically masculine behavior (e.g. "hegemonic masculinity" from feminist theory) and acting in a dominant manner towards women, I decided to give it a try. I found that it worked. Yet I still didn't like certain types of masculine and dominance displays, and the type of interactions they created with women (even while "working" in terms of attraction and not being obviously unethical), so I started experimenting and practicing styles less reliant on dominance.

I found that there were ways of attracting women that worked quite well, and didn't depend on dominance and a narrow version of masculinity. It just took a bit of practice and creativity, and I needed my other pickup tools to be able to pull it off. Practicing a traditional form of masculinity got me the social experience necessary to figure out ways to drop that sort of masculinity.

In conclusion, I eventually affirmed my value of having equal interactions with women and avoiding dominating them. And I discovered "field tested" ways to attain success with women while adhering to that value, so I confirmed that it wasn't a silly, pie-in-the-sky ideal.

I call this an empirical approach to selecting and accomplishing a value.

comment by MichaelVassar · 2010-07-15T16:49:18.472Z · LW(p) · GW(p)

I strongly agree with this. Count me in the camp of believing true things in literally all situations, as I think that the human brain is too biased for any other approach to result, in expectation, in doing the right thing, but also in the camp of not necessarily sharing truths that might be expected to be harmful.

comment by HughRistik · 2010-07-15T05:36:51.786Z · LW(p) · GW(p)

My hypothesis is that this is a "realist"/"idealist" divide.

I was thinking the same thing, when I insinuated that you were being idealistic ;) Whether this dichotomy makes sense is another question.

Similarly, I think most people who value PUA here value it because it works, and most people who oppose it do so on ethical or idealistic grounds. Ditto discussions of status.

I think this is an excellent example of what the disagreements look like superficially. I think what is actually going on is more complex, such as differences of perception of empirical matters (underlying "what works"), and different moral philosophies.

For example, if you have a deontological prescription against acting "inauthentic," then certain strategies for learning social skills will appear unethical to you. If you are a virtue ethicist, then holding certain sorts of intentions may appear unethical, whereas a consequentialist would look more at the effects of the behavior.

Although I would get pegged on the "realist" side of the divide, I am actually very idealistic. I just (a) revise my values as my empirical understanding of the world changes, and (b) believe that empirical investigation and certain morally controversial behaviors are useful to execute on my values in the real world.

For example, even though intentionally studying status is controversial, I find that social status skills are often useful for creating equality with people. I study power to gain equality. So am I a realist, or an idealist on that subject?

Another aspect of the difference we are seeing may be in this article's description of "shallowness."

comment by wedrifid · 2010-11-26T15:01:37.932Z · LW(p) · GW(p)

(Prompted by but completely irrelevant to the recent bump.)

My hypothesis is that this is a "realist"/"idealist" divide.

Come now. This is lesswrong. It is an "idealist"/"idealist" divide with slightly different ideals. :P

One side's ideal just happens to be "verbal symbols should be used to further epistemic accuracy". It is very much an 'ethical or idealistic' position with all the potential for narrow mindedness that entails.

comment by ChristianKl · 2010-08-15T15:34:44.047Z · LW(p) · GW(p)

The evidence that PUA works is largely anecdotal. A lot of people claim that one shouldn't believe in acupuncture based on anecdotal evidence.

PUA however is a theory that plays well with other reductionist beliefs while acupuncture doesn't.

I think the following two are open questions: Given the same number of approaches, does a guy who has read PUA theories have a higher success rate at getting laid?

If the man's goal is to have a fulfilling long-term relationship with an attractive woman, is it beneficial for him to go down the PUA road?

The evidence for the status hypothesis is also relatively weak.

Being reductionist has nothing to do with being realist. Being reductionist brings you problems when you are faced with a system that's more complex than your model. In biology students get taught these days that even when you know all parts of a system you don't necessarily know what the system does. That kind of reductionism is wrong, and you actually need real evidence for theories such as the status hypothesis.

Replies from: Vaniver, NancyLebovitz, wedrifid, WrongBot
comment by Vaniver · 2010-11-23T20:41:59.247Z · LW(p) · GW(p)

I think the following two are open questions: Given the same number of approaches, does a guy who has read PUA theories have a higher success rate at getting laid?

Isn't one of the benefits of PUA that your number of actual approaches increases (while single, at least)?

Replies from: ChristianKl
comment by ChristianKl · 2010-11-25T18:19:43.179Z · LW(p) · GW(p)

Isn't one of the benefits of homeopathy that you get to talk to a person who promises you that you will feel better? If the control for homeopathy is doing nothing, then you find that homeopathy works. If, however, you do a double-blind trial, you will probably find that homeopathy doesn't work.

If you truly believe in rationalism and don't engage in it to signal status, I see no reason to use a different standard for judging whether homeopathy is true than for judging whether PUA works.

Replies from: Vaniver
comment by Vaniver · 2010-11-26T14:09:22.105Z · LW(p) · GW(p)

If you truly believe in rationalism and don't engage in it to signal status, I see no reason to use a different standard for judging whether homeopathy is true than for judging whether PUA works.

Aww, I respect you as a person too! (What were you trying to accomplish with this comment?)

As you point out, which control you pick is significant, but my point is that what test you pick is significant too. Let's talk about basketball: you can try and determine how good players are by their free throw percentage, or you can try and determine how good players are by their average points scored per game. You're suggesting the analog of the first, which seems ludicrous because it ignores many critical skills. If someone is interested primarily in getting laid, it seems that the number they care about is mean time between lays, not percentage success on approaches.

I won't comment much about your homeopathy example, except to say that even if one considers it relevant it undermines your position. Homeopathy is better than both nothing and harmful treatments (my impression is most people come to PUA from not trying at all or trying ineffectively). Generally, for any homeopathic treatment you could take there is a superior mainstream treatment, but for some no treatment is more effective than placebo (and so you're just making the decision of whether or not to pay for the benefits of placebo). Likewise, even if the only benefit of PUA is increased confidence, you have to trick yourself into that confidence somehow, and so if PUA boosts confidence, PUA increases your chances, even though it did it indirectly.

Replies from: David_Gerard
comment by David_Gerard · 2010-11-26T14:52:32.421Z · LW(p) · GW(p)

Your statement concerning homeopathy turns out not to be correct. In practice, homeopathy is harmful because it replaces effective treatments in the patients' minds and it soaks up medical funding.

Edit: Actually, yes, I do agree with Vaniver's point as explained below: at the time of its invention, homeopathy (i.e., water) frequently gave better results than the actively harmful things many doctors were doing to their patients. That said, I'm not sure the analogy with PUAs is usably solid even in those terms ... need to come up with one that might be.

Replies from: Vaniver
comment by Vaniver · 2010-11-27T03:23:07.310Z · LW(p) · GW(p)

Precision in language: my statement concerning homeopathy is correct, but has debatable relevance. At present, homeopathy underperforms mainstream medicine for nearly everything (like I explicitly mentioned). But I strongly suspect the only reason we're talking about an alternative medicine that originated 200 years ago is because it predated the germ theory of disease by 70 years.

So, it had at least 70 years of growth as an often superior alternative to mainstream medicine, which was murdering its patients through ignorance.* As well, Avogadro's number was measured about the same time as the germ theory was put forward by Pasteur, and so for that time homeopathy had as solid a theoretical background as mainstream medicine.

My feeling is that insomuch as PUA should be compared to homeopathy, it should be compared to homeopathy in 1840- the proponents may be totally wrong about why it works and quality data either way is likely scarce, but the paucity of strong alternatives means it's a good choice.** Heck, it might even be the analog of germ theory instead of the analog of homeopathy.

*The story of Ignaz Semmelweis ought not be forgot.

**Is there anyone else trying a "scientific" approach to relationships? I know there are a number of sexologists, but they seem more descriptive and less practical than PUA. Not to mention they seem more interested in the physical aspects than the tactical/strategic ones.

comment by NancyLebovitz · 2010-08-15T23:39:38.160Z · LW(p) · GW(p)

A reductionist approach to acupuncture-- it claims that all the ideas about mystical energy are mistranslations, and explains acupuncture in terms of current biology.

comment by wedrifid · 2010-11-26T14:37:32.495Z · LW(p) · GW(p)

The evidence that PUA works is largely anecdotal. A lot of people claim that one shouldn't believe in acupuncture based on anecdotal evidence.

There is an implied argument in here that is triggering my bullshit senses. The worst part is that it takes what is a valid consideration (the lamentable lack of research into effective attraction strategies) and uses it as a facade over an untenable analogy and a complete neglect of the strength of anecdotal evidence.

The evidence for the status hypothesis is also relatively weak.

Relative to what, exactly? The 'gravity' hypothesis? The evidence is overwhelming.

Replies from: ChristianKl
comment by ChristianKl · 2010-11-28T23:54:25.592Z · LW(p) · GW(p)

How do you determine the strength of anecdotal evidence to decide that PUA works and acupuncture doesn't? I know quite a few people both online and offline who claim that acupuncture has helped them with various issues.

I know people online who claimed that PUA helped them. I know people online who say that they concluded after spending over a year in the PUA community that the field is a scam. I also know people online who have radically changed their social life without going the PUA road.

The worst part is that it takes what is a valid consideration (the lamentable lack of research into effective attraction strategies)

Whether there should be more research into a theory is a different issue from whether there's enough evidence to support a theory. If you don't separate them mentally, you run into the problem of being overconfident when information is scarce and underconfident when there's plenty of information.

As a good skeptic it is important to know that you simply don't have enough information to decide certain questions.

Relative to what, exactly? The 'gravity' hypothesis? The evidence is overwhelming.

Of course there are some effects when you get approval from other people. I, however, don't think that there is peer-reviewed research suggesting that the effect is as strong as this community takes it to be.

Replies from: wedrifid, wedrifid
comment by wedrifid · 2010-11-29T01:15:32.654Z · LW(p) · GW(p)

How do you determine the strength of anecdotal evidence to decide that PUA works and acupuncture doesn't?

comment by wedrifid · 2010-11-29T00:23:41.821Z · LW(p) · GW(p)

As a good skeptic it is important to know that you simply don't have enough information to decide certain questions.

And as an effective homo-hypocritus it is important to recognize when the 'good skeptic' role will be a beneficial one to adopt, completely independent of the evidence.

comment by WrongBot · 2010-08-16T18:35:55.424Z · LW(p) · GW(p)

In biology students get taught these days that even when you know all parts of a system you don't necessarily know what the system does.

This is only true if you have insufficient math/computing ability to simulate the interactions of the system's parts. For it to be otherwise, either your information would have to actually be incomplete, or magic would have to happen.

Replies from: ChristianKl
comment by ChristianKl · 2010-08-18T15:31:45.743Z · LW(p) · GW(p)

Thanks to Heisenberg your information is also always incomplete. In real life you do have insufficient math/computing ability to simulate the interactions of many systems.

Whether weak reductionism is true doesn't matter much for this debate. People who believe in strong reductionism find appeal in PUA theory.

They believe that they have sufficient mental resources and information to calculate complex social interactions in a way that allows them to optimize those interactions.

Because of their belief in strong reductionism they believe in PUA based on anecdotal evidence and don't believe in acupuncture based on anecdotal evidence.

comment by [deleted] · 2010-07-14T14:15:18.982Z · LW(p) · GW(p)

If there's a discussion about whether or not we should seek truth -- at a site about rationality -- that's a discussion worth having. It's not a side issue.

Like whpearson, I think we're not all on one side or another. I'm pro-truth. I'm anti-PUA. I don't know if I'm pro or anti status -- there's something about this community's focus on it that unsettles me, but I certainly don't disapprove of people choosing to do something high-status like become a millionaire.

You're basically talking about the anti-PC cluster. It's an interesting phenomenon. We've got instinctively and vehemently anti-PC people; we've got people trying to edge in the direction of "Hey, maybe we shouldn't just do whatever we want"; and we've got people like me who are sort of on the dividing line, anti-PC in theory but willing to walk away and withdraw association from people who actually spew a lot of hate.

I think it's an interesting issue because it deals with how we ought best to react to controversy. In the spirit of the comments I made to WrongBot, I don't think we should fear to go there; I know my rationality isn't that fragile and I doubt yours is either. (I've gotten my knee-jerk emotional responses burned out of me by people much ruder than anyone here.)

Replies from: cousin_it, Douglas_Knight, Risto_Saarelma
comment by cousin_it · 2010-07-14T14:35:03.471Z · LW(p) · GW(p)

Anti-PC? Good name, I will use it.

I know my rationality isn't that fragile and I doubt yours is either.

What troubles me is this: your position on the divisive issues is not exactly identical to mine, but I very much doubt that I could sway your position or you could sway mine. Therefore, I'm pretty confident that at least one of us fails at rationality when thinking about these issues. On the other hand, if we were talking about math or computing, I'd be pretty confident that a correct argument would actually be recognized as correct and there would be no room for different "positions". There is only one truth.

We have had some big successes already. (For example, most people here know better than be confused by talk of "free will".) I don't think the anti-PC issue can be resolved by the drawn-out positional war we're waging, because it isn't actually making anyone change their opinions. It's just a barrage of rationalizations from all sides. We need more insight. We need a breakthrough, or maybe several, that would point out the obviously correct way to think about anti-PC issues.

Replies from: None, None, Blueberry
comment by [deleted] · 2011-02-22T03:01:29.093Z · LW(p) · GW(p)

Anti-PC? Good name

I don't think using this name is a good idea. It has strong political connotations. And while I'm sure many here aren't aware of them or are willing to ignore them, I fear this may not be true:

  • For potential new readers and posters
  • Once the "camps" are firmly established.
comment by [deleted] · 2010-07-14T20:35:45.608Z · LW(p) · GW(p)

I think it actually is a value difference, just like Blueberry said.

I do not want to participate in nastiness (loosely defined). It's related to my inclination not to engage in malicious gossip. (Folks who know me personally consider it almost weird how uncomfortable I am with bashing people, singly or in groups.) It's not my business to stop other people from doing it, but I just don't want it as part of my life, because it's corrosive and makes me unhappy.

To refine my own position a little bit -- I'm happy to consider anti-PC issues as matters of fact, but I don't like them connotationally, because I don't like speaking ill of people when I can help it. For example, in a conversation with a friend: he says, "Don't you know blacks have a higher crime rate than whites?" I say, "Sure, that's true. But what do you want from me? You want me to say how much I hate my black neighbors? What do you want me to say?"

I don't think that's an issue that argument can dissuade me from; it's my own preference.

Replies from: cousin_it, steven0461
comment by cousin_it · 2010-07-14T22:26:58.311Z · LW(p) · GW(p)

This discussion prompted a connection in my mind that startled me a lot. Let's put it in the open.

We've been discussing the moral status of identical copies. I gave a partial reductio some time ago, but wasn't really satisfied. Now consider this: what about the welfare of your imperfect copies? Do UDT-like considerations make it provably rational to care more about creatures that share random features with you? Note that I say UDT-like considerations, not evolutionary considerations. Evolution doesn't explain professional solidarity or feminism because neither relies on heritable traits. Ganging up looks more like a Schelling coordination game, where you benefit from seeking allies based on some random quality as long as they also get the idea of allying with you based on the same quality. And it might work better if the quality is hard to change, like sex or race. Anyone willing to work out the math is welcome to do so...
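
As a very rough illustration of the coordination-game framing (not the UDT analysis being asked for), here is a minimal sketch in Python; the two-player setup and the payoff numbers are assumptions chosen purely for illustration:

    # A minimal sketch of "ganging up as a Schelling coordination game".
    # Payoff numbers are made up for illustration only.
    from itertools import product

    # Each player either allies with others who share some salient quality ("ally")
    # or acts alone ("solo"). Allying pays off only if the others ally too.
    PAYOFF = {
        ("ally", "ally"): (3, 3),
        ("ally", "solo"): (0, 1),
        ("solo", "ally"): (1, 0),
        ("solo", "solo"): (1, 1),
    }

    def is_nash(profile):
        """True if neither player can gain by unilaterally deviating."""
        for i in (0, 1):
            current = PAYOFF[profile][i]
            for alt in ("ally", "solo"):
                deviation = list(profile)
                deviation[i] = alt
                if PAYOFF[tuple(deviation)][i] > current:
                    return False
        return True

    for profile in product(("ally", "solo"), repeat=2):
        print(profile, "equilibrium" if is_nash(profile) else "not an equilibrium")

Both ("ally", "ally") and ("solo", "solo") come out as equilibria: the gains from ganging up are realized only if everyone converges on the same basis for alliance, which is why a salient, hard-to-change quality can serve as the focal point even without any heritable-trait story.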

comment by steven0461 · 2010-07-14T21:28:00.585Z · LW(p) · GW(p)

Asserting group inequalities means speaking more ill of one group of people but less ill of another, so doesn't that cancel out?

Replies from: None
comment by [deleted] · 2010-07-14T21:44:28.509Z · LW(p) · GW(p)

I'm not talking about empirical claims, I'm talking about affect. I have zero problem with talking about group inequalities, in themselves.

comment by Blueberry · 2010-07-14T16:06:35.995Z · LW(p) · GW(p)

your position on the divisive issues is not exactly identical to mine, but I very much doubt that I could sway your position or you could sway mine. Therefore, I'm pretty confident that at least one of us fails at rationality when thinking about these issues. On the other hand, if we were talking about math or computing, I'd be pretty confident that a correct argument would actually be recognized as correct and there would be no room for different "positions". There is only one truth.

But there are many different values. If we can't sway each other's positions, that points to a value difference.

Replies from: Vladimir_Nesov, cousin_it
comment by Vladimir_Nesov · 2010-07-14T16:23:40.608Z · LW(p) · GW(p)

If we can't sway each other's positions, that points to a value difference.

If only it was always so. Value is hard to see, so easy to rationalize.

comment by cousin_it · 2010-07-14T18:54:55.969Z · LW(p) · GW(p)

"Value difference" is often used as a cop-out. How did our terminal values come to be so different, anyway? If I'm extremely selfish and you're extremely selfish, we will likely have very different values, but if we are both altruistic, our values are combinations of values of all the other people in the world, so they should be pretty similar. For example, if I think society should be organized like an anthill and you think it should be organized like a pool of sharks (to borrow Ken Binmore's example), this is a factual disagreement about what would make everyone better off, not a value disagreement.

comment by Douglas_Knight · 2010-07-14T17:06:18.219Z · LW(p) · GW(p)

Maybe it's a political correctness principal component, but it seems to me that ideas about status should not be aligned with that component. If PUA had not been mentioned, and we were just discussing Johnstone, then I think those who are ignorant of PUA, whether pro- or anti-PC, would have less extreme reactions and often completely different ones.

If people's opinions on one issue are polarizing their opinions on another, without agreement that they're logically related, something is probably going wrong and this is a cost to discussing the first issue. Also, cousin_it talked about the issues creating "camps." That's probably the mediating problem.

comment by Risto_Saarelma · 2011-02-22T10:36:36.997Z · LW(p) · GW(p)

Like whpearson, I think we're not all on one side or another. I'm pro-truth. I'm anti-PUA. I don't know if I'm pro or anti status

I am presently amused by imagining forum members declaring themselves "anti-truth".

Though I guess there is a spectrum: from sticking to discovering and exposing widely applicable truths no matter what, through some kind of Straussian stance where only the enlightened elites can be allowed access to dangerous truths and the general populace is to be fed noble lies, and then on to even less coherent spheres of willful obscurantism and outright anti-intellectualism, where it seems that nobody is encouraged to pursue some topics.

For some reason though, people who either explicitly believe that noble lies are necessary or have internalized a culture where they are built-in never seem to claim to be anti-truth.

comment by whpearson · 2010-07-14T10:05:26.780Z · LW(p) · GW(p)

I think there are divisions within the community, but I am not sure about the correlations. Or at least they don't fit me.

I'm pro discussion of status; I liked red paper clip theory, for example. I'm anti acquiring high status for myself and anti people telling me I should be pro that. I'm anti-pua advice, pro the occasional well backed up psychological research with PUA style flavour (finding out what women really find attractive, why the common advice is wrong etc).

I'm pretty much pro-truth; I don't think words can influence me that much (if they could I would be far more mainstream). I'm less sure about situations: if I were more status/money maximising for a while to earn money to donate to FHI etc., then I would worry that I would get sucked into the high status decadent consumer lifestyle and forget about my long term concerns.

Edit: Actually, I've just thought of a possible reason for the division you note.

If you are dominant or want to become dominant you do not want to be swayed by the words of others. So ideas are less likely to be dangerous to you or your values. If you are less-dominant you may be more susceptible to the ideas that are floating around in society as, evolutionarily, you would want to be part of whatever movement is forming so you are part of the ingroup.

I think my social coprocessor is probably broken in some weird way, so I may be an outlier.

Replies from: MichaelVassar, HughRistik, Blueberry
comment by MichaelVassar · 2010-07-15T16:53:54.395Z · LW(p) · GW(p)

There's no social coprocessor; we evolved a giant cerebral cortex to do social processing, but some people refuse to use it for that because they can't use it in its native mode while they are also emulating a general intelligence on the same hardware.

Replies from: whpearson, Blueberry, daedalus2u
comment by whpearson · 2010-07-16T10:15:56.621Z · LW(p) · GW(p)

I was being brief (and imprecise) in my self-assessment, as that wasn't the main point of the comment. I didn't even mean broken in the sense that others might have meant it, i.e. Asperger's.

I just don't enjoy social conversation much normally. I can do it such that the other person enjoys it somewhat. An example: I was chatting to a cute dancer last night (at someone's 30th, so I was obliged to), and she invited me to watch her latest dance. I declined because I wasn't into her (or into watching dance). She was nice and pretty, nothing wrong with her, but I just don't tend to seek marginal connections with people because they don't do much for me. Historically the people I connect with seem to have been people that have challenged me or can make me think in odd directions.

This I understand is an unusual way to pick people to associate with, so I think something in the way I process social signals is different from the norm. This is what I meant.

Replies from: MichaelVassar, Blueberry
comment by MichaelVassar · 2010-07-17T01:41:02.494Z · LW(p) · GW(p)

I know what's going on. You think of yourself and others as collections of thoughts and ideas. Since most people don't have interesting thoughts or ideas, you think they aren't interesting. OTOH, it's possible to adopt, temporarily and in a manner which automatically reverses itself, the criteria for assigning interest that the person you are associating with uses. When you do that, everyone turns out to be interesting and likable.

Replies from: whpearson, ABranco
comment by whpearson · 2010-07-17T11:00:31.795Z · LW(p) · GW(p)

I know what's going on. You think of yourself and others as collections of thoughts and ideas. Since most people don't have interesting thoughts or ideas, you think they aren't interesting

That wasn't my working hypothesis. Mine was that I have different language capabilities and that those affect what social situations I find easy and enjoyable (and so the different people I choose to associate with). For example, I can quite happily rattle off some surreal story with someone, or I enjoy helping someone plan or design something. I find it hard to narrate stories about my life or remember interesting tidbits about the world that aren't of interest to me right at the moment.

OTOH, it's possible to adopt, temporarily and in a manner which automatically reverses itself, the criteria for assigning interest that the person you are associating with uses. When you do that, everyone turns out to be interesting and likable.

Oh I can find many things interesting for a brief time, e.g. where the best place to be a dancer is (London is better than Europe) or how some school kids were playing up today. Just subconsciously my brain knows it doesn't want lots of that sort of information or social interaction, so it sends signals that I do not want to have long term friendships with these sorts of people.

comment by ABranco · 2010-07-19T03:49:57.252Z · LW(p) · GW(p)

Hi, Michael.

Can you expand that thought, and the process? Doesn't adopting the other person's criteria constitute a kind of "self-deception" if you happen to dislike or disapprove of his/her criteria?

I mean that even if, despite your dislikes, you sympathize with the paths that led to that person's motivations, if reading a book happens to be a truly more interesting activity at that moment, and is an actionable alternative, I don't see how connecting with the person could be a better choice.

Unless... you find something very enjoyable in this process itself that doesn't depend much on the person. I remember your comment about "liking people's territories instead of their maps" — it seems to be related here. Is it?

comment by Blueberry · 2010-07-16T17:10:33.396Z · LW(p) · GW(p)

Do you ever just associate with people you find attractive at first sight? (I can't tell if you're referring to a strip club, or what kind of dancer you mean.)

You may find Prof. Richard Wiseman's research on what makes people "lucky" interesting: his research has found advantages to seeking marginal connections with people you meet.

Replies from: whpearson
comment by whpearson · 2010-07-16T18:00:04.423Z · LW(p) · GW(p)

Do you mean sexually attractive? Or just interesting looking? I'll initiate conversation with interesting looking people (that may or may not be sexually attractive).

By dancer I just meant someone who does modern dance; she was a friend of a friend (I have some odd friends by this website's standards, I think).

Oh I know I should develop more marginal connections. It simply feels false to do so, though: as if I were doing so in the hope of exploiting them, rather than finding them particularly interesting in their own right. I would rather not be cultivated in that fashion.

Replies from: Blueberry
comment by Blueberry · 2010-07-21T00:11:45.777Z · LW(p) · GW(p)

I meant sexually attractive (you described the dancer as "cute" and "pretty"). Though I guess either would work.

comment by Blueberry · 2010-07-16T16:54:24.381Z · LW(p) · GW(p)

some people refuse to use it for that because they can't use it in its native mode while they are also emulating a general intelligence on the same hardware.

I'm not sure I understand. By 'emulating a general intelligence', do you mean consciously thinking through every action? My understanding is that people can develop social processing skills by consciously practicing unnatural habits until they become natural.

Replies from: MichaelVassar, HughRistik
comment by MichaelVassar · 2010-07-17T01:34:35.154Z · LW(p) · GW(p)

No-one consciously thinks through every action. I mean thinking at all rather than paying total attention to the other person and letting your actions happen. If you feel that 'you' are doing something, you aren't running the brain in its native mode, you're running an emulation. It's hard to figure out how to do this from a verbal description, but if it happens you will recognize what I'm talking about, and it doesn't require any practice of anything unnatural.

comment by HughRistik · 2010-07-16T17:24:43.112Z · LW(p) · GW(p)

My understanding is that people can develop social processing skills by consciously practicing unnatural habits until they become natural.

This is correct; at least some people can do this. For some reason, there is a cultural bias that makes people believe that this approach doesn't work; many people seem to believe that it doesn't, without evidence. These people are wrong; this view has already been falsified by many people.

Many people learn many different disciplines through the four stages of competence (unconscious incompetence, conscious incompetence, conscious competence, unconscious competence), in sports and the arts.

Conversation isn't a special exception. Though it may be different from those domains by requiring more specialized mental hardware. Consciously practicing "unnatural" social habits happens to be a good way to jump start that hardware if it is dormant.

Someone without this hardware may not be able to learn how to emulate naturally social people through consciously trying to emulate them. Yet I bet that most people with social difficulties short of Asperger's aren't missing the relevant hardware; they just don't know how to use it out of social inexperience, such as from spending their formative years being isolated and bullied for being slightly different.

comment by daedalus2u · 2010-07-26T23:06:05.058Z · LW(p) · GW(p)

I disagree. I think there is the functional equivalent of a "social co-processor". What I see as the fundamental trade-off along the autism spectrum is the trading off of a "theory of mind" (necessary for good and nuanced communication with neurotypically developing individuals) against a "theory of reality" (necessary for good ability at tool making and tool using).

http://daedalus2u.blogspot.com/2008/10/theory-of-mind-vs-theory-of-reality.html

Because the maternal pelvis is limited in size, the infant brain is limited at birth (still ~1% of women die per childbirth (in the wild) due to cephalopelvic disproportion). The “best” time to program the fundamental neuroanatomy of the brain is in utero, during the first trimester when the fundamental neuroanatomy of the brain is developing and when the epigenetic programming of all the neurons in the brain is occurring.

The two fundamental human traits, language and tool making/using, both require a large brain with substantial plasticity over the individual's lifetime. But other than that they are pretty much orthogonal. I suspect there has been evolutionary pressure to optimize the neuroanatomy of the human infant brain at birth so as to best fit the neurological tasks that brain is likely to need to do over that individual's lifetime.

comment by HughRistik · 2010-07-15T05:41:33.854Z · LW(p) · GW(p)

If you are dominant or want to become dominant you do not want to be swayed by the words of others. So ideas are less likely to be dangerous to you or your values. If you are less-dominant you may be more susceptible to the ideas that are floating around in society as, evolutionarily, you would want to be part of whatever movement is forming so you are part of the ingroup.

Another possibility is that we are seeing some other personality differences in openness and or agreeableness. People who are higher in openness and/or lower in agreeableness might be more interested in ideas that are judged politically incorrect, or antisocial.

Replies from: None
comment by [deleted] · 2011-02-22T02:54:38.902Z · LW(p) · GW(p)

People who are higher in openness and/or lower in agreeableness might be more interested in ideas that are judged politically incorrect, or antisocial.

The division might correlate with where people land on the various axes of the neurodiversity spectrum.

comment by Blueberry · 2010-07-14T16:14:40.387Z · LW(p) · GW(p)

I'm anti-pua advice, pro the occasional well backed up psychological research with PUA style flavour (finding out what women really find attractive, why the common advice is wrong etc).

I think this is just another way of saying "I'm pro- good advice about dating and anti- bad advice about dating." I would consider the research you're discussing a form of PUA/dating advice.

Replies from: whpearson
comment by whpearson · 2010-07-14T17:54:01.506Z · LW(p) · GW(p)

Are Newton's laws billiard-ball prediction advice?

In other words, there are other uses than trying to pick up girls for knowing what, on average, women like in a man. These include, but are not limited to,

  • Judging the likely ability of politicians to influence women
  • Being able to match make between friends
  • Writing realistic plots in fiction
  • Not being surprised when your friends are attracted to certain people
Replies from: Larks
comment by Larks · 2010-07-15T00:38:49.552Z · LW(p) · GW(p)

If you're an altruist (on the 'idealist' side of WrongBot's distinction), you'd probably consider making women you know happier to be the biggest advantage.

Replies from: whpearson
comment by whpearson · 2010-07-15T09:54:04.967Z · LW(p) · GW(p)

Most of the women I'm friends with are in relationships with men that aren't me :) So me being maximally attractive to them may not make them happier. I would need more research on how to have the correct amount of attractiveness in platonic relationships.

Sure, women like the attention of a very attractive man, but it could lead to jealousy ("why is the attractive man speaking to X and not me?"), unrequited lust, and strife in their existing relationships.

Perhaps researching what women find creepy, and then not doing that, would be more useful for making women happier in general.

Edit: There is also the problem that if you become more attractive you might make your male friends less happy as they get less attention. Raising the general attractiveness of your male social group is another possibility, but one that would require quite an oddly rational group.

comment by Emile · 2010-07-14T12:35:10.482Z · LW(p) · GW(p)

I agree that these politically charged issues are probably not a very good thing for the community, and that we should be extra cautious when engaging them.

comment by CarlShulman · 2010-07-14T10:10:24.479Z · LW(p) · GW(p)

Any hypotheses about the common factor?

Replies from: cousin_it
comment by cousin_it · 2010-07-14T12:25:47.665Z · LW(p) · GW(p)

Not sure. I was anti-status, anti-PUA, pro-equality until age 22 or so, and then changed my opinions on all these issues at around the same time (took a couple years). So maybe there is a common cause, but I have absolutely no idea what that cause could be.

Replies from: None, CarlShulman, JamesPfeiffer, Blueberry
comment by [deleted] · 2010-07-16T17:14:35.610Z · LW(p) · GW(p)

del

comment by CarlShulman · 2010-07-14T13:27:39.757Z · LW(p) · GW(p)

Reduced attachment to explicit verbal norms?

comment by JamesPfeiffer · 2010-07-14T15:27:29.329Z · LW(p) · GW(p)

My relevant life excerpt is similar to yours. The first two changed because of increased understanding of how humans coordinate and act socially. Not sure if there is a link to the third.

comment by Blueberry · 2010-07-14T16:11:57.674Z · LW(p) · GW(p)

I was anti-status, anti-PUA, pro-equality until age 22 or so, and then changed my opinions on all these issues

It's called "growing up."

Replies from: None
comment by [deleted] · 2011-02-22T02:58:27.290Z · LW(p) · GW(p)

I wouldn't call it that, climbing the metacontrarian ladder seems to describe it much better.

comment by [deleted] · 2010-07-13T14:16:20.087Z · LW(p) · GW(p)

A thousand times no. Really, this is a bad idea.

Yeah, some people don't value truth at any cost. And there's some sense to that. When you take a little bit of knowledge and it makes you a bad person, or an unhappy person, I can understand the argument that you'd have been better off without that knowledge.

But most of the time, I believe, if you keep thinking and learning, you'll come round right. (I.e.: when a teenager reads Ayn Rand and thinks that gives him license to be an asshole, his problem is not that he reads too much philosophy.)

You seem to be particularly worried about accidentally becoming a bigot. (I don't think most of us are in any danger of accidentally becoming supreme dictators.) I think you are safe. Think of it this way: you don't want to be a bigot. You don't want your future self to be a bigot either. So don't behave like one. No matter what you read. Commit your future self to not being an asshole.

I think fear of brainwashing is generally silly.* You will not become a Mormon from reading the Book of Mormon. You will not become a Nazi from reading Mein Kampf, or a Communist from reading Das Kapital. You will not become a racist from reading Steve Sailer. I don't think we are such fragile creatures. Just keep an even keel and behave like a decent person, and you're free to read whatever you like.

*Actual brainwashing -- overriding your own sanity and reason -- is possible, but I think it requires a total environment, like a cult compound or an interrogation room. It's not something that reading a book can do to you.

Replies from: Richard_Kennaway, simplicio, None, Emile, satt
comment by Richard_Kennaway · 2010-07-13T14:49:27.389Z · LW(p) · GW(p)

But most of the time, I believe, if you keep thinking and learning, you'll come round right. (I.e.: when a teenager reads Ayn Rand and thinks that gives him license to be an asshole, his problem is not that he reads too much philosophy.)

"A little learning is a dang'rous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again."

-- Pope

Replies from: SilasBarta
comment by SilasBarta · 2010-07-13T16:43:10.651Z · LW(p) · GW(p)

That sounds like my (provisional) resolution of the conflict between "using all you know" and "don't be a bigot": you should incorporate the likelihood ratio of things that a person can't control, so long as you also observe and incorporate evidence that could outweigh such statistical, aggregate, nonspecific knowledge.

So drink deep (use all evidence); but if you don't, then avoid incorporating "dangerous knowledge", as a second-best alternative. Apply a low Bayes factor for something someone didn't choose, as long as you give them a chance to counteract it with other evidence.

(Poetry still sucks, though. I'm not yet changing my mind about that.)
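As a worked illustration of the odds-form update described above (all numbers invented for the example): take prior odds of 1:4 that some claim H about an individual is true, a group-level Bayes factor of 0.9 for a trait they didn't choose, and a factor of 10 for one strong piece of individual evidence.

```latex
% Posterior odds = Bayes factor x prior odds (toy numbers, purely illustrative).
\[
\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \frac{P(E \mid H)}{P(E \mid \neg H)} \times \frac{P(H)}{P(\neg H)}
\]
\[
0.9 \times 0.25 = 0.225
  \quad \text{(weak group-level factor: the odds barely move)}
\]
\[
10 \times 0.225 = 2.25
  \quad \text{(one strong individual observation dominates the group statistic)}
\]
```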

Replies from: Emile, NancyLebovitz
comment by Emile · 2010-07-13T17:27:14.810Z · LW(p) · GW(p)

(Poetry still sucks, though. I'm not yet changing my mind about that.)

... must ... resist ... impulse ... to ... downvote ... different ... tastes ...

comment by NancyLebovitz · 2010-07-13T22:51:42.097Z · LW(p) · GW(p)

The other problem with "using all you know" about groups which are subject to bigotry is that "we rule, you drool" is very basic human wiring, and there's apt to be some motivated cognition (in the people developing and giving you the information, even if you aren't engaging in it) on the subject.

comment by simplicio · 2010-07-13T23:51:50.766Z · LW(p) · GW(p)

You will not become a Nazi from reading Mein Kampf, or a Communist from reading Das Kapital.

I became a Trotskyite (once upon a time) partly based on reading Trotsky's history of the Russian Revolution. Yes, I was primed for it, but... words aren't mere.

Replies from: Emile
comment by Emile · 2010-07-14T10:06:32.149Z · LW(p) · GW(p)

Interesting - would you recommend others read it?

I'm interested in reading anything that can change my mind, but I avoid some partisan stuff when it looks like it's "preaching to the choir" and assumes that the reader already agrees with the conclusions.

Replies from: simplicio, Thomas
comment by simplicio · 2010-07-14T15:49:41.220Z · LW(p) · GW(p)

Interesting - would you recommend others read it?

Yes, if you're not young, impressionable and overidealistic. Trotsky was an incredible writer, and reading that book you do really see things from the perspective of an insider.

One of the reactionary and therefore fashionable historians in contemporary France, L. Madelin, slandering in his drawing-room fashion the great revolution – that is, the birth of his own nation – asserts that “the historian ought to stand upon the wall of a threatened city, and behold at the same time the besiegers and the besieged”: only in this way, it seems, can he achieve a “conciliatory justice.” However, the words of Madelin himself testify that if he climbs out on the wall dividing the two camps, it is only in the character of a reconnoiterer for the reaction. It is well that he is concerned only with war camps of the past: in a time of revolution standing on the wall involves great danger. Moreover, in times of alarm the priests of “conciliatory justice” are usually found sitting on the inside of four walls waiting to see which side will win.

The serious and critical reader will not want a treacherous impartiality, which offers him a cup of conciliation with a well-settled poison of reactionary hate at the bottom, but a scientific conscientiousness, which for its sympathies and antipathies – open and undisguised – seeks support in an honest study of the facts, a determination of their real connections, an exposure of the causal laws of their movement. That is the only possible historic objectivism, and moreover it is amply sufficient, for it is verified and attested not by the good intentions of the historian, for which only he himself can vouch, but the natural laws revealed by him of the historic process itself.

comment by Thomas · 2010-07-15T08:21:55.244Z · LW(p) · GW(p)

Your brain cannot be trusted. It is not safe. You must be careful with what you put into it, because it will decide the output, not you.

This "it" may, or even should, relate to the idea itself. The same idea, the same meme, put into a healthy rational brains anywhere, will decide the same! Since the brains are just a rational machine always doing the best possible thing.

It is the input that decides the output. The machine has no other (irrational) choice than to process the input the best way it can, and then to spit out the output.

It is not only my calculator that outputs "12" for the input "5+7"; it is every unbroken calculator in the world that outputs the same.

So again: the input "decides" what the output should be, not the computer (the brain).

comment by [deleted] · 2011-04-18T22:42:14.507Z · LW(p) · GW(p)

You will not become a Mormon from reading the Book of Mormon. You will not become a Nazi from reading Mein Kampf, or a Communist from reading Das Kapital. You will not become a racist from reading Steve Sailer.

I don't know if this is a fair characterization of Steve Sailer. I'm quite sure some of his commenters are racist, but then again so are many of the commenters on any major news site. I would call him a racialist, or perhaps just an HBDer.

Perhaps I'm somewhat biased in my view of him, but this interesting video, for example, seems typical of the Steve Sailer style. Is this as representative of racism as Das Kapital is of Communism or Mein Kampf of Nazism?

Does racism just have a bad PR guy?

Whatever one calls this, it clearly doesn't deserve the few thousand negative karma points racism has in my mind. Perhaps he is putting his best face forward here, but listening to a few parts of this discussion I half expected he would start reciting the litany of Tarski, going into a Hansonian analysis of status, or telling everyone that beliefs should pay rent. He certainly touches on these topics in a slightly different vocabulary! Reword a sentence or two and it sounds like something a commenter could write on Less Wrong and get upvoted for.

Replies from: GLaDOS
comment by GLaDOS · 2012-08-11T07:44:47.052Z · LW(p) · GW(p)

Original link is broken. This seems to be the same video.

comment by Emile · 2010-07-13T15:05:16.094Z · LW(p) · GW(p)

You seem to be particularly worried about accidentally becoming a bigot. (I don't think most of us are in any danger of accidentally becoming supreme dictators.) I think you are safe. Think of it this way: you don't want to be a bigot. You don't want your future self to be a bigot either. So don't behave like one. No matter what you read. Commit your future self to not being an asshole.

He's probably more motivated by not wanting others to become bigots - right, WrongBot?

Replies from: WrongBot
comment by WrongBot · 2010-07-13T15:41:42.725Z · LW(p) · GW(p)

My motivation in writing this article was to attempt to dissuade others from courses of action that might lead them to become bigots, among other things.

But I am also personally terrified of exactly the sort of thing I describe, because I can't see a way to protect against it. If I had enough strong evidence to assign a probability of .99 to the belief that gay men have an average IQ 10 points lower than straight men (I use this example because I have no reason at all to believe it is true, and so there is less risk that someone will try to convince me of it), I don't think I could prevent that from affecting my behavior in some way. I don't think it's possible. And I disvalue such a result very strongly, so I avoid it.

I bring up dangerous thoughts because I am genuinely scared of them.

Replies from: None, Jonathan_Graehl, twanvl, daedalus2u
comment by [deleted] · 2010-07-13T17:06:20.686Z · LW(p) · GW(p)

The fact that you have a core value, important enough to you that you'd deliberately keep yourself ignorant to preserve that value, is evidence that the value is important enough to you that it can withstand the addition of information. Your fear is a good sign that you have nothing to fear.

For real. I have been in those shoes. Regarding this subject, and others. You shouldn't be worried.

Statistical facts like the ones you cited are not prescriptive. You don't have to treat anyone badly because of IQ. IQ does not equal worth. You don't use a battery of statistics on test scores, crime rates, graduation rates, etc. to determine how you will treat individuals. You continue to behave according to your values.

Replies from: JenniferRM
comment by JenniferRM · 2010-07-14T01:12:39.005Z · LW(p) · GW(p)

In the past I have largely agreed with the sentiment that truth and information are mostly good, and when they create problems the solution is even more truth.

But on the basis of an interest in knowing more, I sometimes try to seek evidence that supports things I think are false or that I don't want to be true. Also, I try to notice when something I agree with is asserted without good evidential support. And I don't think you supported your conclusions there with real evidence.

You don't have to treat anyone badly because of IQ. IQ does not equal worth. You don't use a battery of statistics on test scores, crime rates, graduation rates, etc. to determine how you will treat individuals. You continue to behave according to your values.

This reads more to me like prescriptive signaling than like evidence. While it is very likely to be the case that "IQ test results" are not the same as "human worth", it doesn't follow that an arbitrary person would not change their behavior towards someone who is "measurably not very smart" in any way that dumb person might not like. And for some specific people (like WrongBot by the admission of his or her own fears) the fear may very well be justified.

When I read Cialdini's book Influence, I was struck by the number of times his chapters took the form: (1) describe a mental shenanigan, (2) offer evidence that people are easily and generally tricked in this way, (3) explain how it functions as a bias when manipulated and a useful heuristic in non-evil environments, (4) offer laboratory evidence that basic warnings to people about the trick offer little protective benefit, (5) exhort the reader to "be careful anyway" with some ad hoc and untested advice.

Advice should be supported with evidence... and sometimes I think a rationalist should know when to shut up and/or bail out of a bad epistemic situation.

Evidence from implicit association tests indicates that people can be biased against other people without even being aware of it. When scientists tried to measure the degree of "cognitive work" it takes to parse racist situations, they found that observing overt racism against black people was mentally taxing to white people, while observing subtle racism against black people was mentally taxing to black people. The whites were oblivious to subtle racism and didn't even try to process it because it happened below their perceptual awareness; overt racism made them stop and momentarily ponder if maybe (shock!) we don't live in a colorblind world yet. The blacks knew racism was common (but not universal) and factored it into their model of the situation without much trouble when racism was overt; the tricky part was subtle racism, where they had to think through the details to understand what was going on.

(I feel safe saying that white people are frequently oblivious to racism, and are sometimes active but unaware perpetrators of subtle forms of racism because I myself am white. When talking about group shortcomings, I find it best to stick to the shortcomings of my own group.)

Based on information like this, I can easily imagine that I might learn a true (relatively general) fact, use it to leap to an unjustifiable conclusion with respect to an individual, have that individual be harmed by my action, and never notice unless "called on it".

But when called on it, it's quite plausible that I'd leap to defend myself and engage in a bunch of motivated cognition to deny that I could possibly ever be biased... and I'd dig myself even deeper into a hole, updating the wrong way when presented with "more evidence". So it would seem that more information would just leave me more wrong than I started with, unless something unusual happened.

(Then, to compound my bad luck I might cache defensive views of myself after generating them in the heat of an argument.)

So it seems reasonable to me that if we don't have the time to drink largely then maybe we should avoid shallow draughts. And even in that case we should be cautious about any subject that impinges on mind killer territory because more evidence really does seem to make you more biased in such areas.

I upvoted the article (from -2 to -1) because the problems I have with it are minor issues of tone rather than major issues with the content. The general content seems to be a very fundamental rationalist "public safety message", with more familiarity assumed than is justified (like assuming everyone automatically agrees with Paul Graham, and putting in a joke about violence at the end).

I don't, unfortunately, know of any experimentally validated method for predicting whether a specific person at a specific time is going to be harmed or helped by a specific piece of "true information" and this is part of what makes it hard to talk with people in a casual manner about important issues and feel justifiably responsible about it. In some sense, I see this community as existing, in part, to try to invent such methods and perhaps even to experimentally validate them. Hence the up vote to encourage the conversation :-)

Replies from: None
comment by [deleted] · 2010-07-14T04:02:39.763Z · LW(p) · GW(p)

Those are good points.

What I was trying to encourage was a practice of trusting your own strength. I think that morally conscientious people (as I suspect WrongBot is) err too much on the side of thinking they're cognitively fragile, worrying that they'll become something they despise. "The best lack all conviction, while the worst are full of passionate intensity."

Believing in yourself can be a self-fulfilling prophecy; believing in your own ability to resist becoming a racist might also be self-fulfilling. There's plenty of evidence for cognitive biases, but if we're too willing to paint humans as enslaved by them, we might actually decrease rationality on average! That's why I engaged in "prescriptive signaling." It's a pep talk. Sometimes it's better to try to do something than to contemplate excessively whether it's possible.

comment by Jonathan_Graehl · 2010-07-13T16:30:34.494Z · LW(p) · GW(p)

Why should your behavior be unaffected? If you want to spend time evaluating a person on their own merits, surely you still can.

Replies from: WrongBot
comment by WrongBot · 2010-07-13T18:30:54.040Z · LW(p) · GW(p)

Just because I'll be able to do something doesn't mean that I will. I can resolve to spend time evaluating people based on their own merits all I like, but that's no guarantee at all that the resolution will last.

Replies from: None
comment by [deleted] · 2010-07-13T18:46:30.383Z · LW(p) · GW(p)

You seem to think that anti-bigots evaluate people on their merits more than bigots do. Why?

If you're looking for a group of people who are more likely to evaluate people on their merits, you might try looking for a group of people who are committed to believing true things.

comment by twanvl · 2010-07-13T17:43:51.090Z · LW(p) · GW(p)

Group statistics give only a prior, and just a few observations of any individual will overwhelm it. And if you start discriminating against gays because they have low average intelligence, then you should discriminate even more against low intelligence itself. It is not the gayness that is the important factor in that case; it just has a weak correlation.
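A minimal sketch of this point, with invented numbers: treat the group statistic as a normal prior on an individual's score and watch a couple of direct observations swamp it.

```python
# Toy conjugate normal-normal update: the group-level prior (mean 99 rather
# than 100) is nearly irrelevant after a few direct observations of the
# individual. All numbers are invented for illustration.

def posterior_mean(prior_mean, prior_sd, observations, obs_sd):
    """Posterior mean of the individual's true score under a normal model."""
    prior_precision = 1.0 / prior_sd ** 2
    data_precision = len(observations) / obs_sd ** 2
    weighted_sum = prior_mean * prior_precision + sum(observations) / obs_sd ** 2
    return weighted_sum / (prior_precision + data_precision)

group_prior = 99.0   # what the group statistic alone would suggest
spread = 15.0        # population spread around that group mean
noise = 5.0          # noise in each direct observation of the person

print(posterior_mean(group_prior, spread, [], noise))               # 99.0 (prior only)
print(posterior_mean(group_prior, spread, [128.0], noise))          # ~125.1 after one observation
print(posterior_mean(group_prior, spread, [128.0, 131.0], noise))   # ~127.9: the prior is nearly irrelevant
```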

comment by daedalus2u · 2010-07-26T23:43:53.284Z · LW(p) · GW(p)

I see the problem of bigotry in terms of information and knowledge but I see bigotry as occurring when there is too little knowledge. I have quite an extensive blog post on this subject.

http://daedalus2u.blogspot.com/2010/03/physiology-behind-xenophobia.html

My conceptualization of this may seem contrived, but I give a much more detailed explanation on my blog along with multiple examples.

I see it as essentially the lack of an ability to communicate with someone that triggers xenophobia. As I see it, when two people meet and try to communicate, they do a “Turing test”, where they exchange information and try to see if the person they are communicating with is “human enough”, that is human enough to communicate with, be friends with, trade with, or simply human enough to not kill.

What happens when you try to communicate is that you both use your "theory of mind", which is what I call the communication protocols that translate the mental concepts in your brain into the data stream of language that you transmit: sounds, gestures, facial expressions, tone of voice, accents, etc. If the two "theories of mind" are compatible, then communication can proceed at a very high data rate, because the two theories of mind do so much data compression to fit the mental concepts into the puny data stream of language and to then extract them from it.

However, if the two theories of mind are not compatible, then the error rate goes up, and then via the uncanny valley effect xenophobia is triggered. This initial xenophobia is a feeling and so is morally neutral. How one then acts is not morally neutral. If one seeks to understand the person who has triggered xenophobia, then your theory of mind will self-modify and eventually you will be able to understand the person and the xenophobia will go away. If you seek to not understand the individual, or block that understanding, then the xenophobia will remain.

It is exactly analogous to Nietzsche's quote “if you look into the abyss, the abyss looks back into you”. We can only perceive something if we have pattern recognition for that something instantiated in our neural networks. If we don't have the neuroanatomy to instantiate an idea, we can't perceive the idea, we can't even think the idea. To see into the abyss, you have to have a map of the abyss in your visual cortex to decode the image of the abyss that is being received on your retina.

Bigots as a rule are incapable of understanding the objects of their bigotry (I am not including self-loathing here because that is a special case), and it shows, they attribute all kinds of crazy, wild, and completely non-realistic thinking processes to the objects of their bigotry. I think this was the reason why many invader cultures committed genocide on native cultures by taking children away from natives and fostering them with the invader culture (example US, Canada, Australia) (I go into more detail on that). What bigots often do is make up reasons out of pure fantasy to justify the hatred they feel toward the objects of their bigotry. The Blood Libel against the Jews is a good example. This was the lie that Jews used the blood of Christians in Passover rituals. This could not be correct. Passover long predated Christianity, blood is never kosher, human blood is never kosher, no observant Jew could ever use human blood in any religious ceremony. It never happened, it was a total lie. A lie used to justify the hatred that some Christians felt toward Jews. The hate came first, the lie was used to justify the feelings of hatred.

Bigots as a rule are afraid of associating with the objects of their bigotry because they will then come to understand them. The term “xenophobia” is quite correct. There is a fear of associating with the other because then some of “the other” will rub off on you and you will necessarily become more “other-like”. You will have a map that understands “the other” in your neuroanatomy.

In one sense, to the bigot, understanding “the other” is a “dangerous thought” because it changes the bigot's utility function such that certain individuals are no longer so low on the social hierarchy as to be treated as non-humans.

There are some thoughts that are dangerous to humans. These activate the "fight or flight" state in an uncontrolled manner, and that can be lethal. This usually requires a lot of priming (years); there are too many safeties that kick in for it to happen by accident. I think this is what Kundalini kindling is. For the most part there isn't enough direct coupling between the part of the brain that thinks thoughts and the part that controls the stuff that keeps you alive. There is some, and that can be triggered in a heartbeat when you are being chased by a bear, but there is a lot of feedback via feelings before you get to dangerous levels. I don't recommend trying to work yourself into that state, because the safeties do get turned off and it becomes quite dangerous (that is, unless a bear is actually chasing you).

Drugs of abuse can trigger the same things which is one of the reasons they are so dangerous.

comment by satt · 2010-07-13T20:17:00.221Z · LW(p) · GW(p)

You will not become a Mormon from reading the Book of Mormon. You will not become a Nazi from reading Mein Kampf, or a Communist from reading Das Kapital. You will not become a racist from reading Steve Sailer.

I will not, but...

comment by jimrandomh · 2010-07-13T18:29:20.051Z · LW(p) · GW(p)

With bigotry, I think the real problem is confirmation bias. If I believe, for example, that orange-eyed people have an average IQ of only 99, and that's true, then when I talk to orange-eyed people, that belief will prime me to notice more of their faults. This would cause me to systematically underestimate the intelligence of orange-eyed people I met, probably by much more than 1 IQ point. This is especially likely because I get to observe eye color from a distance, before I have any real evidence to go on.

In fact, for the priming effect, in most people the magnitude of the real statistical correlation doesn't matter at all. Hence the resistance to acknowledging even tiny, well-proven differences between races and genders: they produce differences in perception that are not necessarily on the same order of magnitude as the differences in reality.

Replies from: lmnop, Emile
comment by lmnop · 2010-07-13T20:34:23.673Z · LW(p) · GW(p)

This is exactly the crux of the argument. When people say that everyone should be taught that people are the same regardless of gender or race, what they really mean isn't that there aren't differences on average between women and men, etc., but that being taught about those small differences will cause enough people to significantly overshoot via confirmation bias that it will, overall, lead to more misjudgments of individuals than if people weren't taught about the differences at all; hence, people shouldn't be taught about them. I am hesitantly sympathetic to this view; it is borne out in many of the everyday interactions I observe, including those involving highly intelligent aspiring rationalists.

This doesn't mean we should stop researching gender or race differences, but that we should simultaneously research the effects of people learning about this research: how big are the differences between the perception and the reality of those differences? Are they big enough that anyone being taught about gender and race differences should also be taught about the risk of systematically misjudging many individuals because of that knowledge, and warned to remain vigilant against confirmation bias? When individuals are told to remain vigilant, do they still overshoot to an extent that they become less accurate in judging people than they were before they obtained this knowledge? I would have a much better idea how to proceed, both as a society and as an individual seeking to maximize my accuracy in judging people, after finding out the answers to these questions.

comment by Emile · 2010-07-14T11:29:32.185Z · LW(p) · GW(p)

Those are real and important effects (that should probably have been included in the original post).

A problem with avoiding knowledge that could lead you to discriminate is that it makes it hard to judge some situations - did James Watson, Larry Summers and Stephanie Grace deserve a public shaming?

Replies from: MichaelVassar
comment by MichaelVassar · 2010-07-15T17:05:21.046Z · LW(p) · GW(p)

Stephanie Grace: definitely not; she was sharing thoughts privately.

Summers? Not for sexism; he seemed honest and sincere in a desire to clarify issues and reach truth, but he displayed stupidity and gullibility which should be cause for shame in his position at Harvard, and to some degree as a broad social scientist and policy adviser, though not as an economic theorist narrowly construed.

Watson, probably. He said something overtly and exaggeratedly negative, said it publicly and needlessly, and has a specific public prestige which makes his words more influential. It's unfortunate that he didn't focus on some other issue, and public shame of this sort might reduce such unfortunate occurrences in the future.

Replies from: Emile
comment by Emile · 2010-07-16T12:14:59.436Z · LW(p) · GW(p)

I wasn't really looking for answers to that question, I was trying to say that if we avoid "dangerous information" (to avoid confirmation bias, etc.), and encourage others to avoid it too, we're making it harder to answer questions like that.

comment by teageegeepea · 2010-07-14T01:50:17.376Z · LW(p) · GW(p)

Bryan Caplan argues against the "corrupted by power" idea with an alternative view: they were corrupt from the start, which is why they were willing to go to such extremes to attain power.

Around the time I stopped believing in God and objective morality I came around to Stirner's view: such values are "geists" haunting the mind, often distracting us from factual truths. Just as I stopped reading fiction for reasons of epistemic hygiene, I decided that chucking morality would serve a similar purpose. I certainly wouldn't trust myself to selectively filter any factual information. How can the uninformed know what to be uninformed about?

comment by WrongBot · 2010-07-15T19:52:33.971Z · LW(p) · GW(p)

I've observed that quite a bit of the disagreement with the substance of my post is due to people believing that the level of distrust for one's own brain that I advocate is excessive. (See this comment by SarahC, for example.)

It occurs to me that I should explain exactly why I do not trust my own brain.

In the past week I have noted the following instances in which my brain has malfunctioned; each of them is a class of malfunction I had never previously observed in myself:

(It may be relevant to note that I have AS.)

  • I needed to open a box of plastic wrap, of the sort with a roll inside a box, a flap that lifts up, and a sharp edge under the flap. The front of the box was designed such that there were two sections separated by some perforation; there's a little set of instructions on the box that tells you to tear one of those sections off, thus giving you a functional box of plastic wrap. I spent approximately five minutes trying to tear the wrong section off, mangling the box and cutting my finger twice in the process. This was an astonishing failure to solve a basic physical task.

  • I was making bread dough, a process which necessitates measuring out 4.5 cups of flour into a bowl. My mind was not wandering to any unusual degree, nor was I distracted or interrupted. I lost count of the number of consecutive cups of flour I was pouring into the bowl; I failed to count to four and a half.

  • I was playing Puzzle Quest (a turn-based videogame that mostly involves match-3 play of the sort made popular by Bejeweled) while reading comments on LessWrong, switching between tasks every few minutes. I find that doing this gives me time to think over things I've just read; it's also fun. At one point, as I was switching from looking at a comment I had just finished reading to looking at my TV screen, I suddenly began to believe that matching colored gems was the process by which one constructed sound arguments. In general. This sensation lasted approximately five seconds before reality reasserted itself.

I might not have even really noticed these brain malfunctions if I hadn't spent significant effort recently on becoming more luminous; I'm inclined to believe that there have been plenty of other such events in the past that I have failed to notice.

In any case, I hope this explains why I am so afraid of my own brain.

Replies from: David_Gerard, Swimmer963, hesperidia, Vaniver
comment by David_Gerard · 2010-11-23T21:06:40.160Z · LW(p) · GW(p)

I suddenly began to believe that matching colored gems was the process by which one constructed sound arguments.

Someone has to write this game.

Replies from: WrongBot
comment by WrongBot · 2010-11-23T21:36:32.296Z · LW(p) · GW(p)

I'm imagining some kind of sliding-block puzzle game, with each block as a symbol or logical operator. You start off with some axioms and then have to go through and construct proofs for progressively more complex first-order logic expressions.

Or maybe a game that does for syllogisms what Manufactoria does for Turing Machines. (Memetic hazard warning!)

This could be promising...
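As a rough, purely hypothetical sketch of how the core "move" of such a game might be represented (the encoding and names below are invented for illustration, not a design):

```python
# Hypothetical sketch of the game's core loop: the "board" is a set of proved
# formulas, and each "move" applies one inference rule. Only modus ponens is
# shown; implications are encoded as tuples ("->", antecedent, consequent).

def modus_ponens(proved, antecedent, implication):
    """One move: from A and ("->", A, B), add B to the set of proved formulas."""
    legal = (
        antecedent in proved
        and implication in proved
        and isinstance(implication, tuple)
        and implication[0] == "->"
        and implication[1] == antecedent
    )
    return proved | {implication[2]} if legal else proved  # illegal move: board unchanged

axioms = {"P", ("->", "P", "Q"), ("->", "Q", "R")}
board = modus_ponens(axioms, "P", ("->", "P", "Q"))   # derives "Q"
board = modus_ponens(board, "Q", ("->", "Q", "R"))    # derives "R"
print("R" in board)  # True: the target formula has been proved
```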

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-03-28T14:04:48.877Z · LW(p) · GW(p)

I spent approximately five minutes trying to tear the wrong section off, mangling the box and cutting my finger twice in the process. This was an astonishing failure to solve a basic physical task.

I have a tendency to do this if I want to solve a basic task and someone is watching me, especially a teacher. (I'm in nursing school, so a lot of my evaluations consist of my teacher watching me assemble equipment, not something I'm talented with to begin with.) Alone, I'll just start experimenting with different ways until I find one that works, but if I'm being watched and implicitly evaluated, paradoxically enough I'll keep trying the same failed way over again until they correct me. I don't know if this is a weird illogical attempt to avoid embarrassment, or if I'm subconsciously trying to hasten the moment that they'll just go ahead and tell me, or if it's just because enough of my brain is taken up worrying about someone watching me that the leftovers aren't capable of thinking about the task, and just default to random physical actions.

I lost count of the number of consecutive cups of flour I was pouring into the bowl; I failed to count to four and a half.

I do this all the time, too. Maybe because my default state, when I'm alone and not under pressure to do something, is a kind of relaxed spacey-ness where I let my thoughts go on whatever association trains they please. People make fun of me for this, and it is irritating, but it's something I'm slowly learning to "switch off" when I really, really have to be focusing my whole attention on something.

I suddenly began to believe that matching colored gems was the process by which one constructed sound arguments. In general. This sensation lasted approximately five seconds before reality reasserted itself.

This kind of thinking happens to me all the time in the state between sleeping and waking, or during dreams themselves. It's occasionally happened to me while awake. I don't find it particularly concerning, since it's easy to notice and wears off fast.

comment by hesperidia · 2012-03-17T17:30:14.172Z · LW(p) · GW(p)

Noting that this thread is nearly two years old: AS is highly correlated with deficiency in executive function. This would explain the bread incident, although not the other two.

Replies from: WrongBot, gwern
comment by WrongBot · 2012-03-17T19:03:00.746Z · LW(p) · GW(p)

In the intervening time I've also been convinced that I have ADD, or at least something that looks like it. My executive function is usually pretty decent.

comment by gwern · 2012-03-17T18:48:38.617Z · LW(p) · GW(p)

Is 'AS' supposed to mean 'Asperger's Syndrome'? I was thinking so and the bread incident does sound like an executive control problem, but the third TV incident sounds more like a schizophrenic sort of hallucination.

Replies from: Blueberry
comment by Blueberry · 2012-03-28T03:24:06.250Z · LW(p) · GW(p)

It sounds like the type of unusual, creative, synaesthetic association that can occur under the influence of cannabis or psilocybin mushrooms, or just sleep deprivation.

comment by Vaniver · 2010-11-23T20:33:00.607Z · LW(p) · GW(p)

I was making bread dough, a process which necessitates measuring out 4.5 cups of flour into a bowl. My mind was not wandering to any unusual degree, nor was I distracted or interrupted. I lost count of the number of consecutive cups of flour I was pouring into the bowl; I failed to count to four and a half.

About half of my caloric intake is bread I bake, and I am terrible at counting. I keep a stack of pennies handy for exactly this reason.

comment by red75 · 2010-07-13T16:27:24.205Z · LW(p) · GW(p)

One specific and relatively common version of this are people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman.

Your evidence is not quite about beliefs. I think the correct version is:

People who don't mind sharing that they believe that women have a lower... etc.

Replies from: Douglas_Knight, army1987
comment by Douglas_Knight · 2010-07-13T17:30:44.019Z · LW(p) · GW(p)

Another version is that bigots can't shut up about it.

comment by A1987dM (army1987) · 2012-08-11T18:37:03.893Z · LW(p) · GW(p)

Yeah. I have that belief too, but I don't point it out unless that's particularly relevant to the conversation, nor do I try to steer conversations towards that region of topicspace unless I have some compellingly strong reason to do that.

comment by NancyLebovitz · 2010-07-13T12:07:34.722Z · LW(p) · GW(p)

I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.

This is something I haven't observed, but it's seemed plausible to me anyway. Have there been any studies (even small, lightweight studies with hypothetical trait differences) showing that sort of overshoot? If there are, why don't they get the sort of publicity that studies which show differences get?


Speaking of AIs getting out of the box, it's conceivable to me that an AI could talk its way out. It's a lot less plausible that an AI could get it right the first time.


And here's a thought which may or may not be dangerous, but which spooked the hell out of me when I first realized it.

Different groups have different emotional tones, and these are kept pretty stable by social pressure. Part of the social pressure is usually the claim that the particular tone is superior to the alternatives (nicer, more honest, more fun, more dignified, etc.). The shocker was when I realized that the emotional tone is almost certainly the result of what a few high-status members of a group prefer or preferred, but the emotional tone is generally defended as though it's morally superior. This is true even in troll groups, who claim that emotional toughness is more valuable than anything which can be gained by not being insulting.

Replies from: rhollerith_dot_com, Airedale, Morendil, Jonathan_Graehl
comment by RHollerith (rhollerith_dot_com) · 2010-07-13T17:25:39.826Z · LW(p) · GW(p)

Different groups have different emotional tones . . . (nicer, more honest, more fun, more dignified, etc.).

Downvotes have caused me to put a lot of effort into changing the tone of my communications on Less Wrong so that they are no longer significantly less agreeable (nice) than the group average.

In the early 1990s the newsgroups about computers and other technical subjects were similar to Less Wrong: mostly male, mean IQ above 130, vastly denser in libertarians than the population of any country, the best place online for people already high in rationality to improve their rationality.

Aside from differences in the "shape" of the conversation caused by differences in the "mediating" software used to implement the conversation, the biggest difference between the technical newsgroups of the early 1990s and Less Wrong is that the tone of Less Wrong is much more agreeable.

For example, there was much less evidence, IIRC, of a desire to spare someone's feelings on the technical newsgroups of the early 1990s, and flames (impassioned harangues of a length almost never seen in comments here and of a level of vitriol very rare here) were very common -- but then again the mediating software probably pulled for deep nesting of replies more than Less Wrong's software does, and most of those flames occurred in very deeply nested flamewars with only 2 or 3 participants.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-07-13T20:29:10.044Z · LW(p) · GW(p)

Having seen both types of tone, which do you think is more effective in improving rationality and sharing ideas?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-07-13T22:20:09.371Z · LW(p) · GW(p)

The short answer is I do not know.

The slightly longer answer is that it probably does not matter unless the niceness reaches the level at which people become too deferential towards the leaders of the community, a failure mode that I personally do not worry about.

Parenthetically, none of the newsgroups I frequented in the 1990s had a leader, unless my memory is epically failing me right now. Erik Naggum came the closest (on comp.lang.lisp), but maintaining his not-quite-leader status required him to expend a prodigious amount of time (and words) to continue to prove his expertise and commitment to Lisp and to browbeat other participants. (And my guess is that the constant public browbeating cost him at least one consulting job. It certainly did not make him look attractive.)

The most likely reason for the emotional tone of LW is that the participants the community most admire have altruism, philanthropy or a refined kind of friendliness as one of their primary motivations for participation, and for them to maintain a certain level of niceness is probably effortless or well-rehearsed and instrumentally very useful.

Specifically, Eliezer and Anna have altruism, philanthropy or human friendliness as one of their primary motivations with probability .9. There are almost certainly others here with that as one of the primary motivations, but they are hard for me to read or I just do not have enough information (in the form of either a large body of online writings like Eliezer's or sufficient face time) to form an opinion worth expressing.

More precisely, if they were less nice than they are, it would be difficult for them to fulfill their mission of improving people's rationality and networking to reduce e-risks, but if they were too nice it would have too much of an inhibitory effect on the critical (judgemental) faculties of them and their interlocutors. So they end up being less nice than the average suburban Californian, say, but significantly nicer than most of the online communities frequented by programmers and others whose work relies heavily on the critical faculty, i.e., whose work requires being able to perceive very subtle faults in something.

In other words, I have a working hypothesis that there is a tension between the internal emotional state optimal for "interpersonal" goals (like networking and teaching rationality) and the state optimal for making a rational analysis of a situation or argument. This tension certainly exists for me. I have no direct evidence that the same tension exists for the leaders of this community, but again that is my tentative hypothesis.

So, IMHO the important question is not the effects of the current level of niceness but rather the effects of altruistically motivated participants. I should share my thinking on that some day when I have more time.

comment by Airedale · 2010-07-13T15:38:36.237Z · LW(p) · GW(p)

I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.

This is something I haven't observed, but it's seemed plausible to me anyway. Have there been any studies (even small, lightweight studies with hypothetical trait differences) showing that sort of overshoot? If there are, why don't they get the sort of publicity that studies which show differences get?

I would also be interested in hearing if there are any studies on this subject. For me, much of WrongBot's argument hangs on how accurate these observations are. I'm still not sure I'd agree with the overall point, but more evidence on this point would make me much more inclined to consider it.

Also, WrongBot, it seems possible that the observations you've made could have alternate explanations; e.g., the people that you have witnessed change their behavior based on scientific results may not have been as originally unbiased or reluctant to change their minds on these subjects as you had believed them to be.

In other words, there may be a chicken/egg problem here. Did these people that you observed really become more bigoted/discriminatory after accepting the truth of certain studies, or did (perhaps subconscious) bigotry actually lead them to accept (and even seek out) studies showing results that confirmed this bigotry and gave them "cover" to discriminate?

Replies from: WrongBot
comment by WrongBot · 2010-07-13T18:29:06.571Z · LW(p) · GW(p)

I didn't look hard enough for more evidence for this post, and I apologize.

I've recently turned up:

  • A study on clapping indicated that people believe very strongly that they can distinguish between the sounds of clapping produced by men and women, when in reality they do only slightly better than chance. The relevant section starts at the bottom of the 4th page of that PDF. This is weak evidence that beliefs about gender influence a wide array of situations, often unconsciously.

  • This paper on sex-role beliefs and sex-difference knowledge in schoolteachers may be relevant, but it's buried behind a pay-wall.

  • Lots of studies like this one have documented how gender prejudices subconsciously affect behavior.

  • And here's a precise discussion of exactly the effect I was describing. Naturally, it too is behind a pay-wall.

comment by Morendil · 2010-07-13T17:18:58.738Z · LW(p) · GW(p)

The shocker was when I realized that the emotional tone is almost certainly the result of what a few high-status members of a group prefer or preferred

Yes, if you have gained temporary influence over others one of the ways you can put that to further use is by trading that influence into an environment that accords with your preferences.

but the emotional tone is generally defended as though it's morally superior

Regardless of how it comes to be established as a social norm, it could be that a particular tone is more suited to a particular purpose, for instance truth-seeking or community-building or fund-raising.

(For instance, academics have a strong norm of writing in an impersonal tone, usually relying on the passive voice to achieve that. This could either be the result of contingent pressure exerted by the people who founded the field, or it could be an antidote to inflamed rhetoric which would detract from the arguments of fact and inference.)

Replies from: Sniffnoy
comment by Sniffnoy · 2010-07-13T21:31:38.540Z · LW(p) · GW(p)

Yes, if you have gained temporary influence over others one of the ways you can put that to further use is by trading that influence into an environment that accords with your preferences.

What exactly is spent here? It looks like this is something someone with enough status in the group can do "for free".

Replies from: Morendil
comment by Morendil · 2010-07-13T21:43:11.549Z · LW(p) · GW(p)

I don't think it's ever free to use your influence over a group. Do it too often, and you come across as a despot.

As a local example, Eliezer's insistence on the use of ROT13 for spoilerish comments carried through at some status "cost" when a few dissenters objected.

comment by Jonathan_Graehl · 2010-07-13T16:51:19.324Z · LW(p) · GW(p)

Your point about tone being set top-down (by the high-status, or by inertia in the established community) seems to me to explain why there are so many genuinely vicious people among netizens who talk rationally and honestly about differences in populations (essentially anti-PC) - even beyond what you'd expect given that they're rebelling against an explicit "be nice" policy that most people assent to.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-13T17:05:07.897Z · LW(p) · GW(p)

I'm not sure about the connection you're making. Is it combining my points that tone is set from the top, and people are apt to overshoot their prejudices beyond their evidence?

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-07-13T18:55:03.968Z · LW(p) · GW(p)

My old theory about the nastiness of some anti-PC reactionaries was that they came to their view out of some animus.

Your suggestion that communities' tones may be determined by that of a small number of incumbents serves as an alternative, softening explanation.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-13T22:44:27.562Z · LW(p) · GW(p)

I think it's complicated. Some of it probably is animus, but it wouldn't surprise me if some of it isn't about the specific topic so much as resentment at having the rules changed with no acknowledgement made that rule changes have costs for those who are obeying them.

comment by xamdam · 2010-07-13T23:46:31.885Z · LW(p) · GW(p)

I think this is a worthwhile discussion.

Here are some "true things" I don't want to know about:

  • the most catchy commercial jingle in the universe
  • what 2g1c looks like. I managed to avoid it thus far
  • the day I am going to die
Replies from: teageegeepea, None, Emile, ABranco, None
comment by teageegeepea · 2010-07-14T01:52:49.109Z · LW(p) · GW(p)

I'm surprised about the last one. I think it would be quite helpful if you could be prepared for that.

The other two are experiences you wouldn't like to have. If you had the indexical knowledge of what the catchiest jingle was, you could better avoid hearing it.

Replies from: xamdam
comment by xamdam · 2010-07-15T19:18:36.116Z · LW(p) · GW(p)

if you could be prepared for it

That's a big if ;)

I am not.

comment by [deleted] · 2010-07-14T00:33:05.108Z · LW(p) · GW(p)

I have to admit there's information I shield myself from as well.

  1. I don't like watching real people die on video. I worry about getting desensitized/dehumanized.

  2. I don't want to see 2g1c either. (by extension, most of the grungier parts of the intertubes.)

  3. I don't want to know (from experience) what heroin feels like.

I do know people who believe in total desensitization -- they think that the reflex to shudder or gag is something you have to burn out of yourself. I don't think I want that for myself, though.

Replies from: gwern
comment by gwern · 2010-07-14T10:54:26.411Z · LW(p) · GW(p)

I don't want to see 2g1c either. (by extension, most of the grungier parts of the intertubes.)

You know, those shock videos are not as bad as they look. 2g1c is usually thought to be something along the lines of chocolate, and the infamous Tubgirl is known to be just orange juice.

(Which makes sense; eating feces is a good way to get sick.)

comment by Emile · 2010-07-14T10:39:19.734Z · LW(p) · GW(p)
If you tell me the wild boar
Has twenty teeth, I’ll say, “Why sure.”
Or say that he has thirty three,
That number is quite all right with me
Or scream that he has ninety-nine
I’ll never say that you are lyin’,
For the number of teeth
In a wild boar’s mouth
Is a subject I’m glad
I know nothing about.

-- Shel Silverstein

comment by ABranco · 2010-07-19T04:23:13.040Z · LW(p) · GW(p)

It's not obvious that knowing more always makes us better off — because the landscape of rationality is not smooth.

The quote on Eliezer's site stating that "That which can be destroyed by the truth should be" sounded to me like too strong a claim from the very first time I read it. Many people cultivate falsehoods or use blinkers that are absolutely necessary to the preservation of their sanity (sic), and removing them could terribly jeopardize their adaptability to the environment. It could literally kill them.

comment by [deleted] · 2015-07-06T16:30:36.512Z · LW(p) · GW(p)

I suppose this translates to things you already know, but don't want to consciously attend to. For instance, I feel compelled by the Essendon Football Club's slogan:

'Stand as One - One Team. One Dream. Click here or call 1300 GO BOMBERS to become a member and be part of the Bombers team today. Exclusive to essendonfc.com.au'.

While I am tempted to mull it over for a while to dissect its secrets, I am unlikely, from experience, to get anything meaningful out of it that I could apply to increase any consequential skill set. Therefore, I'll attend to some other thought associated with my immediate environmental stimuli.

comment by satt · 2010-07-13T21:38:25.586Z · LW(p) · GW(p)

Here's something that might work as an alternative example that doesn't imply as much bigotry on anybody's part: a PNAS study from earlier this year found that during a school year, schoolgirls with more maths-anxious female maths teachers appear to develop more stereotyped views of gender and maths achievement, and do less well in their maths classes.

Let's suppose the results of that study were replicated and extended. Would a female maths teacher be justified in refusing to think about the debate over sex and IQ/maths achievement, on the grounds that doing so is likely to generate maths anxiety and so indirectly harm their female students' maths competence?

[Edited so the hyperlink isn't so long & ugly.]

comment by knb · 2010-07-13T18:08:51.284Z · LW(p) · GW(p)

I really disagree with your argument, WrongBot. First of all, I think responding appropriately to "dangerous" information is an important task, and one which most LW folks can achieve.

In addition, I wonder if your personal observations about people who become bigots by reading "dangerous content" are actually accurate. People who are already bigots (or are predisposed to bigotry) are probably more likely to seek out data that "confirms" their assumptions. So your anecdotal observation may be produced by a selection effect.

At bare minimum, you should give us some information about the sample your observations are based on. For example you say:

One specific and relatively common version of this are people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman. There may be exceptions, but I haven’t met them.

This could mean you've met a couple people like this, and never met anyone else who has encountered this data. In any case, you really don't have enough data to draw the extreme conclusion that you should ignore data.

In any case, the most fundamental problem with your point is that any attempt to preemptively prevent yourself from acquiring dangerous information is predicated on you already knowing the "dangerous" part. You can spend the rest of your life avoiding data about IQ/SAT scores, but you already know that women's scores vary somewhat less than men's. (Anyway, I fail to see how expecting somewhat less variance in women would affect behavior in real life.)

comment by Emile · 2010-07-13T07:55:09.107Z · LW(p) · GW(p)

This seems to be bordering on Dark Side epistemology - and doesn't seem very well aligned with the name of this site.

Another argument against digging into some of the red flag issues is that you might acquire unpopular opinions, and if you're bad at hiding those, you might suffer negative social consequences.

Replies from: WrongBot
comment by WrongBot · 2010-07-13T15:03:13.822Z · LW(p) · GW(p)

Dark Side epistemology is about protecting false beliefs, if I understand the article correctly. I'm talking about protecting your values.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-13T15:25:27.403Z · LW(p) · GW(p)

Anti-epistemology (the updated term for the concept) is primarily about developing immunity to rational argument, allowing you to stop the development of your understanding (of factual questions, or of moral questions) and keep incorrect answers (which usually signal belonging to a group) indefinitely. In worse forms, it fosters the development of incorrect understanding as well.

comment by Vladimir_Nesov · 2010-07-13T08:16:36.781Z · LW(p) · GW(p)

I agree with the overall point: certain thoughts can make you worse off.

Whether it's difficult to judge which information is dangerous, and whether a given heuristic for judging that will turn into an anti-epistemic disaster, is about solving the problem, not about the existence of the problem. In fact, a convincing argument for using a flawed knowledge-avoiding heuristic would itself be the kind of knowledge one should avoid being exposed to.

If we have an apparently unsolvable problem, with most hypothetical attempts at solution leading to disaster, we shouldn't therefore declare it illusory and deem mentioning it irresponsible.

Edit: See also WrongBot's analysis of why the post gets a negative reaction for the wrong reasons.

comment by JoshuaZ · 2010-07-13T04:54:57.319Z · LW(p) · GW(p)

This advice bothers me a lot. Labeling possibly true knowledge as dangerous knowledge (as the example with statements about average behavior of groups) is deeply worrisome and is the sort of thing that if one isn't careful would be used by people to justify ignoring relevant data about reality. I'm also concerned that this piece conflates actual knowledge (as in empirical data) and things like group identity which seems to be not so much knowledge but rather a value association.

Replies from: WrongBot
comment by WrongBot · 2010-07-13T15:03:20.366Z · LW(p) · GW(p)

I am grouping together "everything that goes into your brain," which includes lots and lots of stuff, most of it unconscious. See research on priming, for example.

This argument is explicitly about encouraging people to justify ignoring relevant data about reality. It is, I recognize, an extremely dangerous proposition, of exactly the sort I am warning against!

At risk of making a fully general counterargument, I think it's telling that a number of commenters, yourself included, have all but said that this post is too dangerous.

  • You called it "deeply worrisome."
  • RichardKennaway called it "defeatist scaremongering."
  • Emile thinks it's Dark Side Epistemology. (And see my response.)

These are not just people dismissing this as a bad idea (which would have encouraged me to do the same); these are people worrying about a dangerous idea. I'm more convinced I'm right than I was when I wrote the post.

Replies from: Vladimir_Nesov, Jonathan_Graehl, Bongo, mattnewport, JoshuaZ
comment by Vladimir_Nesov · 2010-07-13T15:19:38.255Z · LW(p) · GW(p)

Heh. So most of the critics argue their disapproval of the argument in your post based essentially on the same considerations as discussed in the post.

comment by Jonathan_Graehl · 2010-07-13T16:43:59.736Z · LW(p) · GW(p)

It doesn't make you right. It just makes them as wrong (or lazy) as you.

If you feel afraid that incorporating a belief would change your values, that's fine. It's understandable that you won't then dispassionately weigh the evidence for it; perhaps you'll bring a motivated skepticism to bear on the scary belief. If it's important enough that you care, then the effort is justified.

However, fighting to protect your cherished belief is going to lead to a biased evaluation of evidence, so refusing to engage the scary arguments is just a more extreme and honest version of trying to refute them.

I'd justify both practices situationally: considering the chance you weigh the evidence dispassionately but get the answer quite wrong (even your confidence estimation is off), you can err on the side of caution in protecting your most cherished values. That is, your objective function isn't just to have the best Bayesian-rational track record.

comment by Bongo · 2010-07-14T07:56:51.432Z · LW(p) · GW(p)

Your post is not dangerous knowledge. It's dangerous advice about dangerous knowledge.

comment by mattnewport · 2010-07-13T18:21:07.988Z · LW(p) · GW(p)

These are not just people dismissing this as a bad idea (which would have encouraged me to do the same); these are people worrying about a dangerous idea. I'm more convinced I'm right than I was when I wrote the post.

Becoming more convinced of your own position when presented with counterarguments is a well known cognitive bias.

Replies from: WrongBot
comment by WrongBot · 2010-07-13T18:38:03.857Z · LW(p) · GW(p)

Knowing about biases may have hurt you. The counterarguments are not what convinced me; it's that the counterarguments describe my post as bad because it belongs to the class of things that it is warning against.

There are other counterarguments in the comments here that have made me less convinced of my position; this is not a belief of which I am substantially certain.

comment by JoshuaZ · 2010-07-13T15:14:05.038Z · LW(p) · GW(p)

"Deeply worrisome" may have been bad wording on my part. It might be more accurate to say that this is an attitude which is so much more often wrong than right that it is better to acknowledge the low probability of such knowledge existing but not actually deliberately keep knowledge out.

comment by Desrtopa · 2010-11-26T15:05:12.333Z · LW(p) · GW(p)

One specific and relatively common version of this are people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman. There may be exceptions, but I haven’t met them.

I'm skeptical of the notion that people tend to lower their intelligence estimates of women they meet as a result of this as opposed to using it as an excuse to reinforce their preexisting inclination to have a lower intelligence estimate of women than of men.

Replies from: Dmytry
comment by Dmytry · 2012-03-17T19:01:53.079Z · LW(p) · GW(p)

Ya. Plus, technically, a smaller standard deviation makes for extreme differences in frequencies at the high end of the IQ range (not that I believe in it or anything).
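
A minimal sketch of that purely statistical point, with made-up numbers: the common mean, the two standard deviations, and the cutoffs below are hypothetical, chosen only to show the shape of the effect, not to describe any real population.

```python
# Toy illustration only: hypothetical mean, SDs, and cutoffs.
import math

def normal_tail(cutoff, mean, sd):
    """P(X > cutoff) for a normal distribution with the given mean and sd."""
    return 0.5 * math.erfc((cutoff - mean) / (sd * math.sqrt(2)))

mean = 100.0                       # assumed common mean (made up)
sd_wide, sd_narrow = 15.0, 14.0    # hypothetical standard deviations
for cutoff in (115, 130, 145):
    wide = normal_tail(cutoff, mean, sd_wide)
    narrow = normal_tail(cutoff, mean, sd_narrow)
    print(f"cutoff {cutoff}: tail ratio (wide SD / narrow SD) = {wide / narrow:.2f}")
```

The ratio grows as the cutoff moves further into the tail, which is all the comment above is claiming.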

comment by Matt_Simpson · 2010-07-13T15:47:45.010Z · LW(p) · GW(p)

I agree with the main point of this post, but I think it could have used a more thorough, worked out example. Identity politics is probably the best example of your point, but you barely go into it. Don't worry about redundancy too much; not everyone has read the original posts.

FWIW, my personal experience with politics is an anecdote in your favor.

comment by [deleted] · 2010-07-13T14:38:44.167Z · LW(p) · GW(p)

One specific and relatively common version of this are people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman.

I don't think that this requires a utility-function-changing superbias. Alternatively: We think sloppily about groups, flattening fine distinctions into blanket generalizations. This bias takes the fact "women have a lower standard deviation on measures of IQ than men" as input and spits out the false fact "chicks can't be as smart as guys". If a person updates on this nonfact, and he tends to value less-intelligent individuals less and treat them differently, his valuation of all women will shift downward, fully in accordance with his existing utility function.

Placing "a high value on not discriminating against sentient beings on the basis of artifacts of the birth lottery" is not a common position. Most people discriminate freely on an individual basis. They also aren't aware of cognitive biases or how to combat them. Perhaps it's safer not to learn about between-group differences under those circumstances.

Strange advice for Less Wrong, though.

Replies from: RobinZ
comment by RobinZ · 2010-07-13T15:23:19.907Z · LW(p) · GW(p)

One argument you could give a Less Wrong audience is that the information about intelligence you could learn by learning someone's gender is almost completely screened off by the information content gained by examining the person directly (e.g. through conversation, or through reading research papers).

Replies from: lmnop
comment by lmnop · 2010-07-13T20:58:08.911Z · LW(p) · GW(p)

That is exactly what should happen, but I suspect that in real life it doesn't, largely because of anchoring and adjustment.

Suppose I know the average intelligence of a member of Group A is 115, and the average intelligence of a member of Group B is 85. After meeting and having a long, involved conversation with a specific member of either group, I should probably toss out my knowledge of the average intelligence of their group and evaluate them based on the (much more pertinent) information I have gained from the conversation. But if I behave like most people do, I won't do that. Instead, I'll adjust my estimate from the original estimate supplied by the group average. Thus, my estimate of the intelligence of a particular individual from Group A will still be very different than my estimate of the intelligence of a particular individual from Group B with the same actual intelligence even after I have had a conversation (or two, or three) with both of them. How many conversations does it take for my estimates to converge? Do my estimates ever converge?
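
One way to make that convergence question concrete is a toy simulation. Every number here is hypothetical, and "anchoring" is crudely modeled as under-adjusting toward each new signal; this is a sketch of the mechanism, not a model of real judgment.

```python
# Toy model only: hypothetical numbers; each "conversation" is idealized as an unbiased signal.
def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Posterior mean/variance for a normal prior after one noisy normal observation."""
    precision = 1.0 / prior_var + 1.0 / obs_var
    post_mean = (prior_mean / prior_var + obs / obs_var) / precision
    return post_mean, 1.0 / precision

def simulate(prior_mean, true_value, n_conversations, obs_sd=10.0, adjust=0.15):
    bayes_mean, bayes_var = prior_mean, 15.0 ** 2   # group-based prior (made up)
    anchored = prior_mean                           # anchoring-and-adjustment estimate
    for _ in range(n_conversations):
        obs = true_value                            # idealized: the conversation reveals the true value
        bayes_mean, bayes_var = bayes_update(bayes_mean, bayes_var, obs, obs_sd ** 2)
        anchored += adjust * (obs - anchored)       # only partially adjust toward the evidence
    return bayes_mean, anchored

for prior in (115, 85):  # the hypothetical Group A and Group B averages above
    for n in (1, 3, 10):
        b, a = simulate(prior_mean=prior, true_value=100, n_conversations=n)
        print(f"prior {prior}, {n} conversations: Bayesian {b:.1f}, anchored {a:.1f}")
```

Under proper updating the group prior is quickly swamped; with under-adjustment, a gap between the two individuals' estimates persists for many conversations, and how fast it closes depends entirely on the assumed adjustment fraction.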

Replies from: mattnewport
comment by mattnewport · 2010-07-13T21:11:15.439Z · LW(p) · GW(p)

After meeting and having a long, involved conversation with a specific member of either group, I should probably toss out my knowledge of the average intelligence of their group and evaluate them based on the (much more pertinent) information I have gained from the conversation.

If your goal is to accurately judge intelligence this may not be a good approach. Universities moved away from basing admissions decisions primarily on interviews and towards emphasizing test scores and grades because 'long, involved conversation' tends to result in more unconscious bias than simpler, more objective measures when it comes to judging intelligence (at least as it correlates with academic achievement).

Unless you have strong reason to believe that all the unconscious biases that come into play in face-to-face conversation are likely to be just about right to balance out any biases based on preconceptions of particular groups, you are just replacing one source of bias (preconceived stereotypes based on group membership) with another (responses to biasing factors in face-to-face conversation such as physical attractiveness, accent, shared interests, body language, etc.).

comment by Simplicius · 2011-02-23T22:06:41.091Z · LW(p) · GW(p)

Actually I think that if differences in group (sex, race, ethnicity, class, caste) intelligence (IQ) means and distributions proved to be of genetic origin, this would be a net gain in utility, since it would increase public acceptance of genetic engineering and spending on gene-based therapies.

BTW, we already know that the differences are real, in the sense that they are measured and we have tried our very best to get rid of, say, cultural bias; and since proving that they aren't culturally biased is impossible, it's misleading to talk of "if differences proved to be real" as some posters have done. It's more accurate to say "if differences proved to be mostly genetic in origin".

Which reminds me, we also know that some of the differences are caused by environmental factors; the so-called hereditarian (known as nature, or genetic) position is actually dominated by a model that ascribes about equal weight to environment and genetics. And even experts who are generally labelled as "nurture" supporters, like, say, the respected James Flynn, have said that they aren't ruling out a small genetic component.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2011-02-23T23:11:14.988Z · LW(p) · GW(p)

experts who are generally labelled as "nurture" supporters like say the respected Richard Lynn

I think you may be confusing Richard Lynn (author of such books as Race Differences in Intelligence: An Evolutionary Analysis) with James Flynn (of Flynn effect fame).

Replies from: Simplicius
comment by Simplicius · 2011-02-24T11:47:08.026Z · LW(p) · GW(p)

Yes I actually did. Corrected.

This is an interesting failure since before I checked back on this post I was 100% certain I put James Flynn.

Replies from: wedrifid
comment by wedrifid · 2011-02-24T15:43:44.427Z · LW(p) · GW(p)

This is an interesting failure since before I checked back on this post I was 100% certain I put James Flynn.

100% certain and wrong? Ooops, there goes your entire epistemic framework. :)

Replies from: Simplicius
comment by Simplicius · 2011-02-24T21:49:54.332Z · LW(p) · GW(p)

Lol yes I see why using that phrase on this site is a bit funny.

Still updating on the language used here. Wonderful site.

comment by retiredurologist · 2010-07-13T19:32:47.654Z · LW(p) · GW(p)

WrongBot: Brendan Nyhan, the Robert Wood Johnson scholar in health policy research at the University of Michigan, spoke today on Public Radio's "Talk of the Nation" about a bias that may be reassuring to you. He calls it the "backfire effect". He says new research suggests that misinformed people rarely change their minds when presented with the facts -- and often become even more attached to their beliefs. The Boston Globe reviews the findings here as they pertain to politics. If this is correct, it seems quite likely that if you have strong anti-bigot beliefs, and you are exposed to "dangerous factual thoughts" that might conceivably sway you toward bigotry, the backfire effect should make you defend your original views even more vigorously, thus acting as a protective bias. OTOH, while listening, I wondered, "Is Nyhan saying that the only factual positions one can assume are those about which one had no previous opinion or knowledge?" Best wishes to overcome your phobia.

comment by PlaidX · 2010-07-13T07:54:39.203Z · LW(p) · GW(p)

Certain patterns of input may be dangerous, but knowledge isn't a pattern of input; it can be formatted in a myriad of ways, and it's not generally that hard to find a safe one. There's a picture of a french fry that crashes AOL Instant Messenger, but that doesn't mean it's the french fry that's the problem. It's just the way it's encoded.

comment by David_Gerard · 2010-11-23T21:13:41.424Z · LW(p) · GW(p)

I'm working on something on the subject of dangerous and predatory memes. And oh yes, predatory memes exist.

Please read this thread. When anyone talks about this sort of thing, the first reaction is "It can't happen to me, I'm far too smart for that". When it is pointed out how many people who fell for such things thought precisely that, the next reaction is a longer and more elaborate version of "It can't happen to me, I'm far too smart for that".

I'm thinking the very hardest bit is going to be getting across to people that it can happen to you. Even if you're smart and know all about biases (though knowing about them does not mean you don't have them) and think of yourself as rational and so forth. The predatory memes have evolved to eat people who think it can't happen to them.

It is quite possible I am being overcautious. Well, fine.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-23T21:37:15.877Z · LW(p) · GW(p)

I'm thinking the very hardest bit is going to be getting across to people that it can happen to you. [...] The predatory memes have evolved to eat people who think it can't happen to them.

Certainly there are people who can't be infected with strong cultish memes, and when those people believe that it can't happen to them, they are correct. There are also people who believe so incorrectly, but this is not a strong argument for the impossibility of holding that belief correctly. You seem to be overstating the case, implying undue confidence.

Replies from: David_Gerard
comment by David_Gerard · 2010-11-23T22:28:54.463Z · LW(p) · GW(p)

Yes, I seem to be stating it as 1 rather than as a high percentage. This is hyperbole, sorry. What I mean to get across is that it's higher than most people think. Particularly for people who consider that they think better than others. Thinking better than most people isn't actually that hard, and with enough LessWrong you may think quite a lot better. You still have all your cognitive biases - they're in the buggy, corrupt hardware. Knowing about them doesn't grant you immunity to them.

WrongBot gives an anecdote of just how wrong a brain can be. You Are Not So Smart's about page gives a summary of the problem and the blog itself gives the examples. I try to notice my own stupidities and I miss a ton (my loved ones are happy to help my awareness). In general, people don't have a keen sense for their own stupidities, and learning how to be rational can induce a hubris where one thinks one isn't susceptible any more. (What is the correct term for this bias?)

I do think it likely that any mind will have susceptibilities and exploits. Consider the AI box experiment. Even a human can think of an argument to convince a human to do the thing they really, really shouldn't when the subject knows the game and that the game is on; what could a human or evolved meme do when the subject isn't aware the game is on or that there's a game?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-23T22:42:55.764Z · LW(p) · GW(p)

You still have all your cognitive biases - they're in the buggy, corrupt hardware. Knowing about them doesn't grant you immunity to them.

What are you arguing for using these arguments? Being protected from cults doesn't require lack of bias, and indeed lack of bias is an unattainable idealization.

If you argue that the presence of biases knowably confers overconfidence in the belief "I can't be captured by a cult", then correcting for that knowable bias leaves you no longer knowably biased. Since this can be said about any belief, it's not clear why it should be said about this particular one, unless you believe that this belief is more systematically incorrect than others. But then you need to argue about what distinguishes this belief from others, not about the presence of bias in general. That people are not perfectly rational is not a general argument against any belief.

what could a human or evolved meme do when the subject isn't aware the game is on?

Contrived scenarios can surprise any belief, however correct about expected scenarios.

comment by Peter_de_Blanc · 2010-07-13T05:02:34.646Z · LW(p) · GW(p)

There has not yet been a truly benevolent dictator and it would be delusional at best to believe that you will be the first.

This is true approximately to the extent that there has never been a truly benevolent person. Power anti-corrupts.

Replies from: gwern, Douglas_Knight
comment by gwern · 2010-07-13T06:27:00.604Z · LW(p) · GW(p)

I don't understand your second sentence.

Replies from: EStokes
comment by EStokes · 2010-07-13T18:54:06.004Z · LW(p) · GW(p)

I believe that what he's saying is that with power, people show their true colors. Consciously or not, nice people may have been nice because it benefitted them to. The fact that there were too many penalties for not being nice when they didn't have as much power was a "corruption" of their behavior, in a sense. With the power they gained, the penalties didn't matter enough compared to the benefits.

Replies from: Blueberry
comment by Blueberry · 2010-07-13T20:27:43.319Z · LW(p) · GW(p)

Wow, you're really good at interpreting cryptic sentences!

Replies from: xamdam
comment by xamdam · 2010-07-13T23:24:27.008Z · LW(p) · GW(p)

I think "Elementary, dear Watson" was in order ;)

comment by Douglas_Knight · 2010-07-14T07:31:48.317Z · LW(p) · GW(p)

In favor of the "power just allows corrupt behavior" theory, Bueno de Mesquita offers two very nice examples of people who ruled two different states. One is Leopold of Belgium, who simultaneously ruled Belgium and the Congo. The other is Chiang Kai-shek, who sequentially ruled China and Taiwan, allegedly rather differently. (I heard him speak about these examples in this podcast. BdM, Morrow, Silverson, and Smith wrote about Leopold here, gated)

comment by WrongBot · 2010-07-13T21:09:49.643Z · LW(p) · GW(p)

This post is seeing some pretty heavy downvoting, but the opinions I'm seeing in the comments so far seem to be more mixed; I suppose this isn't unusual.

I have a question, then, for people who downvoted this post: what specifically did you dislike about it? This is a data-gathering exercise that will hopefully allow me to identify flaws in my writing and/or thinking and then correct them. Was the argument being made just obviously wrong? Was it insufficiently justified? Did my examples suck? Were there rhetorical tactics that you particularly disliked? Was it structured badly? Are you incredibly annoyed by the formatting errors I can't figure out how to fix?

Those are broadly the sorts of answers I'm looking for. I am specifically not looking for justifications for downvotes; really, all I want is your help in becoming stronger. With luck, I will be able to waste less of your time in the future.

Thanks.

Replies from: mattnewport, jimrandomh, Tyrrell_McAllister, mattnewport
comment by mattnewport · 2010-07-14T02:34:08.116Z · LW(p) · GW(p)

I've just identified something else that was nagging at me about this post: the irony of the author of this post making an argument that closely parallels an argument some thoughtful conservatives make against condoning alternative lifestyles like polyamory.

The essence of that argument is that humans are not sufficiently intelligent, rational or self-controlled to deal with the freedom to pursue their own happiness without the structure and limits imposed by evolved cultural and social norms that keep their baser instincts in check. That cultural norms exist for a reason (a kind of cultural selection for societies with norms that give them a competitive advantage) and that it is dangerous to mess with traditional norms when we don't fully understand why they exist.

I don't really subscribe to the conservative argument (though I have more sympathy for it than the argument made in this post) but it takes a similar form to this argument when it suggests that some things are too dangerous for mere humans to meddle with.

Replies from: WrongBot
comment by WrongBot · 2010-07-14T03:43:46.145Z · LW(p) · GW(p)

While there are some superficial parallels, I don't think the two cases are actually very similar.

Humans don't have a polyamory-bias; if the scientific consensus on neurotransmitters like oxytocin and vasopressin is accurate, it's quite the opposite. Deliberate action in defiance of bias is not dangerous. There's no back door for evolution to exploit.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-07-15T17:07:17.983Z · LW(p) · GW(p)

This just seems unreasoned to me.

Replies from: WrongBot
comment by WrongBot · 2010-07-15T17:16:53.172Z · LW(p) · GW(p)

Erm, how so?

It occurs to me that I should clarify that when I said

Deliberate action in defiance of bias is not dangerous.

I meant that it is not dangerous thinking of the sort I have attempted to describe.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-07-15T18:19:32.317Z · LW(p) · GW(p)

Maybe I just don't see the distinction or the argument that you are making, but I still don't. Do you really think that thinking about polyamory isn't likely to impact values somewhat relative to unquestioned monogamy?

Replies from: WrongBot
comment by WrongBot · 2010-07-15T18:45:29.444Z · LW(p) · GW(p)

Oh, it's quite likely to impact values. But it won't impact your values without some accompanying level of conscious awareness. It's unconscious value shifts that the post is concerned about.

Replies from: None
comment by [deleted] · 2011-02-22T02:18:27.165Z · LW(p) · GW(p)

How can you be so sure? As in, I disagree.

How people value different kinds of sexual behaviours seems to be very strongly influenced by the subconscious.

comment by jimrandomh · 2010-07-13T22:53:29.176Z · LW(p) · GW(p)

I think it would've been better received if some attention was given to defense mechanisms - i.e., rather than phrasing it as some true things being unconditionally bad to know, phrase it as some true things being bad to know unless you have the appropriate prerequisites in place. For example, knowing about differences between races is bad unless you are very good at avoiding confirmation bias, and knowing how to detect errors in reasoning is bad unless you are very good at avoiding motivated cognition.

comment by Tyrrell_McAllister · 2010-07-13T23:05:42.101Z · LW(p) · GW(p)

I have a question, then, for people who downvoted this post: what specifically did you dislike about it? This is a data-gathering exercise that will hopefully allow me to identify flaws in my writing and/or thinking and then correct them. Was the argument being made just obviously wrong? Was it insufficiently justified? Did my examples suck? Were there rhetorical tactics that you particularly disliked? Was it structured badly? Are you incredibly annoyed by the formatting errors I can't figure out how to fix?

I upvoted your post, because I think that you raise a possibility that we should consider. It should not be dismissed out of hand.

However, your examples do kind of suck :). As Sarah pointed out, none of us is likely to become a dictator, and dictators are probably not typical people. So the history of dictators is not great information about how we ought to tend to our epistemological garden. Your claims about how data on group differences in intelligence affect people would be strong evidence if they were backed up by more than anecdote and speculation. As it is, though, it is at least as likely that you are suffering from confirmation bias.

Replies from: WrongBot
comment by WrongBot · 2010-07-14T00:31:55.619Z · LW(p) · GW(p)

Thank you. I should have held off on making the post for a few days and worked out better examples at the very least. I will do better.

comment by mattnewport · 2010-07-13T21:40:43.284Z · LW(p) · GW(p)

Was the argument being made just obviously wrong?

This, primarily. At least obviously wrong by my value system, where believing true things is a core value. To the extent that this is also the value system of Less Wrong as a whole, it seems contrary to the core values of the site without acknowledging the conflict explicitly enough.

I didn't think the examples were very good either. I think the argument is wrong even for value systems that place a lower value on truth than mine and the examples aren't enough to persuade me otherwise.

I also found the (presumably) joke about hunting down and killing anyone who disagrees with you jarring and in rather poor taste. I'm generally in favour of tasteless and offensive jokes but this one just didn't work for me.

Replies from: Vladimir_Nesov, Tyrrell_McAllister
comment by Vladimir_Nesov · 2010-07-13T21:43:10.754Z · LW(p) · GW(p)

At least obviously wrong by my value system where believing true things is a core value.

Beware identity. It seems that a hero shouldn't kill, ever, but sometimes it's the right thing to do. Unless it's your sole value, there will be situations where it should give way.

Replies from: mattnewport
comment by mattnewport · 2010-07-13T21:58:52.404Z · LW(p) · GW(p)

Unless it's your sole value, there will be situations where it should give way.

This seems like it should generally be true but in practice I haven't encountered any plausible examples where I prefer ignorance. This includes a number of hypotheticals where many people claim they would prefer ignorance which leads me to believe the value I place on truth is outside the norm.

Truth / knowledge is a little paradoxical in this sense as well. I believe that killing is generally wrong but there is no paradox in killing in certain situations because it appears to be the right choice. The feedback effect of truth on your decision making / value defining apparatus makes it unlike other core values that might sometimes be abandoned.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-13T22:01:07.671Z · LW(p) · GW(p)

This seems like it should generally be true but in practice I haven't encountered any plausible examples where I prefer ignorance. This includes a number of hypotheticals where many people claim they would prefer ignorance which leads me to believe the value I place on truth is outside the norm.

I agree with this, my objection is to the particular argument you used, not necessarily the implied conclusion.

comment by Tyrrell_McAllister · 2010-07-13T22:59:45.653Z · LW(p) · GW(p)

This, primarily. At least obviously wrong by my value system where believing true things is a core value.

I really don't think that the OP can be called "obviously wrong". For example, your brain is imperfect, so it may be that believing some true things makes it less likely that you will believe other more important true things. Then, even if your core value is to believe true things, you are going to want to be careful about letting the dangerous beliefs into your head.

And the circularity that WrongBot and Vladimir Nesov have pointed out rears its head here, too. Suppose that the possibility that I pose above is true. Then, if you knew this, it might undermine the extent to which you hold believing true things to be a core value. That is precisely the kind of unwanted utility-function change that Wrongbot is warning us about.

It's probably too pessimistic to say that you could never believe the dangerous true things. But it seems reasonably possible that some true beliefs are too dangerous unless you are very careful about the way in which you come to believe them. It may be unwise to just charge in and absorb true facts willy-nilly.

Here's another way to come at WrongBot's argument. It's obvious that we sometimes should keep secrets. Sometimes more harm than good would result if someone else knew something that we know. It's not obvious, but it is at least plausible, that the "harm" could be that the other person's utility function would change in a way that we don't want. At least, this is certainly not obviously wrong. The final step in the argument is then to acknowledge that the "other person" might be the part of yourself over which you do not have perfect control — which is, after all, most of you.

Replies from: mattnewport
comment by mattnewport · 2010-07-14T00:02:00.399Z · LW(p) · GW(p)

It's obvious that we sometimes should keep secrets. Sometimes more harm than good would result if someone else knew something that we know.

I believe some other people's reports that there are things they would prefer not to know and would be inclined to honor their preference if I knew such a secret but I can't think of any examples of such secrets for myself. In almost all cases I can think of I would want to be informed of any true information that was being withheld from me. The only possible exceptions are 'pleasant surprises' that are being kept secret on a strictly time-limited basis to enhance enjoyment (surprise gifts, parties, etc.) but I think these are not really what we're talking about.

I can certainly think of many examples of secrets that people keep secret out of self-interest and attempt to justify by claiming they are doing it in the best interests of the ignorant party. In most such cases the 'more harm than good' would accrue to the party requesting the keeping of the secret rather than the party from whom the secret is being withheld. Sometimes keeping such secrets might be the 'right thing' morally (the Nazi at the door looking for fugitives) but this is not because you are acting in the interests of the party from whom you are keeping information.

Replies from: Tyrrell_McAllister, Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-07-14T00:50:06.597Z · LW(p) · GW(p)

In almost all cases I can think of I would want to be informed of any true information that was being withheld from me.

Maybe this is an example:

I was once working hard to meet a deadline. Then I saw in my e-mail that I'd just received the referee reports for a journal article that I'd submitted. Even when a referee report recommends acceptance, it will almost always request changes, however minor. I knew that if I looked at the reports, I would feel a very strong pull to work on whatever was in them, which would probably take at least several hours. Even if I resisted this pull, resistance alone would be a major tax on my attention. My brain, of its own accord, would grab mental CPU cycles from my current project to compose responses to whatever the referees said. I decided that I couldn't spare this distraction before I met my deadline. So I left the reports unread until I'd completed my project.

In short, I kept myself ignorant because I expected that knowledge of the reports' contents would induce me to pursue the wrong actions.

Replies from: mattnewport
comment by mattnewport · 2010-07-14T01:05:06.996Z · LW(p) · GW(p)

This is an example of a pretty different kind of thing to what WrongBot is talking about. It's a hack for rationing attention or a technique for avoiding distraction and keeping focus for a period of time. You read the email once your current time-critical priority was dealt with, you didn't permanently delete it. Such tactics can be useful and I use them myself. It is quite different from permanently avoiding some information for fear of permanent corruption of your brain.

I'm a little surprised that you would have thought that this example fell into the same class of things as WrongBot or I were talking about. Perhaps we need to define what kinds of 'dangerous thought' we are talking about a little more clearly. I'm rather bemused that people are conflating this kind of avoidance of viscerally unpleasant experiences with 'dangerous thoughts' as well. It seems others are interpreting the scope of the article massively more broadly than I am.

Replies from: ABranco, Tyrrell_McAllister
comment by ABranco · 2010-07-19T04:55:32.438Z · LW(p) · GW(p)

Or putting it differently:

  • One thing is to operationally avoid gaining certain data at a certain moment in order to function better overall, because we need to keep our attention focused.

  • Another thing is to strategically avoid gaining certain kinds of information that could possibly lead us astray.

I'd guess most people here agree with the kind of "self-deception" that the former entails. And it seems that the post is arguing for this kind of "self-deception" in the latter case as well, although there isn't as much consensus — some people seem to welcome any kind of truth whatsoever, at any time.

However... It seems to me now that, frankly, both cases are incredibly similar! So I may be conflating them, too.

The major difference seems to be the scale adopted: checking your email is an information hazard at that moment, and you want to postpone it for a couple of hours. Knowing about certain truths is an information hazard at this moment, and you want to postpone it for a couple of... decades. If ever. When your brain is strong enough to handle it smoothly.

It all boils down to knowing we are not robots, that our brains are a kludge, and that certain stimuli (however real or true) are undesired.

comment by Tyrrell_McAllister · 2010-07-14T01:32:53.237Z · LW(p) · GW(p)

This is an example of a pretty different kind of thing to what WrongBot is talking about.

I think that you can just twiddle some parameters with my example to see something more like WrongBot's examples. My example had a known deadline, after which I knew it would be safe to read the reports. But suppose that I didn't know exactly when it would be safe to read the reports. My current project is the sort of thing where I don't currently know when I will have done enough. I don't yet know what the conditions for success are, so I don't yet know what I need to do to create safe conditions to read the reports. It is possible that it will never be safe to read the reports, that I will never be able to afford the distraction of suppressing my brain's desire to compose responses.

My understanding is that WrongBot views group-intelligence differences analogously. The argument is that it's not safe to learn such truths now, and we don't yet know what we need to do to create safe conditions for learning these truths. Maybe we will never find such conditions. At any rate, we should be very careful about exposing our brains to these truths before we've figured out the safe conditions. That is my reading of the argument.

Replies from: WrongBot, HughRistik
comment by WrongBot · 2010-07-14T03:35:46.188Z · LW(p) · GW(p)

More or less. I'm generally sufficiently optimistic about the future that I don't think that there are kinds of true knowledge that will continue to be dangerous indefinitely; I'm just trying to highlight things I think might not be safe right now, when we're all stuck doing serious thinking with opaquely-designed sacks of meat.

comment by HughRistik · 2010-07-14T05:06:11.073Z · LW(p) · GW(p)

Like Matt, I don't think your example does the same thing as WrongBot's, even with your twiddling.

WrongBot doesn't want the "dangerous thoughts" to influence him to revise his beliefs and values. That wasn't the case for you: you didn't want to avoid revising your beliefs about your paper; you just didn't want to deal with the cognitive distraction of it during the short term. If you avoided reading your reports because you wanted to avoid believing that your article needed any improvement, then I think your situation would be more analogous to WrongBot's.

The argument is that it's not safe to learn such truths now, and we don't yet know what we need to do to create safe conditions for learning these truths. Maybe we will never find such conditions. At any rate, we should be very careful about exposing our brains to these truths before we've figured out the safe conditions.

But there's another difference here: when you decided to not expose yourself to that knowledge, you knew at the time when the safe conditions would occur, and that those conditions would occur very soon. That's not the case for WrongBot, who has sworn off certain kinds of knowledge indefinitely.

Putting oneself at risk of error for a short and capped time frame is much different from putting oneself at risk of error indefinitely.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-07-14T06:09:31.487Z · LW(p) · GW(p)

WrongBot doesn't want the "dangerous thoughts" to influence him to revise his beliefs and values. That wasn't the case for you: you didn't want to avoid revising your beliefs about your paper; you just didn't want to deal with the cognitive distraction of it during the short term.

The beliefs that I didn't want to revise were my beliefs about the contents of the reports. Before I read them, my beliefs about their contents were general and vague. Were I to read the reports, I would have specific knowledge about what they said. My worry was that this would revise my values: after gaining that specific knowledge, my brain would excessively value replying to the reports over working on my current project. Despite my intention to focus solely on my current project, my brain would allocate significant resources to composing responses to what I'd read in the reports.

But there's another difference here: when you decided to not expose yourself to that knowledge, you knew at the time when the safe conditions would occur, and that those conditions would occur very soon.

But in the "twiddled" version, I don't know when the safe conditions will occur . . .

That's not the case for WrongBot, who has sworn off certain kinds of knowledge indefinitely.

To be fair, WrongBot thinks that we will be able to learn this knowledge eventually. We just shouldn't take it as obvious that we know what the safe conditions are yet.

Replies from: HughRistik
comment by HughRistik · 2010-07-14T06:52:45.060Z · LW(p) · GW(p)

I still say that there is a difference between what you and WrongBot are doing, even if you're successfully shooting down my attempts to articulate it. I might need a few more tries to be able to correctly articulate that intuition.

My worry was that this would revise my values: after gaining that specific knowledge, my brain would excessively value replying to the reports over working on my current project.

These are not the same types of values. You were worried about your values about priorities changing, while under time pressure. WrongBot is worried about his moral values changing about how he treats certain groups of people.

But in the "twiddled" version, I don't know when the safe conditions will occur . . .

True, but there wasn't the same magnitude or type of uncertainty, right? You knew that you would probably be able to read your reports after your deadline...? All predictions about the future are uncertain, but not all types of uncertainty are created equal.

I would be interested to hear your opinion of a little thought experiment. What if I was a creationist, and you recommended me a book debunking creationism? I say that I won't read it because it might change my values, at least not until the conditions are safe for me. If I say that I can't read it this week because I have a deadline, but maybe next week, you'll probably give me a pass. But what if I put off reading it indefinitely? Is that rational?

It seems that since we recognize that rationalists are human, we can and should give them a pass on scrutinizing certain thoughts or investigating certain ideas when they are under time pressure or emotional pressure in the short term, like in your example. But how long can one dodge inquiry in a certain area before one's rationalist creds become suspect?

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-07-14T17:54:15.606Z · LW(p) · GW(p)

My worry was that this would revise my values: after gaining that specific knowledge, my brain would excessively value replying to the reports over working on my current project.

These are not the same types of values. You were worried about your values about priorities changing, while under time pressure. WrongBot is worried about his moral values changing about how he treats certain groups of people.

I'm having trouble seeing this distinction. What if I had a moral obligation to do as well as possible on my current project, because people were depending on me, say? My concern would be that, if I read the reports, I would feel a pull to act immorally. I might even rationalize away the immorality under the influence of this pull. In effect, I would act according to different moral values. Would that make the situation more analogous in your view, or would something still be missing?

I'm getting the sense that the problem with my example is that it has nothing to do with political correctness. Is it key for you that WrongBot wants to keep information out of his/her brain because of political correctness specifically?

But in the "twiddled" version, I don't know when the safe conditions will occur . . .

True, but there wasn't the same magnitude or type of uncertainty, right? You knew that you would probably be able to read your reports after your deadline...? All predictions about the future are uncertain, but not all types of uncertainty are created equal.

I called it a "twiddled" version because I was thinking of the uncertainty as a continuous parameter that I could set to a wide spectrum of values. In the actual situation, the dial was pegged at "almost complete certainty". But I can imagine situations where I'm very uncertain. It looks like part of your problem with this is that such a quantitative change amounts to a qualitative change in your view. Is that right?

I would be interested to hear your opinion of a little thought experiment. What if I was a creationist, and you recommend me a book debunking creationism. I say that I won't read it because it might change my values, at least not only the conditions are safe for me. If I say that I can't read it this week because I have a deadline, but maybe next week, you'll probably give me a pass. But what if I put off reading it indefinitely? Is that rational?

I take it that your concern would be that losing creationism would change your moral values in a dangerous way. Whether you are being rational then depends on what "put off reading it indefinitely" means. I would say that you are being rational to avoid the book for now only if you are making a good-faith effort to determine rationally the conditions under which it would be safe to read the book, with the intention of reading the book once you've found sufficiently safe conditions.

Replies from: mattnewport
comment by mattnewport · 2010-07-14T18:54:20.882Z · LW(p) · GW(p)

Part of the problem I'm having with your example is my perception of the magnitude of the gap between what you are talking about and WrongBot's examples. While they share certain similarities it appears roughly equivalent to a discussion about losing your entire life savings which you are comparing to the time you dropped a dime down the back of the sofa.

Sometimes a sufficiently large difference of magnitude can be treated for most purposes as a difference in kind.

Replies from: HughRistik, Tyrrell_McAllister
comment by HughRistik · 2010-07-14T19:01:44.970Z · LW(p) · GW(p)

Quantity has a quality all of its own.

comment by Tyrrell_McAllister · 2010-07-14T20:11:19.414Z · LW(p) · GW(p)

Part of the problem I'm having with your example is my perception of the magnitude of the gap between what you are talking about and WrongBot's examples.

What is the axis along which the gap lies? Is it the degree of uncertainty about when it will be safe to learn the dangerous knowledge?

Replies from: mattnewport, HughRistik, WrongBot
comment by mattnewport · 2010-07-14T20:25:11.637Z · LW(p) · GW(p)

Multiple axes:

  • Degree of uncertainty and magnitude of duration of the length of time before it will be 'safe'.
  • Degree of effort involved in avoidance (temporarily holding off on reading a specific email vs. actively avoiding certain knowledge and filtering all information for a long and unspecified duration).
  • Severity of consequences (delayed or somewhat sub-standard performance on a near term project deadline vs. fundamental change or damage to your core values)
  • Scope of filtering (avoiding detailed contents of a specific email with a known and clearly delineated area of significance vs. general avoidance of whole areas of knowledge where you may not even have a good idea of what knowledge you may be missing out on).
  • Mental resources emphasized (short term attentional resources vs. deeply considered core beliefs and modes of thought and high level knowledge and understanding).
comment by HughRistik · 2010-07-14T20:28:19.176Z · LW(p) · GW(p)

That's part of it, and also how far into the future one thinks that might occur.

comment by WrongBot · 2010-07-14T20:18:32.198Z · LW(p) · GW(p)

In my perception, the gap is less about certainty and more about timescale; I'd draw a line between "in a normal human lifetime" and "when I have a better brain" as the two qualitatively different timescales that you're talking about.

comment by Tyrrell_McAllister · 2010-07-14T00:35:31.323Z · LW(p) · GW(p)

I can certainly think of many examples of secrets that people keep secret out of self-interest and attempt to justify by claiming they are doing it in the best interests of the ignorant party. In most such cases the 'more harm than good' would accrue to the party requesting the keeping of the secret rather than the party from whom the secret is being withheld.

But this is the way to think of WrongBot's claim. The conscious you, the part over which you have deliberate control, is but a small part of the goal-seeking activity that goes on in your brain. Some of that goal-seeking activity is guided by interests that aren't really yours. Sometimes you ought to ignore the interests of these other agents in your brain. There is some possibility that you should sometimes do this by keeping information from reaching those other agents, even though this means keeping the information from yourself as well.

comment by cousin_it · 2010-07-13T07:36:03.901Z · LW(p) · GW(p)

Your examples of "identity politics" and "power corrupts" don't seem to illustrate "dangerous knowledge". They are more like dangerous decisions. Am I missing the point?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-13T08:20:46.048Z · LW(p) · GW(p)

Situations creating modes of thought that make your corrupted hardware turn you into a bad person.

comment by xamdam · 2010-07-15T21:13:13.840Z · LW(p) · GW(p)

Come to think of it, a related argument was made, poetically, in Watchmen: Dr. Manhattan knew everything, and it clearly changed his utility function (he became less human); he also mentioned appreciating not knowing the future when Adrian blocked it with tachyons. Poetry, but something to think about.

Replies from: ABranco
comment by ABranco · 2010-07-19T02:21:16.556Z · LW(p) · GW(p)

He referred to something along the lines of "the sensation of being surprised", if I recall correctly. Would you choose to know everything, if you could, but then never have this sensation again?

Replies from: None
comment by [deleted] · 2013-07-04T12:27:12.031Z · LW(p) · GW(p)

Would you choose to never get sick, if you could, but then never have this sensation (of getting healthy) again?

comment by Unknowns · 2010-07-13T18:31:08.116Z · LW(p) · GW(p)

This is completely wrong. You might as well tell a baby to avoid learning language, since this will change its utility function: it will begin to have an adult's utility function instead of a baby's.

comment by Psychohistorian · 2010-07-13T05:51:16.761Z · LW(p) · GW(p)

Not to evoke a recursive nightmare, but some utility function alterations appear to be strictly desirable.

As an obvious example, if I were on a diet and I could rewrite my utility function such that the utilities assigned to consuming spinach and cheesecake were swapped, I see no harm in making that edit. One could argue that my second-order (and all higher-order) utility functions should be collapsed into my first-order one, such that this would not really change my meta-utility function, but this issue just highlights the futility of trying to cram my complex, conflicting, and oft-inconsistent desires into a utility function.

This does raise an interesting issue: if I'm a strictly selfish utilitarian, do I not want my utility function to be that which will attain the highest expected utility? Selfishness is not necessary; it just makes the question much simpler.

Replies from: WrongBot, NancyLebovitz, orthonormal
comment by WrongBot · 2010-07-13T15:13:12.309Z · LW(p) · GW(p)

I wouldn't claim that any human is actually able to describe their own utility function; they're much too complex and riddled with strange exceptions and pieces of craziness like hyperbolic discounting.

I also think that there's some confusion surrounding the whole idea of utility functions in reality, which I should have been more explicit about. Your utility function is just a description of what you want/value; it is not explicitly about maximizing happiness. For example, I don't want to murder people, even under circumstances where it would make me very happy to do so. For this reason, I would do everything within my power to avoid taking a pill that would change my preferences such that I would then generally want to murder people; this is the murder pill I mentioned.

As for swapping the utilities of spinach and cheesecake, I think the only way that makes sense to do so would be to change how you perceive their respective tastes, which isn't a change to your utility function at all. You still want to eat food that tastes good; changing that would have much broader and less predictable consequences.

This does raise an interesting issue: if I'm a strictly selfish utilitarian, do I not want my utility function to be that which will attain the highest expected utility? Selfishness is not necessary; it just makes the question much simpler.

Only if your current utility function is "maximize expected utility." (It isn't.)

comment by NancyLebovitz · 2010-07-13T12:10:53.752Z · LW(p) · GW(p)

Anorexia could be viewed as an excessive ability to rewrite utility functions about food.

If you don't have the ability to include context, the biological blind god may serve you better than the memetic blind god.

comment by orthonormal · 2010-07-13T14:44:05.276Z · LW(p) · GW(p)

This does raise an interesting issue: if I'm a strictly selfish utilitarian, do I not want my utility function to be that which will attain the highest expected utility?

This is a particular form of wireheading; fortunately, for evolutionary reasons we're not able to do very much of it without advanced technology.

Replies from: Vladimir_Nesov, red75
comment by Vladimir_Nesov · 2010-07-13T18:38:39.959Z · LW(p) · GW(p)

This does raise an interesting issue: if I'm a strictly selfish utilitarian, do I not want my utility function to be that which will attain the highest expected utility?

This is a particular form of wireheading

I'd say it's rather a form of conceptual confusion: you can't change a concept ("change" is itself a "timeful" concept, meaningful only as a property within structures which are processes in the appropriate sense). But it's plausible that creating agents with slightly different explicit preference will result in a better outcome than, all else equal, if you give those agents your own preference. Of course, you'd probably need to be a superintelligence to correctly make decisions like this, at which point creation of agents with given preference might cease to be a natural concept.

comment by red75 · 2010-07-13T19:48:22.253Z · LW(p) · GW(p)

I am afraid that advanced technology is not necessary. Literal wireheading.

comment by steven0461 · 2010-07-14T17:56:22.231Z · LW(p) · GW(p)

If you're being held back by worries about your values changing, you can always try cultivating a general habit of reverting to values held by earlier selves when doing so is relatively easy. I call it "reactionary self-help".

Replies from: PhilGoetz, Vladimir_Nesov, MichaelVassar, Kingreaper
comment by PhilGoetz · 2010-07-14T19:18:08.296Z · LW(p) · GW(p)

I don't think that makes sense. Changing back is no more desirable than any other change.

Once you've changed, you've changed. Changing your utility function is undesirable. But it isn't bad. You strive to avoid it; but once it's happened, you're glad it did.

Replies from: steven0461
comment by steven0461 · 2010-07-14T20:36:06.381Z · LW(p) · GW(p)

Right; that's what happens by default. But if you find that, because your future self will want to keep its new values, you're overly reluctant to take useful actions that change your values as a side effect, you might want to precommit to roll back certain changes; or, if you can't keep track of all the side effects, it's conceivable you'd want to turn this into a general habit. I could see this being either a good or a bad idea on net.

Replies from: WrongBot
comment by WrongBot · 2010-07-17T00:06:31.100Z · LW(p) · GW(p)

I don't think you can do this. Your future self, not sharing your values, will have no reason to honor your present self's precommitment.

Replies from: mattnewport
comment by mattnewport · 2010-07-17T00:16:13.408Z · LW(p) · GW(p)

Precommitment implies making it expensive or impossible for your future self not to honor your commitment.

Replies from: WrongBot
comment by WrongBot · 2010-07-17T00:46:39.338Z · LW(p) · GW(p)

Errr, how? I am familiar with the practice of precommitment, but most of the ways of creating one for oneself seem to rely on consequences not preferred by one's values. If one's values have changed, such a precommitment isn't very helpful.

Replies from: mattnewport
comment by mattnewport · 2010-07-17T02:12:49.305Z · LW(p) · GW(p)

In the context of the thread we're not talking about all your values changing, just some subset. Base the precommitment around a value you do not expect to change. Money is a reliable fallback due to its fungibility.

Replies from: WrongBot
comment by WrongBot · 2010-07-17T02:23:41.874Z · LW(p) · GW(p)

This isn't as reliable as you think. It isn't often that people change how much importance they attach to money, but it isn't rare, either. Either way, is there a good way to guarantee that you'll lose access to money when your values change? That's tough for an external party to verify when you have an incentive to lie.

Replies from: mattnewport
comment by mattnewport · 2010-07-17T09:59:40.261Z · LW(p) · GW(p)

This is more reliable than you think. We live in a world where money is convertible to further a very wide range of values.

It doesn't have to be money. You just need a value that you have no reason to expect will change significantly as a result of exposure to particular 'dangerous thoughts'. Can you honestly say that you expect that exposing yourself to information about sex differences in intelligence will radically alter the relative value of money to you, though?

Escrow is the general name for a good way to guarantee that your future self will be bound by your precommitment. Depending on how much money is involved this could be as informal as asking a trusted friend who shares your current values to hold some money for a specified period and promise to donate it to a charity promoting the value you fear may be at risk if they judge you to have abandoned that value.

The whole point of precommitment is that you have leverage over your future self. You can make arrangements whose cost and complexity go up to the limit of how much your current self values the matter of concern, and impose a much greater penalty on your future self in case of breach of contract.
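
As a rough sketch of the escrow arrangement described above (the names, amounts, and settlement mechanism here are hypothetical; in practice the "escrow agent" is the trusted friend and the check is their judgment):

```python
# Hypothetical sketch of an escrow-backed precommitment; not a real service or API.
from dataclasses import dataclass

@dataclass
class Precommitment:
    stake: float        # money handed to the escrow agent (the trusted friend)
    charity: str        # where the stake goes if the value is judged abandoned
    period_days: int    # length of the commitment period

    def settle(self, value_abandoned: bool) -> str:
        """The escrow agent's decision at the end of the period."""
        if value_abandoned:
            return f"Donate ${self.stake:.2f} to {self.charity}"
        return f"Return ${self.stake:.2f} to the committer"

# Example: stake $500 for six months against abandoning the value in question.
commitment = Precommitment(stake=500.0,
                           charity="a charity promoting the value at risk",
                           period_days=180)
print(commitment.settle(value_abandoned=False))  # Return $500.00 to the committer
```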

Ultimately I don't believe this is your true rejection. If you wished, you could find ways to make credible precommitments to your current values and then undergo controlled exposure to 'dangerous thoughts', but you choose not to. That may be a valid choice from a cost/benefit analysis by your current values, but it is not because the alternative is impossible; it is just too expensive for your tastes.

comment by Vladimir_Nesov · 2010-07-16T22:36:17.959Z · LW(p) · GW(p)

It's important to distinguish changes in values from updating of knowledge about values in response to moral arguments. The latter emphatically shouldn't be opposed; otherwise you become morally stupid.

Replies from: WrongBot
comment by WrongBot · 2010-07-17T00:09:25.729Z · LW(p) · GW(p)

That sounds like it would be isomorphic to always encouraging the updating of instrumental values, but not terminal ones, which strikes me as an unquestionably good idea in all cases where stupidity is not a terminal value.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-17T10:54:04.270Z · LW(p) · GW(p)

You don't update values, you update knowledge about values. Knowledge about terminal values might be as incomplete as knowledge about instrumental values. The difference is that with instrumental values you usually start from indifference and update, while with "terminal" values you start out with some idea of preference.

Replies from: red75
comment by red75 · 2010-07-17T11:45:29.982Z · LW(p) · GW(p)

What about newborns? If they have the same terminal values as adults, then the Kolmogorov complexity of terminal values should not exceed that of the genome. Thus either a) terminal values are updated, b) terminal values are not very complex, or c) knowledge about terminal values is part of terminal values, which implies a).
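
Written out, the compression step in that argument is (a sketch, assuming the newborn's terminal values V are computable from the genome G alone by some fixed developmental program):

K(V) ≤ K(G) + O(1)

so if adult terminal values were already fully fixed at birth, their Kolmogorov complexity would be bounded by that of the genome.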

comment by MichaelVassar · 2010-07-15T16:46:19.951Z · LW(p) · GW(p)

You can also try to engage in trade with your future selves, which most good formulations of CEV or its successors should probably enable.

comment by Kingreaper · 2010-07-16T22:29:20.126Z · LW(p) · GW(p)

I don't believe I could revert back easily under normal circumstances, so I can't see this advice actually being fruitful unless that fact about me is unusual.

comment by Richard_Kennaway · 2010-07-13T07:48:49.835Z · LW(p) · GW(p)

(I am making a distinction here between the parts of your brain that you have access to and can introspect about, which for lack of better terms I call “you” or “your consciousness”, and the vast majority of your brain, to which you have no such access or awareness, which I call “your brain.” This is an emotional manipulation, which you are now explicitly aware of. Does that negate its effect? Can it?)

You seem to think you know what the effect is. My immediate thought on reading "it will decide the output, not you" was "oh dear, dualism again", not "zomg im the prisoner of this alien machine!!!!", which seemed to be the effect you were going for.

Anyway, all I see here is defeatist scaremongering.

comment by xamdam · 2010-07-14T09:58:27.201Z · LW(p) · GW(p)

By the way, some people took a similar position to yours in

What Is Your Dangerous Idea?: Today's Leading Thinkers on the Unthinkable

comment by Carinthium · 2010-11-23T09:14:55.052Z · LW(p) · GW(p)

Identity Politics: Agree; good point.

Power Corrupts: Irrelevant to those LWers who realistically will never gain large amounts of power and status. For those who do, it is a matter of the dangers of increasing control, not of avoiding dangerous thoughts.

On the comment about opening the door to bigotry: even if bigotry has bad effects, given the limited amount of harm an individual can do and appropriate conscious suppression of those effects, isn't it worth it to prevent self-delusion?

comment by Ignoreme · 2010-07-23T09:10:18.133Z · LW(p) · GW(p)

Don't read this article. It's way too dangerous:

comment by steven0461 · 2010-07-14T18:08:13.650Z · LW(p) · GW(p)

If you're going to intentionally choose false beliefs, you should at least be careful to also install an aversion to using these beliefs to decide other questions you care about such as which intellectual institutions to trust, and an aversion to passing these beliefs on to other people. It's one thing to nuke your brain and quite another to fail to encase it in lead afterward.

comment by Thomas · 2010-07-15T08:23:59.866Z · LW(p) · GW(p)

Your brain cannot be trusted. It is not safe. You must be careful with what you put into it, because it will decide the output, not you.

This "it" may, or even should, relate to the idea itself. The same idea, the same meme, put into a healthy rational brains anywhere, will decide the same! Since the brains are just a rational machine always doing the best possible thing.

It is the input that decides the output. The machine has no other (irrational) choice than to process the input the best way it can and then spit out the output.

It is not only my calculator that outputs "12" for the input "5+7"; every unbroken calculator in the world outputs the same.

So again: the input "decides" what the output should be, not the computer (the brain).

Replies from: WrongBot
comment by WrongBot · 2010-07-15T10:12:19.528Z · LW(p) · GW(p)

It is not only my calculator that outputs "12" for the input "5+7"; every unbroken calculator in the world outputs the same.

This would also be true of unbroken brains, if there were any.

Replies from: Thomas
comment by Thomas · 2010-07-15T11:40:29.624Z · LW(p) · GW(p)

Mostly they are unbroken, at least most of the time. They do perform their functions the best way they can.

And this is my point. They can't "decide" the output. The input "decides" the output.

Replies from: WrongBot, red75
comment by WrongBot · 2010-07-15T15:19:07.026Z · LW(p) · GW(p)

So far as I can tell you're agreeing with me, or at least arguing at a right angle to what this post was intended to discuss. Whether it's the brain or the input that does the deciding, there are some combinations of brain and input that produce results that may be contrary to one's conscious preferences.

The fact that all brains work in roughly the same way doesn't change the fact that they are not the ideal substrate for rational cognition in a modern environment.

Replies from: Thomas
comment by Thomas · 2010-07-15T16:18:46.124Z · LW(p) · GW(p)

So you say you can't trust your brain.

But you can trust it in this debate?

Replies from: WrongBot, ciphergoth
comment by WrongBot · 2010-07-15T16:52:27.103Z · LW(p) · GW(p)

There is no obvious way in which genes that would cause my brain to deceive me in this sort of case would be selected for.

(If there is such a bias that applies here, I would lower the estimated accuracy of my argument accordingly.)

comment by Paul Crowley (ciphergoth) · 2010-07-15T16:38:52.727Z · LW(p) · GW(p)

We have to repair the ship while at sea. What alternative means of thinking about it do you propose?

Replies from: Thomas
comment by Thomas · 2010-07-15T16:42:57.771Z · LW(p) · GW(p)

Don't doubt your brain too much. If you do, you can't reason. That your brain is okay is a necessary premise for any rational thinking.

Your premise could be wrong, but then you are doomed anyway.

Replies from: Vladimir_Nesov, ciphergoth
comment by Vladimir_Nesov · 2010-07-15T17:00:12.747Z · LW(p) · GW(p)

There is no "default" to fall back to when you "distrust your brain". Any act of "distrust" must be accompanied by a specific suggestion for improvement, which, where available, should surely be taken.

Replies from: Thomas
comment by Thomas · 2010-07-15T17:04:59.588Z · LW(p) · GW(p)

When you distrust your brain, it is an internal affair. You can only hope that you will "follow the good guy in you" in such an event.

comment by Paul Crowley (ciphergoth) · 2010-07-15T16:47:24.617Z · LW(p) · GW(p)

Look up "Neurath's Boat" (sometimes "Neurath's Ship").

We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom. Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as support. In this way, by using the old beams and driftwood the ship can be shaped entirely anew, but only by gradual reconstruction.

Replies from: Thomas
comment by Thomas · 2010-07-15T16:53:45.001Z · LW(p) · GW(p)

Right now, you can't repair your brains very much. We have no good access to them. You have to trust them. At least for now.

Which doesn't mean they are right. It only means that this is the best bet you can make. At least for now.

Replies from: JanetK
comment by JanetK · 2010-07-15T17:10:41.226Z · LW(p) · GW(p)

I believe we have more control than we think we have. I call it 'mind maintenance': if you think about and very carefully try to analyze problems, biases, personal traps, etc., it is possible to make a difference to how you approach things in the future. As long as you feel you are separate from your brain/mind, or have some sort of magical free will, or mistrust your own thinking, it will be very difficult to do mind maintenance. There is not a you and your brain. There is one brain, and within it there is widespread awareness of some of its processes; together, that is you.

Replies from: Thomas
comment by Thomas · 2010-07-15T17:24:00.453Z · LW(p) · GW(p)

I believe we have more control than we think we have.

I believe that too. But only if the underlying mental process is sound. Then you will handle the inputs properly and the outputs will be satisfactory.

Even in the case of optical illusions, you can understand the context and everything will go smoothly.

But if you doubt your brain's abilities in general, there is nothing you can do.

comment by red75 · 2010-07-15T13:31:01.155Z · LW(p) · GW(p)

Your point contains 0 bits of information about the brain. Everything can be treated as an object whose output is a function of its history of inputs.

Replies from: Thomas
comment by Thomas · 2010-07-15T13:37:09.583Z · LW(p) · GW(p)

I tried to inform you about inputs, not so much about brains. My inputs decide your outputs. Your brain is what it is.

Replies from: red75
comment by red75 · 2010-07-15T14:10:55.466Z · LW(p) · GW(p)

Intractable. Brain inputs are partially dependent on brain outputs. Thus, to deny the brain any participation in the causal chain, you would need to exclude all inputs from inside the future light cone originating at the space-time point of the brain's formation. This would render reasoning about brain functions nearly impossible.

Replies from: Thomas
comment by Thomas · 2010-07-15T14:49:13.801Z · LW(p) · GW(p)

To rephrase myself:

The set of all possible inputs is larger and much more diverse than the set of all human brains.

The vast majority of inputs will be processed the same way by most brains.

The output is much more dependent on the input than on the brain.

See this now?

Replies from: red75
comment by red75 · 2010-07-15T16:25:08.989Z · LW(p) · GW(p)

The set of all possible inputs is larger and much more diverse than the set of all human brains.

Do you mean the set of all possible sequences of inputs? A single sample of input (dominated by visual perception: ~10^8 cone and rod cells * ~10 bits per cell = 10^9 bits) is much less diverse than a brain that contains ~10^14 synapses.

The vast majority of inputs will be processed the same way by most brains.

If you are talking about the sequence of all inputs from birth to the current moment, including genetic information, then yes, the sequence uniquely defines brain structure and output (and the sequence is partially dependent on the brain's previous outputs). But this means that the brain participates in its own development, and you can't say that inputs are all we need, since those inputs depend on the brain's reactions (a brain in a vat is not a counterexample).

If you are talking about some recent part of the input sequence, then I can't see a basis for your assertion. If we have an input space of N elements, an output space of M elements, where N >> M, and M brains with different mappings from input to output, then a counterexample is that the i-th brain always outputs the i-th output.
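
To make that counterexample concrete (a sketch; the sizes are arbitrary): give each of the M brains a constant mapping from the N inputs to its own output, and the output then depends entirely on which brain is doing the processing, not on the input.

```python
# Sketch of the counterexample above: M "brains", each a constant mapping from a
# large input space (N elements) to a small output space (M elements). The sizes
# are arbitrary; the point is that here the output is a function of the brain,
# not of the input.
N = 1_000_000   # size of the input space (N >> M)
M = 5           # size of the output space, and the number of brains

def make_brain(i):
    """The i-th brain ignores its input and always outputs i."""
    return lambda _input: i

brains = [make_brain(i) for i in range(M)]

# The same brain gives the same answer to wildly different inputs...
assert all(brains[3](x) == 3 for x in (0, 42, N - 1))
# ...while different brains give different answers to the same input.
assert [b(42) for b in brains] == [0, 1, 2, 3, 4]
```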

Here is a relevant article, "Thou Art Physics", with relevant links.


Sorry for the divergence from the main topic, but I find it inappropriate and dangerous when the brain is seen not as a "substrate" of a conscious agent but as a toy of the laws of physics/circumstances, especially because the latter looks like the rational point of view.