Posts

Comments

Comment by hhadzimu on Rationality Quotes December 2014 · 2014-12-03T23:13:25.710Z · LW · GW

Damon Runyon clearly has not considered point spreads.

Comment by hhadzimu on My concerns about the term 'rationalist' · 2009-06-06T00:21:18.422Z · LW · GW

"Many beliefs about procedure are exactly the opposite-- take believing that truth can be taken from the Bible. That procedure is self-justifying and there is no way to dispute it from within the assumptions of the procedure."

That's my point about rationality: the way I think about it, it would catch its own contradictions. In essence, a rationalist would recognize it if rationalists didn't "win." As a result, committing yourself to rationality doesn't actually commit you to an outcome, as following a scripture perhaps would.

The bigger problem, I believe, is that most professed commitment to a procedure is superficial: most people simply bend the procedure toward a preferred outcome. "The devil can cite Scripture for his purpose." The key, of course, is following the procedure accurately, and this is the community that'll keep you in line if you try to bend the procedure to your preferred conclusion.

Comment by hhadzimu on My concerns about the term 'rationalist' · 2009-06-05T23:45:47.058Z · LW · GW

I'm inclined to agree on your latter point: looking at the results of the survey, it seems like it would be easy to go from 'rationalist' as a procedural label to 'rationalist' as shorthand for 'atheist male computer programmer using Bayesian rules.' Of course, that's a common bias, and I think this community is as ready as any to fight it.

As for the former, I tried to address that by pointing out that rationalism means that we've already decided that updating priors is more effective than prayer. That said, I have a perhaps idealistic view of rationality, in that I think it's flexible enough to destroy itself, if necessary. I'd like to think that if we learned that our way of reasoning is inferior, we'd readily abandon it. A little too idealistic, perhaps.

That said, I will say that I find purely procedural labels less dangerous than substantive ones. You've alluded to the danger of conflating it with substantive labels like atheism, but that's a separate danger worth looking out for.

Comment by hhadzimu on My concerns about the term 'rationalist' · 2009-06-05T21:04:08.098Z · LW · GW

I think the danger here is far smaller than people are making it out to be. There is a major difference between the label "rationalist" and most other identities as Paul Graham refers to them. The difference is that "rationalist" is a procedural label; most identities are at least partially substantive, using procedural/substantive in the sense that the legal system does.

"Rationalist," which I agree is an inevitable shorthand that emerges when the topic of overcoming bias is discussed frequently, is exclusively a procedural label: such a person is expected to make decisions and seek truth using a certain process. This process includes Bayesian updating of priors based on evidence, etc. However, such a label doesn't commit the rationalist to any particular conclusion ex ante: the rationalist doesn't have to be atheist or theist, or accept any other fact as true and virtually unassailable. He's merely committed to the process of arriving at conclusions.

Other identities are largely substantive. They commit the bearer to certain conclusions about the state of the world. A Christian believes in a god with certain attributes and a certain history of the world. A Communist believes that a certain government system is better than all others. These identities are dangerous: once they commit you to a conclusion, you're unlikely to challenge it with evidence to ensure it is in fact the best one. That's the kind of identity Paul Graham is warning against.

Of course, these labels have procedural components: a Christian would solve a moral dilemma using the Bible; a Communist would solve an economic problem using communist theory. Similarly, rationalism substantively means you've arrived at the conclusion that you're biased and can't trust your gut or your brain the way most people do, but that's the extent of your substantive assumptions.

Since rationalism is a procedural identity rather than a substantive one, I see few of the dangers of using the term "rationalist" freely here.

Comment by hhadzimu on Of Gender and Rationality · 2009-04-16T20:39:51.138Z · LW · GW

Exposing yourself to any judgments, period, is risky. The OB crowd is perhaps the best-commenting community I've come across: they read previous comments and engage the arguments made there. How many other bloggers are like Robin Hanson and consistently read and reply to comments? Anyway, as a result, any comment is bound to be read and often responded to by others. There may not have been a point value attached, but judgments were made.

Comment by hhadzimu on Spay or Neuter Your Irrationalities · 2009-04-10T22:04:11.900Z · LW · GW

Agreed. I have trouble accepting this as a true irrationality. It strikes me as merely a preference. You lose time you could be listening to song A because of your desire to have the same play count for song B, but this is because you prefer the world where playcounts are equal to the world where they are unequal but you hear a specific song more. Is that really an irrational preference?

I also agree with VN's disclaimer: this time spent [wasted?] on equalizing playcounts could probably be used for something else. But at what point does the preference for a certain aesthetic outcome become irrational? What about someone who prefers a blue shirt to a red one? What about someone who can't enjoy a television show because the living room is messy? Someone who can't enjoy a party because there's an odd number of people attending? Someone who insists on eating the same lunch every day? Some of these are probably indicators of OCD, but it's really just an extreme point on a spectrum of aesthetic and similar preferences. At what point do preferences become irrational?

Comment by hhadzimu on Winning is Hard · 2009-04-03T21:15:06.921Z · LW · GW

I have to echo orthonormal: information, if processed without bias [availability bias, for example], should improve our decisions, and getting information is not always easy. I don't see how this raises any questions about the rational process, or as you say, principled fashion.

"But by what principled fashion should you choose not to eat the fugu?"

This seems like a situation where the simplest expected value calculation would give you the 'right' answer. In this case, the expected value of eating the oysters is 1, while the expected value of eating the fugu is the expected value of eating an unknown dish, which you'd probably base on your prior experiences with unknown dishes offered for sale in restaurants of that type. [I assume you'd expect lower utility in some places than others.] In this case, that calculation would kill you, but that is not a failure of rationality.
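To make that concrete, here is a minimal sketch of the calculation in Python; every number except the oysters' value of 1 is invented for illustration, and the point is only that a correctly applied prior over 'unknown dishes' can still recommend the dish that kills you.

```python
# Minimal sketch of the expected-value comparison above. Every number except
# the oysters' value of 1 is invented for illustration.
oysters_ev = 1.0

# The diner's prior over "unknown dish at a restaurant of this type,"
# formed without knowing anything about fugu in particular.
unknown_dish_prior = [
    (0.6, 1.5),  # pleasant surprise
    (0.4, 0.5),  # disappointing but harmless
]

fugu_ev = sum(p * v for p, v in unknown_dish_prior)  # 0.6*1.5 + 0.4*0.5 = 1.1

choice = "fugu" if fugu_ev > oysters_ev else "oysters"
print(f"EV(oysters)={oysters_ev:.2f}, EV(fugu as unknown dish)={fugu_ev:.2f} -> eat the {choice}")
# The prior, containing no information about tetrodotoxin, recommends the fugu.
# The diner dies; the outcome is terrible, but the procedure was applied correctly.
```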

In a situation without the constraints of the example, research on fugu would obviously provide the information you need. A web-enabled phone and Google would tell you everything required to make the right call.

Humans actually solve this type of problem all the time, though usually at lower stakes. A driver on a road trip may settle for low-quality food [a fast food chain, representing the oysters] for the higher certainty of his expected value [convenience, uniform quality]. It's simply the best use of available information.

Comment by hhadzimu on Where are we? · 2009-04-02T23:37:16.982Z · LW · GW

Chicago, IL.

Comment by hhadzimu on Your Price for Joining · 2009-03-26T18:44:58.942Z · LW · GW

Eliezer Yudkowsky does not sleep. He waits.

Comment by hhadzimu on The Mystery of the Haunted Rationalist · 2009-03-26T01:32:09.754Z · LW · GW

You're probably right, but this sidesteps the question. In law school, they'd accuse you of fighting the hypothetical. You're in the least convenient possible world here: you're wide awake, 100%, for the entire relevant duration.

http://lesswrong.com/lw/2k/the_least_convenient_possible_world/

Comment by hhadzimu on Thoughts on status signals · 2009-03-24T15:59:34.557Z · LW · GW

"I am trying to politely tell you that you have a lot to learn about signaling." That's why I'm here :)

I think you bring up an interesting point here. I agreed with pwno that, once everyone is aware of a signal, it's no longer credible, especially if it's cheap. But I think you're right as well that for the signals you mentioned, it doesn't matter who knows that it's a signal or how long it's been around.

The distinction, I think, is what one is trying to signal. Signals of conformity to a group or cooperativeness to an ally might be affected differently by these factors than signals of higher status. In fact, the former may gain in credibility as they get older, in a "this is what our group has always done" kind of way, whereas in the latter, the signal may get weaker as time goes on. I'm not sure that this is what happens, but there's no reason to think that signals for different things are affected equally by changing factors.

Comment by hhadzimu on Thoughts on status signals · 2009-03-24T04:23:50.667Z · LW · GW

Good point. I should have made the distinction between status signals and "conformity" signals clearer. But I do think that there are very distinct mechanisms at work there, even though the ultimate end [higher status] is probably the same. [That is, we signal conformity to an employer to get a job that will give us higher status.]

Comment by hhadzimu on Thoughts on status signals · 2009-03-23T22:29:23.000Z · LW · GW

I think you're hitting a different, though related, point. A business suit and a smile are probably not credible signals, though their absence is a credible signal of the opposite. It's easy to wear a business suit and fake a smile: each applicant to a job opening will likely come with both. Those who don't are almost instantaneously downgraded. It seems that the signal becomes a new baseline for behavior, and though it doesn't credibly signal anything, its absence signals something.

I'm not positive on the mechanism here: it's probably related to the fact that the signal is so low-cost, and that anyone failing to display it is either extremely low status, or signals some other defect.

Comment by hhadzimu on Dead Aid · 2009-03-17T20:36:55.856Z · LW · GW

Politics or not, I find this to be a great illustration of the real-world consequences of failure of rationality. The interesting question is at what point the mechanism breaks down.

The logical course of action for rich countries is to study the most effective methods of poverty alleviation and development, and apply those. We can see clearly that this is not happening, but it's unclear why:

- Are rich countries wrong about the conditions they're facing, and thus using improper methods? If so, is there a bias that causes them to misperceive conditions?
- Have rich countries erroneously identified ineffective methods of aid as effective? If so, is there a bias that causes them to pick the wrong methods?
- Do rich countries actually want to harm poor countries and keep them down, under the guise of aid? If so, why has this scheme been able to go on for so long?
- Are, as EY implies, rich countries more interested in buying their own moral satisfaction? If so, why do people get moral satisfaction from appearances of morality instead of actual morality - wouldn't it be better if we derived pleasure from actually helping others?

There are probably more points at which the mechanism can fail. In any case, I think this is a great example of the horrible consequences of failures of rationality: an entire continent's development may be slowed, and countless lives shortened or destroyed. Perhaps issues like these are good for discussing practical applications of rationality - we probably won't be able to make everyone in power in rich countries a rationalist. How do we get them to act rationally?

Comment by hhadzimu on Tarski Statements as Rationalist Exercise · 2009-03-17T20:21:32.064Z · LW · GW

I actually think it's the marginally different "Have you stopped beating your wife?" which allows for yes/no answers only, except that neither will help you.

Comment by hhadzimu on Rational Defense of Irrational Beliefs · 2009-03-12T20:29:18.226Z · LW · GW

part 2: "So what an expert rationalist should do to avoid this overconfidence trap?"

Apologies for flooding the comments, but I wanted to separate the ideas so they can be discussed separately. The question is how to avoid overconfidence, and bias in general. Picking up from last time:

If we can identify a bias, presumably we can also identify the optimal outcome that would occur in the absence of that bias. If we can do that, can't we also constrain ourselves in such a way that we achieve the optimal outcome despite giving in to the bias? For example, David Balan referenced his own softball game, in which he swings half a second too early and has been unable to tell himself "swing .5 seconds later" with any success. My advice to him was to change his batting stance so that the biased swing still produces the optimal outcome.

This idea of "changing your stance" is especially useful in situations in which you can't constrain yourself in other ways: in situations in which you know you will be biased and can't avoid making decisions in such situations. David would have to avoid the game altogether to correct his bias, but that's akin to saying that the dead don't commit bias: by adjusting his stance he can stay in the game AND have the right outcome.

In contrast to constraining your possible set of actions to unbiased ones [as I suggested in my other comment], the other possible way to deal with it is to set your starting point [your "stance"] such that the biased action or decision still gets you to the right place.

Comment by hhadzimu on Rational Defense of Irrational Beliefs · 2009-03-12T20:13:02.303Z · LW · GW

"So what an expert rationalist should do to avoid this overconfidence trap?"

You mean, how should one overcome bias? Be less wrong, if you will? You've come to the right place. David Balan had a post that didn't receive enough attention over at OB: http://www.overcomingbias.com/2008/09/correcting-bias.html This comment roughly paraphrases the points I made there.

If we can identify a bias, presumably we can also identify the optimal outcome that would happen in the absence of such bias. There are two ways to achieve this, and I will post them in separate comments so they can be voted down separately.

If you can identify an optimal outcome for a situation in which you are likely to be biased, you can constrain yourself ahead of time such that you can't give in to the bias. The classical example is Odysseus tying himself to the mast to avoid giving in to the sirens' song. Tie yourself to the mast at a rational moment, so you don't err in a biased one.

Applying this to your example, if you are indeed trying to maximize returns on a portfolio, the last place you should make buy/sell decisions is on a loud, testosterone-laden trading floor. It's better to decide ahead of time on the rules by which you will manage: "If a tech stock has x earnings but y insider ownership, then buy." Stick to your rules [perhaps bind yourself through some automatic limit that you cannot circumvent] as long as they seem to be achieving your goal [maximizing return]. Revisit them if they don't seem to be - but again, revisit them at a time when you are not likely to be biased.
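As a rough sketch of what "tying yourself to the mast" might look like here - the rule, the thresholds, and the example stock are all hypothetical, not investment advice:

```python
# Hypothetical precommitted trading rule: fixed at a calm moment,
# applied later without renegotiation.
MIN_EARNINGS = 2.0    # threshold chosen ahead of time (hypothetical)
MAX_INSIDER = 0.30    # threshold chosen ahead of time (hypothetical)

def precommitted_decision(stock):
    """Evaluate the rule; do not rewrite it in the heat of the moment."""
    if stock["earnings"] >= MIN_EARNINGS and stock["insider_ownership"] <= MAX_INSIDER:
        return "buy"
    return "hold"

candidate = {"ticker": "EXAMPLE", "earnings": 2.5, "insider_ownership": 0.10}
print(candidate["ticker"], "->", precommitted_decision(candidate))  # EXAMPLE -> buy
# The only decision available on the trading floor is whether the rule fires;
# the rule itself is revisited later, away from the noise, and only if it
# stops serving the goal.
```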

Comment by hhadzimu on Rational Defense of Irrational Beliefs · 2009-03-12T19:44:39.641Z · LW · GW

"So what an expert rationalist should do to avoid this overconfidence trap? The seeming answer is that we should rely less on our own reasoning and more on the “wisdom of the crowds."

As Bryan Caplan's "Myth of the Rational Voter" pretty convincingly shows, the wisdom of crowds is of little use when the costs of irrationality are low. It's true in democracy: voting for an irrational policy like tariffs has almost no cost, because a single vote almost never matters. The benefit of feeling good about voting for what you like to believe in is big, though.

Similarly, in religious matters, the costs to the individual are usually slight compared to the benefits: the cost of, say, weekly church attendance is small next to the group bonding and social connections it provides. [There were certainly places and times when costs were vastly higher - daily attendance, alms tax, etc. But the benefits were proportionately bigger, as your group would be key to defending your life.]

In either case, trusting the wisdom of crowds seems to be a dangerous idea: if the crowd is systematically biased, you're screwed.

Comment by hhadzimu on Raising the Sanity Waterline · 2009-03-12T19:12:19.318Z · LW · GW

Neither do I, though I'm often tempted to find a reason for why my iPod's shuffle function "chose" a particular song at a particular time. ["Mad World" right now.]

It seems that our mental 'hardware' is very susceptible to agency and causal misfires, leaving an opening for something like religious belief. Robin explained religious activities and beliefs as important in group bonding [http://www.overcomingbias.com/2009/01/why-fiction-lies.html], but the fact that religion arose may just be a historical accident. It's likely that something would have arisen in the same place as a group bonding mechanism - perhaps religion just found the gap first. From an individual perspective, this hardly means that the sanity waterline is low. In fact, evolutionarily speaking, playing along may be the sanest thing to do.

The relevant sentence from Robin's post: "Social life is all about signaling our abilities and cooperativeness, and discerning such signals from others." As Norman points out [link below], self-deception makes our signals more credible, since we don't have to act like believers if we are believers. As a result, in the ancestral environment at least, it's "sane" to believe what others believe and not subject it to a conscious and costly rationality analysis. You'd basically be expending resources to uncover a truth that would only make it harder for you to deceive others, which is costly in itself.

Of course today, the payoff from signaling group membership is far lower than ever before, which is why religious belief, and especially costly religious activities, violate sanity. Which, perhaps, is why secularism is on the rise: http://www.theatlantic.com/doc/200803/secularism

Comment by hhadzimu on Raising the Sanity Waterline · 2009-03-12T16:43:32.699Z · LW · GW

Excellent description. Reminds me a little of Richard Dawkins in "The God Delusion," explaining how otherwise useful brain hardware 'misfires' and leads to religious belief.

You mention agency detection as one of the potential modules that misfire to bring about religious belief. I think we can generalize that a little more and say fairly conclusively that the ability to discern cause-and-effect was favored by natural selection, and given limited mental resources, it certainly favored errors where cause was perceived even if there was none, rather than the opposite. In the simplest scenario, imagine hearing a rustling in the bushes: you're better off always assuming there's a cause and checking for predators and enemies. If you wrote it off as nothing, you'd soon be removed from the gene pool.

Relatedly, there is evidence that the parts of the brain responsible for our ability to picture absent or fictional people are the same ones used in religious thought. It's understandable why these were selected for: if you come back to your cave to find it destroyed or stolen, it helps to imagine the neighboring tribe raiding it.

These two mechanisms seem to apply to religion: people see a cause behind the most mundane events, especially rare or unusual ones. Of course they disregard the giant sample of times such events failed to happen, but those are less salient. It's a quick hop to imagining an absent/hidden/fictional person - an agent - responsible for causing these events.

Undermining religion on rational grounds must thus begin with destroying the idea that there is necessarily an agent intentionally causing every effect. This should get easier: market economies are famously the result of human action but not of human design - any given result may be the effect of agents' actions without being anyone's intended outcome. Thus, such results are not fundamentally different from, say, storms: effects of physical causes with no intent behind them.

It would probably also help to remind people of sample size. I recently heard a story from a religious believer who based her faith on her grandfather's survival in the Korean War, which happened against very high odds. Someone like that must be reminded that many people did not survive similar incidents, and that there is likely no force behind it but random chance - much as, if life is possible on 0.000000001% of planets and arises on the same tiny percentage of those, then given enough planets you will still get life.
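The arithmetic in that last analogy is easy to check with a back-of-the-envelope calculation; the planet count below is hypothetical, just large enough to make the point:

```python
# Back-of-the-envelope check of the "enough planets" argument.
p_habitable = 1e-11               # 0.000000001% written as a fraction
p_life_given_habitable = 1e-11    # "exists on the same percentage of those"
p_life_per_planet = p_habitable * p_life_given_habitable  # 1e-22

n_planets = 1e24                  # hypothetical; "given enough planets"
expected_inhabited = p_life_per_planet * n_planets
print(expected_inhabited)  # 100.0 -- vanishingly unlikely per planet,
                           # yet all but certain across enough of them
```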

Comment by hhadzimu on You May Already Be A Sinner · 2009-03-10T21:29:09.418Z · LW · GW

We're forgetting signaling. Robin would never forgive us, because he sees it in a lot of things, and I happen to agree with him that it's far more pervasive than people think.

In fact, the Tversky example gives people two opportunities to signal: not only do they get to demonstrate higher pain tolerance [especially important for men], they also get to "demonstrate" a healthier heart. Both should be boosts in status.

The same goes for the Calvinists: though, when you stop to think about it, you truly believe in the elect, you don't actually think about it for most of your life [as we know, much of our day-to-day life is guided subconsciously] and are instead focused on signaling your elect status with a good life.

For good measure, it even works with the car: you buy a new car to signal wealth to signal health.

However, I do believe that we engage in lots of automatic self-deception [making it easier to deceive others into believing we have higher status]: thus, we may actually believe that an extra car/a good life/a higher pain tolerance will improve our life expectancy/grace/heart, but that's merely the proximate cause. Ultimately, we're driven by status-seeking.

Comment by hhadzimu on Define Rationality · 2009-03-06T03:27:27.854Z · LW · GW

I disagree... I think "limited analysis resources" accounts for the very difference you speak of. I think the "rituals of cognition" you mention are themselves subject to rationality analysis: if I'm understanding you correctly, you are talking about someone who knows how to be rational in theory but cannot implement that theory in practice. I think you run into three possibilities there.

One, the person has insufficient analytical resources to translate their theory into action, which Robin accounts for. The person is still rational, given their budget constraint.

Two, the person could gain the ability to make the proper translation, but the costs of doing so are so high that the person is better off with the occasional translation error. The person rationally chooses not to learn better translation techniques.

Three, the person systematically makes mistakes in the translations. That, I think, we can fairly call a bias, which is what we're trying to avoid here. The person is acting irrationally - if there is a predictable bias, it should have been corrected for.

On your last point: "[Robin would] give someone "rationality points" for coming up with a better algorithm that requires less clock cycles, while I would just give them "cleverness points"." I think I have to side with Robin here. On certain issues it might not matter how quickly or efficiently the rational result is arrived at, but I think in almost all situations coming up with a faster way to arrive at a rational result is more rational, since individuals face constraints of time and resources. While the faster algorithm isn't more rational on a single, isolated issue [assuming they both lead to the same rational result], the person would be able to move on to a different issue faster and thus have more resources available to be more rational in a different setting.

Comment by hhadzimu on Define Rationality · 2009-03-05T23:38:12.232Z · LW · GW

I think we're missing a fairly basic definition of rationality, one that most people would intuitively come to. It involves the question of at what stage evidence enters the decision-making calculus.

Rationality is a process: it involves making decisions after weighing all available evidence and calculating the ideal response. Relevant information is processed consciously [though see the clarification below] before a decision is rendered.

This approach is opposed to a different, less conscious process: our instinctive and emotional responses to situations. In these situations, actual evidence doesn't enter a conscious decision-making process; instead, our brains, having evolved over time to respond in certain ways to certain stimuli, automatically react in pre-programmed ways. Those ways aren't random, of course, but adaptations to the ancestral environment. The key is that evidence specific to the situation isn't actually weighed and measured: the response is based on the brain's evolved automatic reaction.

Clarification: A process that is nearly automatic is still a rational process if it is the result of repeated training, rather than innate. For example, those who drive manual transmission cars will tell you that after a short while, you don't think about shifting: you just do. It becomes "second nature." This is still a conscious process: over time, you become trained to interpret information more efficiently and react quickly. This differs from the innate emotional and instinctive responses: we are instinctively attracted to beautiful people, for example, without having to learn it over and over again - it's "first nature." Though the responses are similar in appearance, I think most people would say that the former is rational, the latter is not.