What Cost for Irrationality?
post by Kaj_Sotala · 2010-07-01T18:25:06.938Z · LW · GW · Legacy · 119 comments
This is the first part in a mini-sequence [? · GW] presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.
People who care a lot about rationality [? · GW] may frequently be asked why they do so. There are various answers, but I think that many of the ones discussed here [? · GW] won't be very persuasive to people who don't already have an interest in the issue. But in real life, most people don't try to stay healthy because of various far-mode arguments for the virtue of health: instead, they try to stay healthy in order to avoid various forms of illness. In the same spirit, I present you with a list of real-world events that have been caused by failures of rationality.
What happens if you, or the people around you, are not rational? Well, in order from least serious to worst, you may...
Have a worse quality of living. Status quo bias is a general human tendency to prefer the default state, regardless of whether the default is actually good or not. In the 1980s, Pacific Gas and Electric conducted a survey of their customers. Because the company served a lot of people across a variety of regions, some of its customers suffered from more outages than others. Pacific Gas asked customers with unreliable service whether they'd be willing to pay extra for more reliable service, and customers with reliable service whether they'd be willing to accept less reliable service in exchange for a discount. The customers were presented with increases and decreases of various percentages and asked which ones they'd be willing to accept. The percentages were the same for both groups, only framed as increases for one group and decreases for the other. Even though both groups had the same income, customers of both groups overwhelmingly wanted to stay with their status quo. Yet the service difference between the groups was large: the unreliable-service group suffered 15 outages per year of 4 hours' average duration, while the reliable-service group suffered 3 outages per year of 2 hours' average duration - roughly 60 hours of outages a year versus 6. (Though note [? · GW] caveats [? · GW].)
A study by Philips Electronics found that one half of their products had nothing wrong with them, but the consumers couldn't figure out how to use the devices. This can be partially explained by egocentric bias on the part of the engineers. Cognitive scientist Chip Heath notes that he has "a DVD remote control with 52 buttons on it, and every one of them is there because some engineer along the line knew how to use that button and believed I would want to use it, too. People who design products are experts... and they can't imagine what it's like to be as ignorant as the rest of us."
Suffer financial harm. John Allen Paulos is a professor of mathematics at Temple University. Yet he fell prey to serious irrationality, which began when he purchased WorldCom stock at $47 per share in early 2000. As bad news about the industry began mounting, WorldCom's stock price started falling - and as it did so, Paulos kept buying, despite accumulating evidence that he should be selling. Later on, he admitted that his "purchases were not completely rational" and that "I bought shares even though I knew better". He was still buying - partially on borrowed money - when the stock price was $5. When it momentarily rose to $7, he finally decided to sell. Unfortunately, he couldn't get away from work before the market closed, and by the next trading day the stock had lost a third of its value. Paulos finally sold everything, at a huge loss.
Stock market losses due to irrationality are not atypical. From the beginning of 1998 to the end of 2001, the Firsthand Technology Value mutual fund had an average gain of 16 percent per year. Yet the average investor who invested in the fund lost 31.6 percent of her money over the same period. Investors lost a total of $1.9 billion in a fund that was returning 16 percent per year. That happened because the fund was very volatile, causing people to invest and cash out at exactly the wrong times. When it gained, it gained a lot, and when it lost, it lost a lot. When people saw that it had been making losses, they sold, and when they saw it had been making gains, they bought. In other words, they bought high and sold low - exactly the opposite of what you're supposed to do if you want to make a profit. Reporting on a study of 700 mutual funds during 1998-2001, financial reporter Jason Zweig noted that "to a remarkable degree, investors underperformed their funds' reported returns - sometimes by as much as 75 percentage points per year."
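To make the mechanism concrete, here is a minimal sketch with made-up numbers (not the actual Firsthand figures): a fund whose reported, time-weighted return is flat can still lose money for an investor who adds cash after a good year.

```python
# Hypothetical growth factors: +100% in year 1, -50% in year 2.
growth = [2.0, 0.5]
reported_return = growth[0] * growth[1] - 1.0
print(f"Fund's reported two-year return: {reported_return:.0%}")   # 0%

invested, balance = 100.0, 100.0     # initial investment
balance *= growth[0]                 # year 1: doubles to 200
balance += 1_000.0                   # performance chaser adds money after the good year
invested += 1_000.0
balance *= growth[1]                 # year 2: halves
print(f"Invested ${invested:.0f}, ended with ${balance:.0f}")       # $1100 in, $600 out
```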
Be manipulated and robbed of personal autonomy. Subjects were asked to divide 100 usable livers among 200 children awaiting a transplant. When the children were split into two groups of 100, group A and group B, the overwhelming response was to allocate 50 livers to each, which seems reasonable. But when each child in group A had an 80 percent chance of surviving a transplant and each child in group B had a 20 percent chance, people still chose the equal allocation, even though this caused the unnecessary deaths of 30 children. Well, that's just a question of values and not rationality, right? It turns out that if the patients were ranked from 1 to 200 in terms of prognosis, people were relatively comfortable with distributing the organs to the top 100 patients. It was only when the question was framed as "group A versus group B" that people suddenly felt they didn't want to abandon group B entirely. Of course, these are exactly the same dilemma. One could almost say that the person who got to choose which framing to use was getting to decide on behalf of the people being asked the question.
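The arithmetic behind the 30 unnecessary deaths, as a quick check:

```python
# 100 livers; group A: 100 children with an 80% prognosis; group B: 100 with 20%.
equal_split = 50 * 0.80 + 50 * 0.20        # 50 expected survivors
best_prognosis_first = 100 * 0.80          # 80 expected survivors
print(best_prognosis_first - equal_split)  # 30.0 -- the lives lost to the equal split
```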
Two groups of subjects were given information about eliminating affirmative action and adopting a race-neutral policy at several universities. One group was told that under race-neutral conditions, the probability of a black student being admitted would decline from 42 percent to 13 percent and the probability of a white student being admitted would rise from 25 percent to 27 percent. The other group was told that under race-neutral admissions, the number of black students being admitted would decrease by 725 and the number of white students would increase by 725. These two framings were both saying the same thing, but you can probably guess the outcome: support for affirmative action was much higher in the percentage group.
In a hypothetical country, a childless family with an income of $35,000 pays $4,000 in tax, while a childless family with an income of $100,000 pays $26,000 in tax. Now suppose that the $35,000 family gets a $500 tax reduction for having a child. Should the $100,000 family be given a larger per-child reduction because of their higher income? Here, most people would say no. But suppose that instead, the baseline is that a family with two children and an income of $35,000 pays $3,000 in tax and a family with two children and an income of $100,000 pays $25,000 in tax, and we propose to make families with no children pay more tax - that is, impose a "childless penalty". Say that the $100,000 family with one child has its taxes set at $26,000 and the same family with no children at $27,000 - a penalty of $1,000 per missing child. Should the poorer family, which makes $35,000 and has no children, also pay the full $2,000 childless penalty that the richer family pays? Here, most people would again say no - they want the "bonus" for children to be the same for low- and high-income families, but they do not want the "penalty" for lacking children to be the same for low- and high-income families, even though the bonus and the penalty are just two descriptions of the same difference in taxes.
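A small sketch of the two descriptions using the post's numbers; the tax schedule is the same object either way, only the baseline changes:

```python
tax = {  # (income, number of children) -> tax owed
    (35_000, 0): 4_000,   (35_000, 2): 3_000,
    (100_000, 0): 27_000, (100_000, 2): 25_000,
}
for income in (35_000, 100_000):
    difference = tax[(income, 0)] - tax[(income, 2)]
    print(f"income ${income}: child bonus = childless penalty = ${difference} for two children")
# income $35000:  $1000 for two children ($500 per child)
# income $100000: $2000 for two children ($1000 per child)
```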
End up falsely accused or imprisoned. In 2003, an attorney was released from prison in England when her conviction for murdering her two infants was overturned. Five months later, another woman was released from prison when her conviction for murdering her children was also overturned. In both cases, the evidence presented against them had been ambiguous. What had convinced the jury was that in both cases, a pediatrician had testified that the odds of two children in the same family dying of sudden infant death syndrome were 73 million to 1. Unfortunately, he had arrived at this figure by squaring the odds of a single death. Squaring the odds of a single event to arrive at the odds of it happening twice only works if the two events are independent. That assumption is likely to be false in the case of multiple deaths in the same family, where numerous environmental and genetic factors may have affected both deaths.
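A hedged sketch of why the independence assumption matters, with made-up numbers rather than the real case statistics:

```python
# Made-up illustrative numbers, NOT the actual case statistics.
p_single = 1 / 8_500                  # assumed chance of one SIDS death in a family
independent = p_single ** 2           # the "square it" calculation: ~1 in 72 million
p_second_given_first = 1 / 100        # assumed elevated risk after one death (shared factors)
correlated = p_single * p_second_given_first   # ~1 in 850,000
print(f"1 in {1 / independent:,.0f}  vs  1 in {1 / correlated:,.0f}")
```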
In the late 1980s and early 1990s, many parents were excited and overjoyed to hear of a technique coming out of Australia that enabled previously totally non-verbal autistic children to communicate. It was uncritically promoted in highly visible media such as 60 Minutes, Parade magazine and the Washington Post. The claim was that autistic individuals and other children with developmental disabilities who'd previously been nonverbal had typed highly literate messages on a keyboard when their hands and arms were supported over the keyboard by a sympathetic "facilitator". As Stanovich describes: "Throughout the early 1990s, behavioral science researchers the world over watched in horrified anticipation, almost as if observing cars crash in slow motion, while a predictable tragedy unfolded before their eyes." The hopes of countless parents were dashed when it was shown that the "facilitators" had been - consciously or unconsciously - directing the children's hands to the keys. It should have been obvious that spreading such news before the technique had been properly scientifically examined was dangerously irresponsible - and it gets worse. During some "facilitation" sessions, children "reported" having been sexually abused by their parents, and were removed from their homes as a result. (Though they were eventually returned.)
End up dead. After 9/11, people became afraid of flying and started doing so less. Instead, they began driving more. Unfortunately, car travel carries a much higher risk of death per mile traveled than air travel. Researchers have estimated that over 300 more people died in the last months of 2001 because they drove instead of flying. Another group calculated that for flying to be as dangerous as driving, there would have to be an incident on the scale of 9/11 once a month!
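As a crude sanity check of the scale (approximate round figures of my own, not the published per-mile calculation):

```python
# Approximate round figures, for scale only; the published comparison is a
# per-mile risk calculation, not this simple count.
annual_us_road_deaths = 42_000        # roughly the 2001 US figure
monthly_road_deaths = annual_us_road_deaths / 12
sept_11_death_toll = 3_000            # approximate
print(monthly_road_deaths / sept_11_death_toll)   # ~1.2 -- about one 9/11 per month
```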
Have your society collapse. Possibly even more horrifying is the tale of Albania, which had previously been a communist dictatorship but had made considerable financial progress from 1992 to 1997. In 1997, however, one half of the adult population had fallen victim to Ponzi schemes. In a Ponzi scheme, the investment itself isn't actually making any money, but rather early investors are paid off with the money from late investors, and eventually the system has to collapse when no new investors can be recruited. But when schemes offering a 30 percent monthly return began to become popular in Albania, competitors offering a 50-60 or even a 100 percent monthly return soon showed up, and people couldn't resist the temptation. Eventually both the government and economy of Albania collapsed. Stanovich describes:
People took out mortgages on their homes in order to participate. Others sold their homes. Many put their entire life savings into the schemes. At their height, an amount equal to 50 percent of the country's GDP was invested in Ponzi schemes. Before the schemes collapsed, they actually began to compete with wage income and distort the economy. For example, one business owner saw his workforce quickly slip from 130 employees to 70 because people began to think they could invest in the Ponzi schemes instead of actually working for their income.
The estimated death toll was between 1,700 and 2,000.
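A minimal sketch of the Ponzi dynamic, with illustrative parameters: payouts to existing investors come only from new deposits, so the scheme survives exactly as long as recruitment keeps up with the promised return.

```python
# Each month the scheme must pay the promised interest to existing investors
# out of new deposits; once recruitment growth lags the promised return, it dies.
monthly_return = 0.5          # the promised "50 percent monthly return"
recruitment_growth = 1.2      # new deposits grow 20% a month (until they can't)
principal = 100.0             # money invested so far (what the scheme owes)
new_deposits = 100.0
reserve = 0.0                 # cash actually on hand
for month in range(1, 25):
    new_deposits *= recruitment_growth
    reserve += new_deposits                   # new money flows in...
    reserve -= principal * monthly_return     # ...and pays the promised interest
    principal += new_deposits
    if reserve < 0:
        print(f"Month {month}: cannot meet promised payouts - collapse")
        break
```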
119 comments
Comments sorted by top scores.
comment by Vladimir_M · 2010-07-01T20:31:34.608Z · LW(p) · GW(p)
Regarding the "status quo bias" example with the utility company, I think it's fallacious, or at least misleading. For realistic typical humans with all their intellectual limitations, it is rational to favor the status quo when someone offers to change a deal that has so far worked tolerably well in ways that, for all you know, could have all sorts of unintended consequences. (And not to mention the swindles that might be hiding in the fine print.)
Moreover, if the utility company had actually started selling different deals rather than just conducting a survey about hypotheticals, it's not like typical folks would have stubbornly held to unfavorable deals for years. What happens in such situations is that a clever minority figures out that the new deal is indeed more favorable and switches -- and word about their good experience quickly spreads and soon becomes conventional wisdom, which everyone else then follows.
This is how human society works normally -- what you call "status quo bias" is a highly beneficial heuristic that prevents people from ruining their lives. It makes them stick to what's worked well so far instead of embarking on attractive-looking, but potentially dangerous innovations. When this mechanism breaks down, all kinds of collective madness can follow (speculative bubbles and Ponzi schemes being the prime examples). Generally, it is completely rational to favor a tolerably good status quo even if some calculation tells you that an unconventional change might be beneficial, unless you're very confident in your competence to do that calculation, or you know of other people's experiences that have confirmed it.
Replies from: Jiro, Unnamed, Kaj_Sotala, waveman, saliency↑ comment by Jiro · 2013-11-15T19:55:27.930Z · LW(p) · GW(p)
Replying to old post...
I would suggest something even stronger: the people exhibiting the "status quo bias" in the utility example are correct. The fact that a deal has worked out tolerably well in the real world is information and indicates that the deal has no hidden gotchas that the alternative might have. Bayesianism demands considering this information.
Where this gets confusing is the comparison between the two groups of customers, each starting out with the opposite plan. However, the customers don't have the same information - one group of customers knows that one plan is tolerable, and the other group knows that the other plan is tolerable. Given this difference in information, it is rational for each group to stick with the plan that they have. It is true, of course, that the two groups of customers cannot both be better off than the other, but all that means is that if you make the decision that is probabilistically best for you, you can still get unlucky - each customer rationally concluded that the other plan had a higher chance of having a gotcha than the plan they knew about, and that does not become irrational just because it turns out the other plan didn't have a gotcha after all.
↑ comment by Unnamed · 2010-07-01T23:56:34.142Z · LW(p) · GW(p)
I think the utility company example is fine. Lots of biases can be described as resulting from the use of a pretty good heuristic which leads people astray in that particular case, but that's still a cost of imperfect thinking. And this was a case where the alternative to the status quo was relatively simple - it was defined precisely and differed on only a small number of easily understandable dimensions - so concerns about swindles, unintended consequences, or limited understanding of complex changes shouldn't play a big role here.
In real life, social processes might eventually overcome the status quo bias, but there's still a lot of waste in the interim which the clever (aka more rational) minority would be able to avoid. Actually, in this case the change to utility coverage would probably have to be made for a whole neighborhood at once, so I don't think that your model of social change would work.
Replies from: Vladimir_M↑ comment by Vladimir_M · 2010-07-02T08:01:17.602Z · LW(p) · GW(p)
I'd say the utility company example is, in an important sense, the mirror image of the Albanian example. In both cases, we have someone approaching the common folk with a certain air of authority and offering some sort of deal that's supposed to sound great. In the first case, people reject a favorable deal (though only in the hypothetical) due to the status quo bias, and in the second case, people enthusiastically embrace what turns out to be a pernicious scam. At least superficially, this seems like the same kind of bias, only pointed in opposite directions.
Now, while I can think of situations where the status quo bias has been disastrous for some people, and even situations where this bias might lead to great disasters and existential risks, I'd say that in the huge majority of situations, the reluctance to embrace changes that are supposed to improve what already works tolerably well is an important force that prevents people from falling for various sorts of potentially disastrous scams like those that happened in Albania. This is probably even more true when it comes to the mass appeal of radical politics. Yes, it would be great if people's intellects were powerful and unbiased enough to analyze every idea with pristine objectivity and crystal analytical clarity, but since humans are what they are, I'm much happier if they're harder to convince to change things that are already functioning adequately.
Therefore, I'm inclined to believe that a considerable dose of status quo bias is optimal from a purely consequentialist perspective. Situations where the status quo bias is gravely dangerous are far from nonexistent, but still exceptional, whereas when it comes to the opposite sort of danger, every human society is sitting on a powder keg all of the time.
↑ comment by Kaj_Sotala · 2010-07-01T20:38:58.545Z · LW(p) · GW(p)
That is also a very good point against the utility company example.
I think I'll remove it, unless somebody persuasively argues in its favor in a few hours or so.
Replies from: HughRistik, Blueberry, waveman↑ comment by HughRistik · 2010-07-01T21:17:21.413Z · LW(p) · GW(p)
Why don't you keep it, but add a note?
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2010-07-02T00:07:12.744Z · LW(p) · GW(p)
I ended up adding a brief note linking to these comments.
↑ comment by waveman · 2016-08-19T05:04:50.309Z · LW(p) · GW(p)
Some things that look like biases are not so much, when looked at from a situational perspective. Taleb quotes the example of hyperbolic discounting (HD).
In HD, people apply a much higher discount rate between, e.g., today and tomorrow than between one year from now and one year and one day from now. Taleb argues that this can be rational if the person may not pay up at all, i.e., credit risk. A person is much more likely to pay up now than tomorrow, because they are here today, but tomorrow they could be spending the money in Rio. In contrast, the difference in credit risk between 365 and 366 days is negligible.
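A small sketch of the contrast; the parameters k and r are arbitrary illustrative choices, not anything from Taleb:

```python
from math import exp

k, r = 0.1, 0.01   # illustrative parameters, t measured in days

def hyperbolic(t):   # D(t) = 1 / (1 + k*t)
    return 1 / (1 + k * t)

def exponential(t):  # D(t) = exp(-r*t)
    return exp(-r * t)

for d in (hyperbolic, exponential):
    print(d.__name__,
          round(d(1) / d(0), 3),        # today vs. tomorrow
          round(d(366) / d(365), 3))    # one year vs. one year and a day
# hyperbolic:  0.909 today-vs-tomorrow, 0.997 a year out -- a big asymmetry
# exponential: 0.99 in both cases -- no asymmetry
```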
Replies from: entirelyuseless↑ comment by entirelyuseless · 2016-08-19T14:45:14.135Z · LW(p) · GW(p)
There's another reason that it is reasonable to say no if a utility company offers to improve your service for more money.
Namely, we already know that they do not care about your service, but about their profits, and about the service only insofar as it helps their profits. So it is quite likely that there will be no real improvement to your service, but it will remain approximately the same. The company rightly expects that you will not keep careful track and that you will assume that your service has improved. Or if it does improve, it will not improve as much as they said, because they know you will not be keeping track well, and that even if you do, you will not have much recourse.
This is even more the case if some other company offers to replace your service, saying that you will get better service at a lower price. In Italy, utility companies send salespeople around offering this to everyone. If you accept, you will get worse service at a higher price, and you will have no legal recourse, because what they said about the price was technically true in terms of the base price, but false once all the calculations are done.
↑ comment by waveman · 2016-08-19T05:00:02.191Z · LW(p) · GW(p)
someone offers to change a deal that has so far worked tolerably well in ways that, for all you know, could have all sorts of unintended consequences
This exact thing happened to me last year. I signed up for a great new deal and now it has blown up in my face. The cost of safely switching from a fairly satisfactory status quo can be high - high R&D costs - especially when you are dealing with crooks and charlatans.
↑ comment by saliency · 2010-07-06T18:11:20.771Z · LW(p) · GW(p)
I agree,
Hayek, the knowledge-problem man, himself makes the argument* that most often it is best to follow the norm: the norm is the product of a great deal of calculation that would be expensive for you to redo.
I think it was Thoreau who wrote a story about a man who, each day on waking, would remember nothing from the day before and would then have to rediscover the use of a chair and a pencil. This man could only get so far in life.
The rational man knows that he can only get so far in life if he is always re-calculating instead of working off of what others have done.
One of the most important skills to develop is the skill of knowing when you need to re-calculate.
* One reference would be in the first part of Law, Legislation and Liberty (v1)
↑ comment by Blueberry · 2010-07-06T20:19:53.129Z · LW(p) · GW(p)
I think it was Thoreau who wrote a story about a man who, each day on waking, would remember nothing from the day before and would then have to rediscover the use of a chair and a pencil. This man could only get so far in life.
Anyone know what story? It sounds interesting. Also see the film Memento.
comment by bentarm · 2010-07-02T23:37:13.184Z · LW(p) · GW(p)
Re. the Roy Meadows/Sally Clark example, you say:
a pediatrician had testified that the odds of two children in the same family dying of sudden infant death syndrome were 73 million to 1. Unfortunately, he had arrived at this figure by squaring the odds of a single death. Squaring the odds of a single event to arrive at the odds of it happening twice only works if the two events are independent. But that assumption is likely to be false in the case of multiple deaths in the same family
More importantly, the 73 million to 1 number was completely irrelevant.
The interesting number is not "the odds of two children in the same family dying of infant death syndrome" but "the odds of two children in the same family having died of infant death syndrome given that two children in that family had died", which are obviously much higher.
Of course, you need to know the first number in order to calculate the second (using Bayes' theorem) but Meadows (and everyone else present at the trial) managed to conflate the two.
edit 05/07: corrected bizarre thinko at the end of penultimate paragraph.
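A hedged sketch of the distinction, with made-up round numbers (not the actual case figures): after two deaths have occurred, what matters is how the competing rare explanations compare with each other.

```python
# Made-up round numbers, NOT the actual figures from the case.
p_two_sids = 1 / 1_000_000       # assumed: a random family suffers two SIDS deaths
p_two_murders = 1 / 10_000_000   # assumed: a random family has two infants murdered
# Probability of SIDS given that two deaths occurred, treating these as the
# only two explanations (Bayes' theorem):
p_sids_given_two_deaths = p_two_sids / (p_two_sids + p_two_murders)
print(round(p_sids_given_two_deaths, 2))   # ~0.91 -- nothing like "1 in 73 million"
```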
Replies from: komponisto, None↑ comment by komponisto · 2010-07-03T00:00:43.064Z · LW(p) · GW(p)
The interesting number is not "the odds of two children in the same family dying of infant death syndrome" but "the odds of two children in the same family having died of infant death syndrome given that two children in that family had died", which is obviously much lower.
You meant "higher", right? (Or, alternatively, "odds against".)
Replies from: bentarm↑ comment by bentarm · 2010-07-04T23:39:29.318Z · LW(p) · GW(p)
Yes, I do mean "higher". I also mean "are"... I guess what was going through my head is that 1:1 (or whatever the actual number is) involves a lower number than 73,000,000:1 (although I'm not entirely sure that I didn't just make a mistake). I'll edit.
↑ comment by [deleted] · 2014-10-30T20:31:19.144Z · LW(p) · GW(p)
Our statistics professor mentioned this example in the first lecture. It is also known as the prosecutor's fallacy, and it probably happens way too often.
comment by Roko · 2010-07-02T13:35:25.869Z · LW(p) · GW(p)
The problem with most of these is that they are ways that other people's irrationality hurts you.
It is no good to advise someone that it is in their interest to spend time and effort becoming more rational, because if everyone else were more rational, then their life would be better. I control my rationality, not everyone else's. And that is the problem.
Out of your 10 examples, only two (financial losses due to trying to predict the stock market, and driving instead of flying after 9/11) are cases where your investment in rationality pays back to you. And the 9/11 case is hardly a very big return -- if 300 extra people died in the USA as a result, then that's a 1 in 1,000,000 reduction in probability of death for being rational, which translates to a $20 return (statistical value of a life), which is worth far less than the time it would take the average person to learn rationality. Perhaps you could argue the figure of $20 up to $100 or so, but still, that isn't a big enough return. I'm not counting the Albanian Ponzi schemes because if the society you're in collapses, it is no use to you that you avoided the Ponzi scheme (and I think that leaving Albania was unlikely to be an option for most, though if anyone can find significant numbers of counterexamples, I am prepared to count that one).
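A rough reconstruction of that back-of-envelope, with the assumptions spelled out; the value-of-statistical-life figure is my own ballpark, and Roko's $20 implies a higher one:

```python
# Assumed figures, for illustration only.
extra_deaths = 300                   # estimated extra road deaths, late 2001
us_population = 300e6                # rough US population at the time
value_of_statistical_life = 7e6      # a common ballpark, in dollars (assumption)
risk_reduction = extra_deaths / us_population          # ~1e-6 per person
expected_benefit = risk_reduction * value_of_statistical_life
print(f"${expected_benefit:.0f}")    # ~$7 with these assumptions; ~$20 needs a ~$20M VSL
```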
So 90% of your examples give people no incentive to be rational.
As I have said before, irrationality is a commons problem
EDIT: I'm shifting to thinking that the Albania example should count. After all, you could at least have put a significant amount of money into gold, foreign currency or other assets, right? You know, when I get some money I'm going to see if it's possible for someone in the UK to just go and turn sterling into gold without suffering a killer loss.
But then again, correctly predicting that the society is going to collapse is a high bar, much higher than realizing it's not a good idea to invest in a pyramid scheme.
Replies from: Alexandros↑ comment by Alexandros · 2010-07-02T15:01:33.204Z · LW(p) · GW(p)
Significant numbers of counterexamples: http://en.wikipedia.org/wiki/Albanian_immigrants_in_Greece
Replies from: Roko↑ comment by Roko · 2010-07-02T16:21:58.900Z · LW(p) · GW(p)
Voted up for trying to find evidence against my hypothesis, but this doesn't seem to be what we're after. It says these were mostly late 1980s and early 1990s, i.e. just before and after the fall of communism, not just before the pyramid schemes of 1997.
Also, they're mostly illegal immigrants, so presumably they couldn't take their assets with them. Though maybe if you'd seen it coming, you could have fled to Greece with your life savings in dollar notes.
Replies from: Alexandros↑ comment by Alexandros · 2010-07-02T19:44:55.298Z · LW(p) · GW(p)
The current Albanian immigrant population of Greece is about 12.5% of the current population of Albania. It's also not clear whether the numbers reflect illegal or legal immigration (I think the real numbers were much higher at their peak, perhaps more than 2x). Also consider that Italy was a more preferred destination. And then count in the rest of the world, to where Albanian emigration got directed after the early 90s.
What is safe to say is that many people did leave Albania when they were given the chance. Also, the crisis erupted in January of 1997, and I assume the Ponzi schemes had been brewing for a few years before that.
In any case, this was a minor point, but I just happen to be aware of the extent of Albanian immigration to Greece for (obvious?) reasons.
Replies from: Roko↑ comment by Roko · 2010-07-03T10:52:02.620Z · LW(p) · GW(p)
I'd be interested to see the details. My suspicion is that the costs to leaving may have swamped the benefits for many people, and that the optimal move for a rational person under the circumstances would have been to hold on to assets and US$ and weather the storm, which would probably have been less bad than for those who lost their life savings, but still pretty bad.
comment by Vladimir_M · 2010-07-01T20:08:16.456Z · LW(p) · GW(p)
Regarding the "financial harm" example: the only irrational thing Paulos did was keeping all his eggs in one basket, instead of diversifying his portfolio. As a rule almost without exception, unless you have insider information, it's never more rational to buy one single stock instead of another, regardless of which ones are soaring and which ones plunging at the moment. If you're not convinced, consider the following: at each point during Paulos's purchases of Worldcom stock, if it had been certain from public information, or even highly likely, that the stock would keep plunging, then shorting it would have been a killer investment -- to the point where it would be foolish for big investors to invest in anything else at the moment. But of course, that was not the case.
Maybe I'm reading too much into your example, but it seems like you believe that investing in Worldcom stock at that point was somehow especially irrational compared to other single-stock investments. Yet, any non-diversified investment in a single stock is pretty much an equivalent gamble, and Paulos was not more irrational than other people who get into that sort of gambling. (Except arguably for a tiny number of extremely sophisticated investors on Wall Street who have superior ways of making predictions from public information, but even the existence of such creatures -- as opposed to those favored by survivorship bias -- is uncertain.)
Edit: Come to think of it, some of the MWI proponents here should agree that by their criteria, there was nothing irrational about Paulos's investment at all.
Replies from: Eliezer_Yudkowsky, Kaj_Sotala↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-01T20:18:09.492Z · LW(p) · GW(p)
Come to think of it, some of the MWI proponents here should agree that by their criteria, there was nothing irrational about Paulos's investment at all.
Anyone with diminishing returns on the utility of money doesn't like volatility, whether probabilistic or MWI.
Replies from: cousin_it, Vladimir_M↑ comment by cousin_it · 2010-07-01T20:30:05.840Z · LW(p) · GW(p)
Which reminds me. We never did get an answer to why the SIAI doesn't sink all its income into lottery tickets :-)
To everyone else: not all humans have diminishing returns on the utility of money. (The technical term is "risk-averse"). Some humans are "risk-loving" with regard to money. An easy way to make someone risk-loving is to threaten to kill them tomorrow unless they have 2x their current wealth. If you do that to two people, you'll be treated to the entertaining spectacle of them playing poker with each other and happily betting their entire worldly wealth. Poor people, and people who want to find a partner to mate, often indulge in such behavior quite rationally. Dueling because of a woman and all that.
Replies from: WrongBot↑ comment by WrongBot · 2010-07-01T20:54:11.954Z · LW(p) · GW(p)
I suspect that risk-loving humans are just humans who don't understand probability. If you threaten to kill someone unless they double their wealth in twenty-four hours, they should still prefer a 50:50 double or nothing gamble to a 25:75 quadruple or nothing one. Beyond a certain point, the utility of money always has diminishing returns; what varies between people is the location of that point.
Replies from: cousin_it↑ comment by cousin_it · 2010-07-01T21:03:03.496Z · LW(p) · GW(p)
That's correct... kind of... but let's make things a bit clearer.
"Risk-averse" and "risk-loving" are shorthand words that describe someone's curve of utility against money. Risk-loving means the curve bulges downward ("convex"), risk-averse means the curve bulges upward ("concave"). You're just pointing out that the curve may be locally convex in the vicinity of the person's current wealth, but concave elsewhere. It is probably true that most people won't desire the extra million as much after they get their first billion. But if you have no opportunity to make huge bets, you may well be risk-loving on small bets without being ignorant or irrational.
Replies from: WrongBot↑ comment by WrongBot · 2010-07-01T21:50:22.398Z · LW(p) · GW(p)
Do you have any examples of real economic circumstances under which a sane person (someone who isn't solely concerned with maximizing the number of Porsches they own, e.g.) would have a convex utility/money curve?
(If there is a way to phrase this question so that it seems more curious and less confrontational, please assume that I said that instead.)
Replies from: bill, cousin_it↑ comment by cousin_it · 2010-07-01T22:07:26.915Z · LW(p) · GW(p)
Human beings don't eat money. Your utility/money curve depends on the prices of things you can buy with the money, and the relative utilities of those things. Both factors can vary widely. I know no law of nature saying a $1000 gadget can't give you more than twice the utility of a $500 gadget. For the most direct example, the $1000 gadget could be some kind of money-printing device (e.g. a degree of higher education).
Replies from: twanvl, prase↑ comment by twanvl · 2010-07-01T22:33:46.989Z · LW(p) · GW(p)
That is (or should be) the reason why people borrow money. You borrow if the utility gain of having more money now outweighs the loss of utility from having to pay back more money later.
But note that utility becomes more complicated when time gets involved. The utility of a dollar now is not the same as the utility of a dollar next week.
↑ comment by prase · 2010-07-01T22:12:23.942Z · LW(p) · GW(p)
This can explain locally convex curves. But is it imaginable to have a convex curve globally?
Replies from: Kingreaper, steven0461, Blueberry↑ comment by Kingreaper · 2010-07-01T23:08:57.199Z · LW(p) · GW(p)
It's imaginable for an AI to have such a curve, but implausible for a human having a globally convex curve.
Replies from: prase↑ comment by steven0461 · 2010-07-01T23:17:48.776Z · LW(p) · GW(p)
I just imagined it so that means it must be imaginable (e.g. you have a head that can contain arbitrarily many happy implants and because of their particular design they all multiply each other's effect). It doesn't seem very realistic, though, at least for humans.
↑ comment by Blueberry · 2010-07-01T22:50:13.051Z · LW(p) · GW(p)
Yes. y = log(x) is convex globally. A logarithmic utility function makes sense if you think of each additional dollar being worth an amount inversely proportional to what you have already.
Replies from: Peter_de_Blanc↑ comment by Peter_de_Blanc · 2010-07-01T23:01:40.697Z · LW(p) · GW(p)
No, your example is concave. The above posters were referring to functions with positive second derivative.
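For reference, the standard second-derivative check (negative second derivative means concave, positive means convex):

```latex
\frac{d^{2}}{dx^{2}}\,\log x = -\frac{1}{x^{2}} < 0 \quad \text{(concave)},
\qquad
\frac{d^{2}}{dx^{2}}\,e^{x} = e^{x} > 0 \quad \text{(convex)}.
```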
Replies from: steven0461↑ comment by steven0461 · 2010-07-01T23:08:54.223Z · LW(p) · GW(p)
The mnemonic I was taught is "conve^x like e^x"
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2010-12-05T20:09:59.971Z · LW(p) · GW(p)
I learned "concave up" like e^x and "concave down" like log x.
Replies from: Manfred↑ comment by Vladimir_M · 2010-07-01T20:56:09.038Z · LW(p) · GW(p)
That's true, but given that bankruptcy laws provide an effective floor for how badly you can suffer due to losses, and that getting rich can provide superlinear utility of money for some people (especially since being a savvy investor is very high status), I would expect that some would find the shut-up-and-multiply outcome positive given their preferences. This is especially so in the context of MWI, in which this strategy guarantees a roughly predictable future mix of your impoverished (but not horribly suffering), reasonably well-off, and super-rich and high-status future selves.
Of course, I don't think that this is implied by MWI as such; that's why I wrote "some of the MWI proponents."
↑ comment by Kaj_Sotala · 2010-07-01T20:17:39.817Z · LW(p) · GW(p)
According to the book, when the stock price started plunging, pretty much all evidence indicated that it probably wasn't going to go up again. He apparently wasn't shorting, either. Paulos is also quoted as admitting both that he only looked for favorable evidence, as well as knowing that he should've been selling when he was in fact buying.
Replies from: Vladimir_M↑ comment by Vladimir_M · 2010-07-01T21:05:01.571Z · LW(p) · GW(p)
Kaj_Sotala:
According to the book, when the stock price started plunging, pretty much all evidence indicated that it probably wasn't going to go up again.
Trouble is, such certainty is inherently impossible in the stock market. If the stock is certain not to go up, then there is no point holding it, which means that you should get rid of it -- which means that everyone will sell it, causing it to plunge. But then shorting it becomes a certain killer investment. (And the same reasoning also works even if we only have a high probability rather than certainty, since lots of big investors with well-pooled risk will be attracted even if just the expected values are out of the ordinary in any direction.)
Generally speaking, it is a very close and safe approximation of reality to simply assume that any extraordinary future performance of any stock that would be implied by the public information, whether good or bad, automatically triggers reactions from investors that invalidate this prediction. Thus, if you're going to gamble with single-stock investing, you may well just pick it randomly.
comment by Vladimir_M · 2010-07-01T21:15:50.483Z · LW(p) · GW(p)
Another interesting point :
After 9/11, people became afraid of flying and started doing so less. Instead, they began driving more. Unfortunately, car travel has a much higher chance of death than air travel.
I have no doubt that there is a widespread and fundamentally irrational bias when it comes to people's fear of flying vs. driving. However, I'm not sure how much the above change was due to irrational fears, and how much due to the newly introduced inconveniences and indignities associated with air travel. Are there actually some reliable estimates about which cause was predominant? I'm sure at least some of the shift was due to entirely rational decisions motivated by these latter developments.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2010-12-05T20:04:36.248Z · LW(p) · GW(p)
But how rational is it really if you switch from flying to driving because you don't want the inconveniences, and end up dead? I guess it would be if the probability of dying was low enough, but it would have to be very low.
comment by MatthewW · 2010-07-02T19:43:26.833Z · LW(p) · GW(p)
One group was told that under race-neutral conditions, the probability of a black student being admitted would decline from 42 percent to 13 percent and the probability of a white student being admitted would rise from 25 percent to 27 percent. The other group was told that under race-neutral admissions, the number of black students being admitted would decrease by 725 and the number of white students would increase by 725. These two framings were both saying the same thing, but you can probably guess the outcome: support for affirmative action was much higher in the percentage group.
These two framings aren't saying the same thing at all. The proposed policy might be the same in both cases, but the information available to the two groups about its effects is different.
Replies from: Blueberry↑ comment by Blueberry · 2010-07-02T19:46:12.908Z · LW(p) · GW(p)
I assume they were also told the total number of black and white students, in which case the information would be the same.
Replies from: MatthewW, tut↑ comment by tut · 2010-07-03T07:47:38.859Z · LW(p) · GW(p)
Ok, so what is the largest mersenne prime with a billion digits?
Or in other words, having information in that sense is not the same thing as using information, and the difference does not imply irrationality.
Replies from: wedrifid↑ comment by wedrifid · 2010-07-03T08:59:47.714Z · LW(p) · GW(p)
Ok, so what is the largest mersenne prime with a billion digits?
There isn't one.
Replies from: Blueberry↑ comment by Blueberry · 2010-07-03T10:02:41.104Z · LW(p) · GW(p)
How do you know?
Replies from: wedrifid↑ comment by wedrifid · 2010-07-03T10:19:15.376Z · LW(p) · GW(p)
Overwhelming prior makes my claim more likely to be correct than the majority of claims made by myself or others. ;)
- Mersenne primes are of the form 2^p - 1 (one less than a power of two).
- There are 3 powers of two with 1 billion digits, and a 0.32 chance (that is, 1/log10(2) - 3) of a 4th.
- It is proven that the powers to which two is raised for a Mersenne prime must themselves be primes.
- In order to have 1 billion decimal digits, the power of two must have 1 billion + 1 digits.
- There aren't all that many numbers with 1 billion + 1 digits that are prime.
- Of the powers of two of those numbers, minus 1, a ridiculously smaller proportion will also be prime.
- Given that there are only three possibilities... I'm confident to the point of not being able to conveniently express my confidence numerically that there are no Mersenne primes with a billion digits.
- Even if I made a couple of mistakes in the above reasoning, the remainder would still give me cause to be confident in my assertion. (I have a suspicion about my expression of exactly which two ridiculously big numbers must be prime for one of up to the four candidates.)
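A quick way to check the counting step above (a sketch: it only counts the candidate exponents and tests which of them are prime, not whether the huge numbers 2^p - 1 themselves are prime):

```python
# How many exponents p give 2**p - 1 exactly one billion decimal digits,
# and which of those p are prime (a necessary condition for a Mersenne prime)?
from math import log10, isqrt

def is_prime(n: int) -> bool:
    # simple trial division; fine for ~10-digit candidates
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

D = 10**9                      # target digit count
L = log10(2)
# 2**p - 1 has D digits iff floor(p * L) + 1 == D (2**p is never a power of 10).
# Floating point is adequate here unless p * L falls extremely close to an integer.
lo, hi = int((D - 1) / L), int(D / L) + 1
candidates = [p for p in range(lo, hi + 1) if int(p * L) + 1 == D]
print(len(candidates))                          # 3 or 4 candidate exponents
print([p for p in candidates if is_prime(p)])   # only these could yield a Mersenne prime
```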
↑ comment by Blueberry · 2010-07-04T05:01:47.127Z · LW(p) · GW(p)
Overwhelming prior makes my claim more likely to be correct than the majority of claims made by myself or others. ;)
Very Bayesian of you! This is potentially confusing, though, in that you made a mathematical claim. Frequently mathematical claims mean that you have a proof of something, not that it's very likely. This issue comes up with computerized proofs in mathematics, like the four-color theorem. It's very likely to be true, and is usually considered proven, but we don't actually have a formal proof, only a computer-based one.
Note that your logic would apply equally well to Mersenne primes of N digits, for sufficiently large N. This makes sense in a Bayesian framework, but in a mathematical framework, you could take these statements and "prove" that there were a finite number of Mersenne primes. Mathematical proofs can combine in this way, though Bayesian statements of near-certainty can't. For instance, for any individual lottery ticket, it won't win the lottery, but I can't say that no ticket will win.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2010-07-04T06:37:37.339Z · LW(p) · GW(p)
Actually, Georges Gonthier did give a formal (computer-verified) proof of the four-color theorem. Also, I believe that before that, every 5 years, someone would give a simpler version of the original proof and discover that the previous version was incomplete.
Replies from: MatthewW, Blueberry↑ comment by MichaelVassar · 2010-07-06T16:44:35.687Z · LW(p) · GW(p)
I think that the above only gives the odds that there are no such primes unless there is some good deep reason (presumably a set of symmetries, which doesn't seem at all likely, since a billion is an arbitrary-seeming round decimal) for there to be some such prime or primes. Without that caveat, such statements would have bitten far too many people historically who made overly confident mathematical claims. To clarify: I think you should be ridiculously confident, but not as confident as your reasoning by itself would justify.
Replies from: wedrifid↑ comment by wedrifid · 2010-07-07T04:53:17.185Z · LW(p) · GW(p)
To clarify; I think you should be ridiculously confident, but not as confident as your reasoning by itself would justify.
I agree (and voted accordingly). The influence of the direct probability I calculated would be utterly overwhelmed in my confidence calculation compared to meta-uncertainty. I certainly wouldn't go as far as placing 1:10,000 odds, for example, even though my calculations would put it at 1^(-lots). In fact, I can't even assign extreme odds to something as obvious as there is no Jehova, except for signalling purposes. I know enough about the way me (and my species) think that assigning extreme probabilities would be ridiculously overconfident. (How this relates to things like Pascal's wager is a different and somewhat more philosophically difficult problem.)
Replies from: faul_sname↑ comment by faul_sname · 2012-08-05T04:09:34.783Z · LW(p) · GW(p)
1^(-lots)
This would be 1.
Replies from: wedrifid
comment by thomblake · 2010-07-01T19:01:59.119Z · LW(p) · GW(p)
There are a lot of particular studies and events mentioned here - more specific references would be good. If they're all just culled from the one book, page numbers would work.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2010-07-01T19:53:35.757Z · LW(p) · GW(p)
Page numbers:
- Pacific Gas and Electric: 82
- Philips Electronics: 112-113
- John Paulos: 8-9
- Stock market losses: 59-60
- Dividing livers: 91-92
- Affirmative action: 97
- Tax framing: 86-88
- Child murder: 132-133
- Autistic: 130-132
- 9/11: 76
- Albania: 152-155
comment by Vladimir_M · 2010-07-01T21:55:01.137Z · LW(p) · GW(p)
Heh... another comment that's just occurred to me:
Here, most people would also say no - they'd want the "bonus" for children to be equal for low- and high-income families, but they do not want the "penalty" for lacking children to be the same for high- and low-income families.
Again, this is by no means necessarily irrational. The effects of government policies are by no means limited to their immediate fiscal implications. People typically care much more -- and often with good reason -- about their status-signaling implications. By deciding to frame its tax policy in terms of "X is good and normal, but Y is even better" rather than "Y is good and normal, but X should be penalized," the government sends tremendously powerful signals about the status it ascribes to different groups of people.
Average folks can be terribly innumerate when asked questions of this sort, but they'll pick up on the status implications of different alternatives instinctively. These concerns may well be important in practice -- even if a myopic view focused solely on the accounting issues would dismiss them as sheer bias. Of course, it's arguable to what extent this particular example is about realistic status-related concerns, but that's a question to be answered with non-trivial reasoning, not outright dismissal.
Replies from: WrongBot, Kaj_Sotala↑ comment by WrongBot · 2010-07-02T00:31:18.203Z · LW(p) · GW(p)
An alternate hypothesis: people are loss-averse.
One of the best pieces of evidence for this theory is an incident that occurred during the development of the online role-playing game World of Warcraft. While the game was in beta testing, its developer, Blizzard, added a "rest" system to the game to help casual players develop their characters at a pace slightly closer to that of the game's more serious players, who tended to devote much more time to the game and thus "leveled up" much more quickly.
The rest system grants "rested experience" at a gradual rate to players who are not logged into the game. As initially implemented, characters who had rest experience available earned experience points at a 100% rate, drawing down their rest experience in the process. Once your character was out of rest experience, it earned experience at a 50% rate. Because rest experience accumulated slowly, only while offline, and capped out after about a day and a half, players who logged on to the game every day for short periods of time were able to earn experience points most efficiently, lowering the extent to which they were outpaced by heavy players.
But while the system was achieving its goal, almost all of the game's testers hated it, no matter how much they played. They felt like they were being penalized for playing too long, which just didn't seem fair.
Blizzard fixed it by changing the rested rate to 200% and the normal rate to 100%, without changing the actual number of experience points earned.
They just relabeled the percentages, told everyone that that was what they were doing, and then everyone stopped complaining and was perfectly happy with the system.
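A tiny sketch of the relabeling; the per-kill XP value is illustrative, and only the ratios come from the description above:

```python
# Per-kill experience awarded under the two labelings: identical numbers,
# quoted against different notions of the "normal" rate.
full_value = 100                      # illustrative XP value for a kill
original = {"rested": full_value * 1.00, "normal": full_value * 0.50}    # 100% / 50%
half_value = full_value / 2
relabeled = {"rested": half_value * 2.00, "normal": half_value * 1.00}   # 200% / 100%
assert original == relabeled          # same rewards, different labels
```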
↑ comment by Kaj_Sotala · 2010-07-02T00:09:37.780Z · LW(p) · GW(p)
On the other hand, when the contradiction is pointed out to test subjects afterwards, they agree that it doesn't make sense. That implies that status implications aren't ultimately that big of a deal.
comment by xamdam · 2010-07-01T19:06:15.439Z · LW(p) · GW(p)
Love the post. Just pointing out that it largely does not answer the X-treme rationality question (at least as I define it). Regular rationality will do for most of these cases, except perhaps the autism one - there are extreme emotions involved (speaking from personal experience, I fell for the MMR racket for a while due to personal acquaintance with the main doctors involved, on top of all the emotions of wanting your kid to get better; it was a real lollapalooza effect).
comment by Sniffnoy · 2010-07-01T19:23:03.348Z · LW(p) · GW(p)
Another group calculated that for driving to be as dangerous as flying, there would have to be an incident on the scale of 9/11 once a month!
I'm assuming "driving" and "flying" should be swapped here?
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2010-07-01T19:40:10.514Z · LW(p) · GW(p)
Yeah. They are now, thanks.
comment by SilasBarta · 2010-07-01T18:53:26.400Z · LW(p) · GW(p)
Why does general "rationality" skill prevent these things, rather than (or better than) situation-specific knowledge? If you were a good rationalist and could apply it to all parts of your life with ease, yes, you'd dodge the problems listed. But does this generalize to calling the development of rationalist skills a good use of your time per unit of disutility avoided?
Rationality, I think, has to develop in you pretty well before it becomes the source of your resistance to these problems.
So I'm not sure if these are costs of irrationality per se, but rather, of lacking both well-developed rationality, and a specific insight.
Replies from: Kaj_Sotala, Aurini↑ comment by Kaj_Sotala · 2010-07-01T20:27:28.615Z · LW(p) · GW(p)
Not sure if this is what you were asking, but...
I think it's pretty obvious from the examples that irrationality in general is causing massive amounts of damage in society, all the time. Even mild improvements in people's average rationality would probably pay themselves back many-fold.
The question of whether it'd be more efficient to just teach people the situation-specific knowledge versus general rationality skills is an open one. Certainly teaching even just situation-specific knowledge would probably be worthwhile, and it's likely that it would be easier. On the other hand, if you only teach situation-specific knowledge, then people might not be able to apply it properly in similar but related situations, and it will only help protect them against the problems that you know exist. General rationality skills would help even against problems you don't know exist.
Replies from: SilasBarta↑ comment by SilasBarta · 2010-07-01T20:37:21.112Z · LW(p) · GW(p)
I think it's pretty obvious from the examples that irrationality in general is causing massive amounts of damage in society, all the time. Even mild improvements in people's average rationality would probably pay themselves back many-fold.
In the sense that a certain amount of improvement in rationality would prevent these things, yes, you're right. But I disagree with this promotion of it as "the" cause of these failures, because you have to be pretty advanced in your level of rationality and willingness to apply it (e.g. against social pressure) before your rationality "automatically" causes you to save yourself from these negative events.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2010-07-01T20:58:39.107Z · LW(p) · GW(p)
I don't know. Rationality certainly doesn't automatically protect you against any of these, but I suspect that even a very basic thing like being in the habit of actually thinking about things some more in a critical light would already have given many people a much better chance of avoiding most of these. Stanovich:
Several studies have shown that practice at the simple strategy of triggering the thought "think of the opposite" can help to prevent a host of the thinking errors studied in the heuristics and biases literature, including but not limited to: anchoring biases, overconfidence effects, hindsight bias, confirmation bias, and self-serving biases.
Several of those were probably also involved in many of the examples listed.
(The techniques discussed for promoting rationality that are discussed in the book will be the topic of a separate post.)
Replies from: SilasBarta↑ comment by SilasBarta · 2010-07-01T21:11:02.931Z · LW(p) · GW(p)
Rationality certainly doesn't automatically protect you against any of these
I meant "automatic" in the sense that you don't feel your are doing anything different with your ritual of cognition when you make the choice (that avoids whatever calamity is in the example).
Several studies have shown that practice at the simple strategy of triggering the thought "think of the opposite"
That seems like a benefit of a specific heuristic, not "rationality" per se. I agree that simple heuristics can be very powerful, but not because they're instances of someone's general improvement in the "rationality" skill.
Rationality is when you can understand why the "think of the opposite" heuristic works and can come up with such effective strategies on your own.
ETA: A better way to put what I'm saying: intelligence is in the being who writes the program or builds the computer, not the computer that executes it. It's in the one who comes up with the simple but effective rule, not the one who's capable of implementing it.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2010-07-01T21:39:11.106Z · LW(p) · GW(p)
I think we have different definitions for rationality. For me, teaching beginning-level rationality is just teaching people to recognize various biases, teaching them useful heuristics, and so forth. Coming up with your own heuristics is a more advanced skill, but a part of the same discipline nonetheless.
If you're teaching people to program, you start by teaching them a specific programming language and how to do some basic stuff in that one. If you're teaching people math, you start by some specific subarea of math and practice problems. If you're teaching a foreign language, you start with some basic rules of grammar and vocabulary. And so on. It's very rare that you'd be able to directly teach "the general X skill", regardless of what X was. Instead, you teach relatively specific stuff, and gradually they'll learn to think in the way required by the skill.
Replies from: SilasBarta↑ comment by SilasBarta · 2010-07-01T22:13:24.698Z · LW(p) · GW(p)
I don't disagree with that. What I'm saying, rather, is that you shouldn't try to persuade someone to take computer science 101 on the grounds that, "hey, programmers make a lot of money!", or that, "They hand out good candy in CompSci 101!", both of which you seem to be doing here. (The latter metaphor refers to you describing benefits that can be had without having to learn rationality per se.)
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2010-07-02T00:15:24.032Z · LW(p) · GW(p)
I'm not sure of what I originally intended, because I didn't really think in those terms while writing the post, but afterwards I'd say that the most reasonable way to use the post to argue would be "these are the kinds of problems you'll encounter in your life, if you take a CS101 class you'll learn to deal with some of them and it'll be at least somewhat useful, and if you want to you can take more classes and learn how to deal with even more problems of this kind".
↑ comment by Aurini · 2010-07-01T19:07:08.163Z · LW(p) · GW(p)
Up vote, because you're right, but Kaj's intention here was to provide real-world examples for convincing the layman of rationality's value. A similar example for the world of politics:
"I don't care about politics."
"But you smoke weed, right? That's a political issue - politics boils down to whether or not you have to worry about the cops knocking on your door and busting you for non-criminal behaviour."
So while his examples aren't perfect from our perspective, what he's really aiming for is situations which can be exploited through the dark arts to convince people to be rational. If I can come up with any better ones, I'll post them later today.
Replies from: SilasBarta↑ comment by SilasBarta · 2010-07-01T20:00:21.554Z · LW(p) · GW(p)
Up vote, because you're right, but Kaj's intention here was to provide real-world examples for convincing the layman of rationality's value. A similar example for the world of politics:...
Well, then it seems like exaggeration to the point of lying. It's more like if you said, "you should exercise so you can look like Arnold Schwarzenegger / Megan Fox and get roles in big Hollywood movies!"
Is it possible? Sure, if you put an insane level of effort into it and are in just the right circumstances, but it's not an accurate picture to present to someone of the benefits.
I think instead you should teach a specific rationalist skill (like making knowledge truly part of yourself) and justify it by the more reachable benefits.
comment by Nisan · 2010-07-01T22:50:34.577Z · LW(p) · GW(p)
Great post; thanks for providing these examples.
One textual complaint: This passage is unclear:
Should the reduction for the family with an income of $100,000 be the same, or should they be given more of a reduction because of their bigger income? Here, most people would say no.
(Either/or question answered with yes/no.)
Replies from: Unnamed, Kaj_Sotala↑ comment by Unnamed · 2010-07-01T23:15:12.465Z · LW(p) · GW(p)
I had trouble following the whole child tax penalty/reduction example, and was going to request that Kaj_Sotala simplify or clarify the whole example (and fix that sentence in particular).
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2010-07-02T00:10:05.392Z · LW(p) · GW(p)
Fixed the sentence. Is there anything else that's unclear in it?
Replies from: Unnamed↑ comment by Unnamed · 2010-07-02T01:04:13.940Z · LW(p) · GW(p)
There are a lot of numbers, and they aren't given in a way that makes the important ones prominent and the parallelism between the two framings transparent. You're switching between a $500 increment and a $1,000 increment, varying the number of children between 0, 1, and 2, and repeating the $35k and $100k numbers (which are distracting labels for the groups, not relevant numbers). Was the setup:
Option 1
- rich 2-child pays $26,000
- rich 0-child pays $27,000
- poor 2-child pays $3,000
- poor 0-child pays $4,000
Option 2
- rich 2-child pays $25,000
- rich 0-child pays $27,000
- poor 2-child pays $3,000
- poor 0-child pays $4,000
with Option 1 preferred when it's a tax reduction (0-child is the baseline) and Option 2 preferred when it's a tax penalty (2-child is the baseline)?
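If that reading is right, here is a minimal sketch (with made-up baseline tax bills and a flat $1,000 adjustment, i.e. the Option 1 preference, purely to illustrate) of why the two framings describe exactly the same payment schedule:

```python
# Hypothetical baseline bills for a childless family; not figures from the study.
baseline = {"poor": 4000, "rich": 27000}
child_adjustment = 1000  # same dollar amount under both framings

def reduction_framing(group, has_children):
    # Stated baseline is the childless bill; families with children get a reduction.
    return baseline[group] - (child_adjustment if has_children else 0)

def penalty_framing(group, has_children):
    # Stated baseline is the with-children bill; childless families pay a penalty.
    with_children_bill = baseline[group] - child_adjustment
    return with_children_bill + (0 if has_children else child_adjustment)

for group in ("poor", "rich"):
    for has_children in (False, True):
        assert reduction_framing(group, has_children) == penalty_framing(group, has_children)
# The two framings are arithmetically identical; only the advertised baseline differs.
```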
I might get rid of the numbers entirely, and just say something like: Another study explored the idea that families with more children should pay less in taxes, framing the tax difference as either a reduction or a penalty, and tested whether people thought the amount of the reduction/penalty should vary depending on the family's income. In one version, two-child families pay less than families without children because of a tax reduction, and in the other, families without children pay more than two-child families because of a tax penalty. Of course, the two versions are equivalent, but when it was framed as a tax penalty most people wanted the size of the penalty to increase as the family's income increased, whereas when it was framed as a tax reduction most wanted the size of the reduction to be the same regardless of the family's income.
Maybe you could add a sentence or two at the end with the relevant numbers.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2010-07-02T18:40:22.687Z · LW(p) · GW(p)
Thanks, this is good feedback. I'll make those changes shortly.
↑ comment by Kaj_Sotala · 2010-07-02T00:08:11.949Z · LW(p) · GW(p)
Edited.
comment by Jimdrix_Hendri · 2015-11-14T16:38:55.699Z · LW(p) · GW(p)
The example given for status quo bias is not necessarily indicative of impaired rationality. There are such things as hysteresis effects:
Consider the case of the family subject to frequent power outages. They will learn to adjust. This could be as simple as buying an alternative power source (a generator). Or perhaps they adapt their lives to perform activities requiring no power whenever there is an outage. If you have already bought a generator, it might not be worth your while to pay a higher price for a more reliable power supply. The family accustomed to a stable supply, meanwhile, faces the capital costs associated with making an adjustment.
comment by simplicio · 2010-07-05T06:36:36.273Z · LW(p) · GW(p)
Here, most people would also say no - they'd want the "bonus" for children to be equal for low- and high-income families, but they do not want the "penalty" for lacking children to be the high for same and low income.
Note typo.
Great post! I actually started trying to argue against your analysis here in the child tax example, based on my own intuition. Then I realized I was being a sophist. I had good reasons for both preferences, but the reason for the progressive penalty wasn't applied to the flat bonus, nor vice versa.
I might have to be careful about how this 'politics' thing affects my thinking.
Replies from: AmagicalFishy↑ comment by AmagicalFishy · 2015-11-14T04:03:13.899Z · LW(p) · GW(p)
I know this post is five years old, but can someone explain this to me? I understood that both questions could have an answer of no because one may want to minimize the monetary loss / maximize the monetary gain of the poorer family—therefore, the poorer family should get a higher reduction and a lower penalty. Am I misunderstanding something about the situation?
comment by Kingreaper · 2010-07-02T00:33:33.971Z · LW(p) · GW(p)
My reasoning with the country taxation one would be:
A family earning $0 will be paying what amount of taxes, before any special bonuses or penalties?
I would expect them to be paying $0.
If they have one child, they then either have to pay $1000, or they receive $500. That's a big difference between the two hypothetical worlds.
It's an issue of missing information (what taxes people at other income levels pay) being filled in with reasonable answers.
Least Convenient Possible World time: even with full information on the taxing system, and the cost of living in the area, and both scenarios being functionally identical, people would still disagree in how each situation should be treated. However, in that situation, I would be less likely to be one of them.
comment by 27chaos · 2014-11-09T16:37:56.039Z · LW(p) · GW(p)
You use indirect evidence to suggest that rationality increases people's ability to choose good actions and thus improves their lives. However, direct empirical evidence contradicts this: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1692437 https://ideas.repec.org/p/otg/wpaper/1308.html
I think that the valley of bad rationality is much wider than we'd at first suspect.
comment by stokejes · 2010-07-30T16:10:09.961Z · LW(p) · GW(p)
I read a blog entry about Status Quo Bias the other day that is in a similar vein as this post, though even more cynical: http://youarenotsosmart.com/2010/07/27/anchoring-effect/
comment by beriukay · 2010-07-03T13:08:05.297Z · LW(p) · GW(p)
I thought the Freakonomics guy debunked that driving was more dangerous than flying, when you correct for the amount of time spent doing either activity.
Replies from: wedrifid↑ comment by wedrifid · 2010-07-03T13:12:35.177Z · LW(p) · GW(p)
Does he still consider it more dangerous per km traveled?
Replies from: beriukay↑ comment by beriukay · 2010-07-03T13:43:08.133Z · LW(p) · GW(p)
I can't say for sure, because I loaned the book out and never got it back, but I don't believe he talked about per km traveled.
Two critics of his book address your question. One states with nothing but authority that per distance, flying is absolutely safer. The second brings up the point that if they have the same risk over time, and one travels a helluva lot faster, then you are spending less time at risk. So you would be correct to think it was safer per km.
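To make the second critic's point concrete, a quick sketch with purely illustrative numbers (not actual accident statistics):

```python
# Illustrative figures only: assume the same fatality risk per hour for both modes.
risk_per_hour = 1e-7
car_speed_kmh = 100
plane_speed_kmh = 800

risk_per_km_car = risk_per_hour / car_speed_kmh
risk_per_km_plane = risk_per_hour / plane_speed_kmh

# Covering the same distance takes the plane an eighth of the time,
# so its per-kilometre risk is an eighth of the car's.
print(risk_per_km_car / risk_per_km_plane)  # 8.0
```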
comment by WrongBot · 2010-07-01T20:15:12.764Z · LW(p) · GW(p)
I have an objection to the first example listed.
Pacific Gas asked customers with unreliable service whether they'd be willing to pay extra for more reliable service, and customers with reliable service whether they'd be willing to accept a less reliable service in exchange for a discount. The customers were presented with increases and decreases of various percentages, and asked which ones they'd be willing to accept. The percentages were same for both groups, only with the other having increases instead of decreases. Even though both groups had the same income, customers of both groups overwhelmingly wanted to stay with their status quo.
Let's look at this scenario with some numbers attached. A and B each pay Pacific Gas $100 each month. A has less reliable service, and B has more reliable service. To simplify the given scenario, let's say that only one percentage change in price is offered to each group, 30%. So A is offered a $30 surcharge for more reliable service, and B is offered a $30 discount for less reliable service. Money has diminishing marginal utility, so these two offers cannot be compared in an entirely straightforward fashion. Let's suppose that A and B both have a monthly discretionary spending budget of $500, with their other income going to fixed costs like rent. If A takes Pacific Gas's offer, his budget drops to $470. If B takes his own offer, his budget goes up to $530. In terms of marginal utility, the difference between $470 and $500 is bigger than the difference between $500 and $530. If the utility of the service difference falls between those two marginal differences, then both A and B are rational to keep their status quo; only if it falls outside that range must one of them be choosing irrationally.
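To illustrate, a minimal sketch (the square-root utility function is an arbitrary concave choice, not anything from the study):

```python
import math

def utility(budget):
    # Any concave function gives diminishing marginal utility; sqrt is just convenient.
    return math.sqrt(budget)

loss = utility(500) - utility(470)  # utility A gives up by paying the $30 surcharge
gain = utility(530) - utility(500)  # utility B picks up from the $30 discount

print(loss, gain)  # ~0.681 vs ~0.661: the loss looms slightly larger than the gain
# If the utility of the service difference falls between these two values,
# both A and B are rational to keep their status quo.
```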
So in this example it's unclear that there's any irrationality at all; even if there is, it's easy to just assume one would be in the "better" group and so avoid confronting the reality of the Status Quo bias. A better example would involve a situation in which group A's status quo is worse than group B's status quo, group membership is determined randomly, and the members of group A tend to reject an offer which would clearly move them into a position identical to B's status quo.
Replies from: Unnamed, Kaj_Sotala↑ comment by Unnamed · 2010-07-02T00:07:52.115Z · LW(p) · GW(p)
If we're talking about small amounts of money (relative to people's incomes) the utility of money should be extremely close to linear. It seems very implausible that this effect would be big enough to account for the results of the study.
I have also seen many studies which are designed to directly rule out this kind of possibility (e.g. some people have a chance to get $30, others get $30 and then have a chance to lose $30), and they consistently find similar results to studies that don't take that extra step (e.g. some people have a chance to get $30, others have a chance to lose $30).
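To put a rough number on that near-linearity, continuing the square-root sketch above but measuring the $30 against a made-up $3,000 monthly income instead of a $500 discretionary budget:

```python
import math

income = 3000  # hypothetical monthly income; the $30 stake is small relative to it
loss = math.sqrt(income) - math.sqrt(income - 30)
gain = math.sqrt(income + 30) - math.sqrt(income)

print(loss / gain)  # ~1.005: an asymmetry of about half a percent,
# far too small to explain an overwhelming preference for the status quo
```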
↑ comment by Kaj_Sotala · 2010-07-01T20:29:52.885Z · LW(p) · GW(p)
This is a very good point.
comment by DuncanS · 2010-07-03T10:02:34.181Z · LW(p) · GW(p)
But this is just unfair. You're judging rationality according to rational arguments, and so OF COURSE you end up finding that rationality is sooo much better.
I, on the other hand, judge my irrationality on an irrational basis, and find that actually it's much better to be irrational.
What's the difference? Of course in response to this question you're bound to come up with even more rational arguments to be rational, but I don't see how this gets you any further forward.
I, on the other hand, being irrational, don't have to argue about this if I don't want to. What kind of sense would it make to argue rationally about the advantages of irrationality anyway? Surely this is a contradiction in terms? But the nice thing about being irrational is that I can irrationally use rationality from time to time anyway, and then just stop and go back to being irrational again when irrationality is clearly more inspired.
OK - so I'm messing about. But you can't prove rationality is more rational by rational argument. Well, you can, but it's irrational in a way, as you're assuming the very thing you're trying to prove. It's an example of trying to pick yourself up by your own bootstraps.
Replies from: Bongo, Kaj_Sotala, RobinZ, wedrifid, syllogism↑ comment by Bongo · 2010-07-03T22:37:18.806Z · LW(p) · GW(p)
A rationality-resister doesn't mean the same thing by "rationality" as a rationalist does. It's just something that nerds invoke to humiliate them in debates - not techniques for constructing a map that fits the territory, etc. They probably don't even have a concept for that.
Therefore: maybe you can turn rationality-resisters into rationalists if you teach them rationality but don't call it rationality and don't attack their beliefs. (Eliezer's sequences taught rationality without attacking, for example, religion too often, but a lot of people were probably turned off from reading them just by the word "rationality".)
↑ comment by Kaj_Sotala · 2010-07-05T05:58:07.522Z · LW(p) · GW(p)
If a person were completely irrational, sure, this post wouldn't convince them. (Of course, they also wouldn't live for long.) But it never tried to convince completely irrational people, it tried to convince people who were kinda-rational already that investing effort in further improving their rationality would be worth it.
Replies from: DuncanS↑ comment by DuncanS · 2010-07-05T07:00:21.083Z · LW(p) · GW(p)
I guess in the context for which it's intended, it works OK. It's a book introduction, after all. Most irrational people don't really have much of a map of where and how they are being irrational - and in fact commonly consider themselves to be very rational persons, when it comes to it. (I strongly suspect this is like driving - most people considering themselves to be above average - where being incompetent also robs you of awareness of your lack of performance...) The common reaction would probably be to nod along, thinking how terrible it is that all these people are so irrational, and enjoy the read. For an introduction, that's good enough.
↑ comment by RobinZ · 2010-07-03T16:37:43.367Z · LW(p) · GW(p)
I think many of us have considered these ideas before. Eliezer Yudkowsky certainly has.
[Y]ou are not, in general, safe if you reflect on yourself and achieve internal coherence. The Anti-Inductors who compute that the probability of the coin coming up heads on the next occasion, decreases each time they see the coin come up heads, may defend their anti-induction by saying: "But it's never worked before!"
The fact of the matter is: either you are so crazy that you will be incapable of developing a rationality that works ... or you aren't. If you are, you will lose. If you aren't, you can probably judge the rationality you have according to the rational arguments you have to develop a better rationality.
Replies from: DuncanS↑ comment by DuncanS · 2010-07-03T17:01:52.826Z · LW(p) · GW(p)
Just had a look at what Eliezer said there. I think it's not quite the same thing as what I'm talking about here. It's true that if you have in your mind a system of rationality - that leads you in a rational way to improve what you have over time. I agree this works if you have the required intelligence and don't start with an entirely pathological system of rationality.
Let me give a slightly more concrete example. I had a conversation some time ago regarding homeopathy - that branch of alternative medicine that uses ingredients which have been diluted down by a factor of 10 - in this case 120 times in succession. This results in an overall dilution of 1 in 10^120. Since there are only 10^85 or so atoms in the entire observable universe, this provides a very high degree of certainty that there is none of the active ingredient in the homeopathic bottle that this person swore was highly effective.
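For concreteness, a back-of-the-envelope version of that arithmetic (the starting quantity is an assumption, not a figure from the conversation):

```python
AVOGADRO = 6.022e23
starting_moles = 0.01        # assume ~0.01 mol of active ingredient in the mother tincture
dilution = 10.0 ** -120      # 120 successive 1:10 dilutions

expected_molecules = AVOGADRO * starting_moles * dilution
print(expected_molecules)    # ~6e-99: essentially zero chance that even one molecule remains
```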
Pointing this out had no effect, as you might expect. In fact, the power of the treatment is said to rise as it becomes more dilute. The person absolutely believed in the power of that remedy, even though they agreed with my argument that in fact there were no molecules of the original substance in the bottle. I don't suppose talking about placebos and hypnotic suggestion would have made any difference either - in fact I believe I did mention the placebo effect. No difference at all.
We've all come across stuff like this. My point is that the applicability of rationality is what is at issue in arguments like this. I say it is - they say that in some way it isn't. My argument stops me from buying the homeopathic remedy, but it is almost irrelevant to the other person because rationality itself is what is at issue.
Replies from: RobinZ↑ comment by RobinZ · 2010-07-03T17:09:21.768Z · LW(p) · GW(p)
Wait, are you asking how to convince an irrational human being to be rational?
Replies from: DuncanS↑ comment by DuncanS · 2010-07-03T17:38:21.983Z · LW(p) · GW(p)
Sort of. And we all know the answer to that question is that it's often completely impossible.
Some of the examples in the article are matters where human hardware tends to lead us in the wrong direction. But others - particularly the Albanian case - are to a large extent failures of intent. Good quality rationality is a long-term investment that many people choose not to make. The result is vulnerability to believing impossible things. Irrationality is often a choice, and I think that, long term, our failure to be rational springs as much from choosing not to be as from failures in execution when sincerely trying to be. You can compensate, to a degree, for our hardware-based inclinations to see patterns where none exist, or to stick with what we have. But nothing compensates for choosing the irrational.
We can all see that irrationality is expensive to varying degrees depending on what you do. But this is only convincing to those of us who are already convinced and don't need to know. So what was the article intending to do?
So yes - sort of.
Replies from: RobinZ↑ comment by RobinZ · 2010-07-03T18:28:58.669Z · LW(p) · GW(p)
Sort of. And we all know the answer to that question is that it's often completely impossible.
Not to sound insufficiently pessimistic, but I don't think that's been rigorously established. It doesn't seem impossible to raise the sanity waterline - it seems more likely that we have inferential distances to cross and armors built to protect false beliefs we must pierce.
Replies from: DuncanS↑ comment by wedrifid · 2010-07-03T11:07:29.411Z · LW(p) · GW(p)
I removed the upvote that someone had placed on this because the above reasoning is intuitive but wrong. We can conclude that it is wrong for the same reason and with the same confidence that we can conclude "all X are Y; B is Y; therefore, B is X" is wrong.
To evaluate claims we must use the best tools of reasoning that are available to us. This applies whether or not we are evaluating claims about reasoning itself.
Replies from: DuncanS↑ comment by DuncanS · 2010-07-03T15:43:40.754Z · LW(p) · GW(p)
Splendid. Here is a perfect example of a rational reason to be rational. I did say someone would be bound to come up with that.
I don't understand your all X is Y comment - I understand the logical fallacy, but not why you think it's relevant.
You said..."To evaluate claims, we must use the best tools of reasoning that are available to us?" You might, but not everybody does. Why should we always use reasoning? Not everybody does. And some of us mean not to......
I am still messing about - I do believe that rationality is the right thing to do. But some are deadly serious when they say that your use of reason blinds you to the important stuff. My point is that you cannot prove them wrong, for the very applicability of proof on such matters is what is at issue. Obviously they won't accept that your proof is a good answer. They are right - it is only relevant insofar as argument itself is relevant, which is what you are disagreeing about !
Irrationally, I decided I'd upvote you because your reasoning is just as wrong as mine.
Replies from: Vladimir_Nesov, wedrifid↑ comment by Vladimir_Nesov · 2010-07-03T19:27:54.693Z · LW(p) · GW(p)
Duncan: see the links in this comment.
↑ comment by wedrifid · 2010-07-03T16:14:12.661Z · LW(p) · GW(p)
To other readers: Can anyone think of the Eliezer post that is on the tip of my tongue? I can't find the link without recalling the keywords!
Replies from: Vladimir_Nesov, orthonormal↑ comment by Vladimir_Nesov · 2010-07-03T19:26:23.776Z · LW(p) · GW(p)
Mainly
but these also seem relevant:
- http://lesswrong.com/lw/gr/the_modesty_argument/
- http://lesswrong.com/lw/h9/tsuyoku_vs_the_egalitarian_instinct/
↑ comment by DuncanS · 2010-07-04T00:29:55.629Z · LW(p) · GW(p)
It is not easy to escape this problem.
Let me start by trying to summarise Eliezer's argument - perhaps using slightly different terminology. If I have a given stream of sensory experience, what is the correct interpretation of it? I would say it is that which allows you to compress the stream (and the theory with which you explain it) down to the smallest possible size. You can then use this as a means of making a prediction of what the next bit of the sensory stream might be.
This has quite a few nice features - not least of which is if someone else comes up with a different interpretation of the stream, you can simply weigh it against yours, and if theirs weighs more, that renders it statistically more unlikely, and statistically more likely to give incorrect predictions as well. And weighing compressed data sets is mathematics, and not a matter of opinion. You can reasonably say that their heavier 'interpretation' is adding information that you know - from your compression - is not in the stream. Where did this extra information come from? It's just wrong.
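One way to make that 'weighing' literal is the usual description-length reading (a sketch of the standard idea, not something stated above): under a prior that penalizes description length, every extra bit an interpretation needs halves its prior probability.

```python
def prior_odds(length_bits_a, length_bits_b):
    # Description-length prior: P(H) is proportional to 2 ** -length(H),
    # so the odds of A over B depend only on the difference in lengths.
    return 2.0 ** (length_bits_b - length_bits_a)

# An interpretation that is 10 bits heavier starts out 1024:1 less likely.
print(prior_odds(1000, 1010))  # 1024.0
```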
The next question is obvious - how could someone possibly consider themselves to be logically right to do something other than this? And here Eliezer is home and dry - this is the only logically right thing to do. Anyone doing something different is assuming the laws of statistics and reason do not apply to them. In all of this I'm with Eliezer all the way. It's inductive reasoning, which means we only have expectations about what we have not yet seen, and not certainty. But at least we know that we can't do better.
All of this is beyond question, and not my point. There is another major choice, which is to disbelieve rationality altogether, or regard it as of limited applicability. Throw it out - baby, bathwater, everything. And trust something else instead. And explicitly believe that this something else is NOT a purely rational means of truth, but something else. This gives you absolute license to impose any number of interpretations on the data. Of course the rationalists are blind - they tell me this data only tells me X, but I can see so much more in it than that! Two and two may be four, rationally, but in fact the whole is more than the sum of its parts. If someone proves that the extra stuff isn't actually in the data, well fine - I knew that. These things aren't knowable by the rational mind, one needs divine revelation, or natural intuition, or spiritual sensitivity..... One comes to believe the world is fundamentally not rational, not rationally explainable, not rationally reducible, and certainly not statistically analyzable. Forget all that stuff, and just trust your animal instincts.
And here you end at an impasse. Eliezer, in his article, states that he expects nature to give such irrational persons a few lessons in the school of hard knocks. They are living in a mindset full of confabulations and perceptual artifacts. The irrationalists would see him as living his life with his head under a bucket, restricted to what logic can tell him, and missing out on every other part of his humanity.
Who is right? Rationally, Eliezer. Irrationally, I have no idea - is there even such a thing as 'right' in this case? Would I even care? If one denies rationality, one can believe in anything, if believe is the right word for it.
Just to be clear, I do not believe in extra-rational means of knowledge, and I believe rationality to be universally applicable. But I regard this as a belief, as any attempt at proof is begging this question on one side or the other.
Replies from: Vladimir_Nesov, wedrifid↑ comment by Vladimir_Nesov · 2010-07-04T07:38:30.459Z · LW(p) · GW(p)
It's not a "rationalist" thing, it's a human thing. What are you evaluating the adequacy of rituals of cognition with? You're already what you are, which is what you use. There are no universally convincing arguments, and one accepts, say, Occam's razor, not because it's "rational", but because we are the kind of agents that are compelled by this principle. Don't distinguish between "rational" and "magical", ask what moves you, on reflection, what do you believe to get you the results, and whether you believe the argument for why it does.
Links:
↑ comment by wedrifid · 2010-07-04T02:38:46.205Z · LW(p) · GW(p)
It is not easy to escape this problem.
Believe it or not, Vladimir, Eliezer and I all understand the limitations of thought and the dependence on initial priors. Searching for "anti-inductive" will get you some hits. That we still claim you need to use every resource you have at your disposal to evaluate your resources is significant.
Who is right? Rationally, Eliezer. Irrationally, I have no idea - is there even such a thing as 'right' in this case?
Eliezer, no, no.
If one denies rationality, one can believe in anything, if believe is the right word for it.
There is one line of reasoning that I find is actually more effective on irrational people than rational ones. Argumentum ad baculum.
↑ comment by orthonormal · 2010-07-20T23:30:36.616Z · LW(p) · GW(p)
↑ comment by syllogism · 2010-12-09T02:36:56.404Z · LW(p) · GW(p)
You're starting one premise back from where the post is jumping off.
The post assumes as a premise that we have some goals, and there's an empirical question about what cognitive strategies will best achieve those goals.
One strategy is to use our intuitions. We have some built in software for doing quasi-Bayesian analysis, and our heuristics perform to an acceptable standard in many contexts.
The other strategy is to use more formal analysis. The post argues for this second strategy, pointing out predictable failure points for our intuition heuristics.
Which one maximises your chances of achieving your goals is an empirical question. It's possible that as bad as our heuristics are, we're just incapable of the metacognition to do it formally. Maybe all we end up doing is giving ourselves tools for rationalisation. Most of the people in this community don't believe that, but it's not a philosophical question.
You go one premise back, to the point where we're choosing a strategy. Sure, you can reject rationality altogether. Then it doesn't make sense to talk about deciding on a cognitive strategy. But if you accept as axiomatic that you have some goals, and you want to figure out how to further them, then we arrive at this interesting empirical question: what's the most effective methodology for human decision making? It's not a contradiction to say "I'm going to rationally decide that the best strategy is not to engage in this kind of metacognition, as all it does is lead me astray".
Replies from: DuncanS↑ comment by DuncanS · 2010-12-09T22:24:11.687Z · LW(p) · GW(p)
I agree with nearly all of what you're saying up here, about heuristics, metacognition, and whether our rational mind is actually powerful enough to beat our instinctive one in practical situations.
I think the original poster was assuming we have some goals, and then pointing out the many disadvantages of choosing an irrational strategy to get to them.
Why would one choose an irrational strategy? Is it because we're too stupid to know it's irrational? Sometimes. Perhaps we choose it knowing it's irrational? Sometimes that happens too.
In neither case is it that useful to hear that an irrational strategy isn't as rational as a rational strategy, and can be rationally expected to have a worse outcome. Either they picked that strategy thinking it was rational, in which case that point is irrelevant, or they picked it thinking it was irrational, in which case they clearly don't think that rationality is right when it says that rationality is always better.