The Evolutionary Heuristic and Rationality Techniques
post by ChrisHallquist · 2013-11-10T07:30:54.279Z · LW · GW · Legacy · 27 comments
Nick Bostrom and Anders Sandberg (2008) have proposed what they call the "evolutionary heuristic" for evaluating possible ways to enhance humans. It begins by posing a challenge, the "evolutionary optimality challenge" or EOC: "if the proposed intervention would result in an enhancement, why have we not already evolved to be that way?"
They write that there seem to be three main categories of answers to this challenge (what follows are abbreviated quotes; see the original paper for the full explanation):
- Changed tradeoffs: "Evolution 'designed' the system for operation in one type of environment, but now we wish to deploy it in a very different type of environment..."
- Value discordance: "There is a discrepancy between the standards by which evolution measured the quality of her work, and the standards that we wish to apply..."
- Evolutionary restrictions: "We have access to various tools, materials, and techniques that were unavailable to evolution..."
In their original paper, Bostrom and Sandberg are interested in biological interventions like drugs and embryo selection, but their heuristic could also tell us a lot about "rationality techniques," i.e. methods of trying to become more rational, expressible in the form of how-to advice, like what you often find advocated here at LessWrong or by CFAR.
Applying the evolutionary heuristic to rationality techniques supports the value of things like statistics, science, and prediction markets. However, it also gives us reason to doubt that a rationality technique is likely to be effective when it doesn't have any good answer to the EOC.
Let's start with value discordance. I've previously noted that much human irrationality seems to be evolutionarily adaptive: "We have evolved to have an irrationally inflated view of ourselves, so as to better sell others on that view." That suggests that if you value truth more than inclusive fitness, you might want to take steps to counteract that tendency, say by actively trying to force yourself to ignore how having various beliefs will affect your self-image or others' opinions of you. (I spell this idea out a little more carefully at the previous link.)
But it seems like the kind of rationality techniques discussed at LessWrong generally don't fall into the "value discordance" category. Rather, if they make any sense at all, they're going to make sense because of differences between the modern environment and the ancestral environment. That is, they fall under the category of "changed tradeoffs." (Note: it's unclear to me how the category of "evolutionary restrictions" could apply to rationality techniques. Suggestions?)
For example, consider the availability heuristic. This is the thing that makes people wrongly assume that the risk of being attacked by a shark while swimming is really high, because of one memorable news story they saw about a shark attack. But if you think about it, the availability heuristic probably wasn't that much of a problem 100,000 years ago on the African savannah. Back then, if you heard a story about someone getting eaten by a lion, it was probably because someone in your band or a neighboring band had gotten eaten by a lion. That probably meant the chance of getting eaten by a lion was non-trivial in the area where you were, and you needed to watch out for lions.
On the other hand, if you're a modern American and you hear a story about someone getting eaten by a shark, it's probably because you heard about it on the news. Maybe it happened hundreds of miles away in Florida. News outlets selectively report sensational stories, so for all you know that was the only person to get eaten by a shark out of 300 million Americans that year, maybe even in the past few years. Thus it is written: don't try to judge the frequency of events based on how often you hear about them on the news; use Google to find the actual statistics.
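To make that concrete, here's the kind of back-of-the-envelope base-rate arithmetic the availability heuristic skips over (the figures are illustrative assumptions in the spirit of the example above, not real shark-attack statistics):

```python
# Toy base-rate arithmetic for shark-attack risk.
# All figures are illustrative assumptions, not real statistics.
population = 300_000_000      # rough US population, as in the example above
fatal_attacks_per_year = 1    # suppose the story you heard was the only one

annual_risk = fatal_attacks_per_year / population
print(f"Annual per-person risk: {annual_risk:.1e}")  # ~3.3e-09

# Even over an 80-year lifetime, the risk stays tiny:
lifetime_risk = 1 - (1 - annual_risk) ** 80
print(f"Lifetime risk: {lifetime_risk:.1e}")         # ~2.7e-07
```

A vivid news story can make an event with a roughly three-in-a-billion annual probability feel common; the arithmetic is trivial, but nothing in our evolved machinery performs it automatically.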
The value—and counter-intuitiveness—of a lot of scientific techniques seems similarly explicable. Randomized, double-blind, placebo-controlled studies with 1000 subjects are hard to do if you're a hunter-gatherer band with 50 members. Even when primitive hunter-gatherers could have theoretically done a particular experiment, rigorous scientific experiments are a lot of work. They may not pass cost-benefit analysis if it's just your band that will be using the results. In order for science to be worthwhile, it helps to have a printing press that you can use to share your findings all over the world.
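The cost-benefit point reduces to a single inequality: an experiment is worth running only when the per-user benefit, multiplied by the number of people who can use the result, exceeds the cost of running it. A toy sketch (all numbers invented):

```python
# An experiment costing `cost` units of effort pays off only if the per-user
# benefit times the number of people who can apply the result exceeds that cost.
cost = 1000.0           # effort to run one rigorous experiment (invented units)
benefit_per_user = 1.0  # value of the finding to each person who learns of it

for users in (50, 1_000_000):  # one band vs. a printing-press-sized audience
    verdict = "worth it" if users * benefit_per_user > cost else "not worth it"
    print(f"{users} users: {verdict}")
```

The same experiment that fails the test for a band of 50 passes easily once the results can be shared with a million readers.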
A third example is prediction markets, indeed markets in general. In a post last month, Robin Hanson writes (emphasis mine):
Speculative markets generally do an excellent job of aggregating information...
Yet even though this simple fact seems too obvious for finance experts to mention, the vast majority of the rest of news coverage and commentary on all other subjects today, and pretty much every day, will act as if they disagreed. Folks will discuss and debate and disagree on other subjects, and talk as if the best way for most of us to form accurate opinions on such subjects is to listen to arguments and commentary offered by various pundits and experts and then decide who and what we each believe. Yes this is the way our ancestors did it, and yes this is how we deal with disagreements in our personal lives, and yes this was usually the best method.
But by now we should know very well that we would get more accurate estimates more cheaply on most widely discussed issues of fact by creating (and legalizing), and if need be subsidizing, speculative betting markets on such topics. This isn’t just vague speculation, this is based on very well established results in finance, results too obvious to seem worth mentioning when experts discuss finance. Yet somehow the world of media continues to act as if it doesn’t know. Or perhaps it doesn’t care; punditry just isn’t about accuracy.
The evolutionary heuristic suggests a different explanation for reluctance to use prediction markets: the fact that "listen to arguments and form your own opinion" was the best method we had on the African savannah meant we evolved to use it. Thus, other methods, like prediction markets, feel deeply counter-intuitive, even for people who can appreciate their merits in the abstract.
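To illustrate how a betting market turns trades into a shared probability estimate, here is a minimal sketch of one standard mechanism, Hanson's logarithmic market scoring rule (the choice of mechanism and all parameters are mine for illustration; neither the post nor the quoted passage specifies one):

```python
import math

class LMSRMarket:
    """Binary prediction market using a logarithmic market scoring rule."""

    def __init__(self, b=100.0):
        self.b = b                # liquidity parameter: higher = prices move slower
        self.shares = [0.0, 0.0]  # outstanding shares for outcomes [no, yes]

    def _cost(self, shares):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(q / self.b) for q in shares))

    def price(self, outcome=1):
        """Current market probability of the given outcome."""
        exps = [math.exp(q / self.b) for q in self.shares]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, amount):
        """Buy `amount` shares of an outcome; returns the cost to the trader."""
        before = self._cost(self.shares)
        self.shares[outcome] += amount
        return self._cost(self.shares) - before

market = LMSRMarket()
print(market.price())            # 0.5 before anyone has traded
market.buy(1, 50)                # a trader who thinks "yes" is underpriced buys
print(round(market.price(), 3))  # ~0.622: the price now reflects her information
```

Anyone who believes the standing price is wrong can profit in expectation by trading against it, which is exactly why the price ends up aggregating dispersed information.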
In short, the evolutionary heuristic supports what many people have already concluded for other reasons: most people could do better at forming their view of the world by relying more on statistics, science, and (when available) prediction markets. People likely fail to rely on them as much as they should, because those were not available in the ancestral environment, and therefore relying on them does not come naturally to us.
There's a flip side to this, though: the evolutionary heuristic might suggest that certain rationality techniques are unlikely to work. In an earlier version of this post, I suggested a CFAR experiment in trying to improve people's probability estimates as an example of such a technique. But as Benja pointed out in the comments, our ancestors faced little selection pressure for accurate verbal probability estimates, which suggests there might be a lot of room to improve people's verbal probability estimates.
On the other hand, the fact that our ancestors managed to be successful without being good at making verbal probability estimates might suggest that rationality techniques based on improving that skill are unlikely to result in increased performance in areas where the skill isn't obviously relevant. (Yvain's post Extreme Rationality: It's Not That Great is relevant here.) On the other other hand, maybe an abstract reasoning skill like making verbal probability estimates is generally useful for dealing with evolutionarily novel problems.
27 comments
comment by Desrtopa · 2013-11-11T03:57:52.438Z · LW(p) · GW(p)
A few years back, Sylvester Stallone was caught bringing chemical supplements into (if I remember correctly) Australia that were not legal for him to use there without a prescription. He apologized, but stated his conviction that the supplements were valuable for nearly any older man; they made him feel much younger and able to perform athletically at a much higher standard, and he believed that in a matter of years they would be available over the counter.
My first thought was that if there were a simple hormonal adaptation which could offer such benefits to human beings without associated risks that outweigh them, we would have evolved it already; so for most people who experienced benefits from the supplements, those benefits would probably not be worth the danger.
But some time later, I was revisiting the matter, and it occurred to me that in our evolutionary environment, the expected status of Sylvester Stallone would be "dead." Individuals in their sixties would already be sufficiently unlikely to be contributing further to the persistence of genes that adaptations favoring their health which did not provide benefits prior to their elder status would likely not have offered enough of an advantage to be selected for.
This probably best fits into the category of "value discordance"; the qualities which evolution selects for may be separate from or narrower than our own values even in domains such as health and physical integrity.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2013-11-11T11:31:27.690Z · LW(p) · GW(p)
Indeed, AFAIK this is also one of the proposed explanations for why we suffer from age-related decay in the first place: there hasn't been a strong evolutionary pressure to develop maintenance mechanisms that could keep our body running indefinitely, since it was more effective to just create organisms that would be good at reproducing and surviving while they were young.
comment by Benya (Benja) · 2013-11-10T10:42:13.055Z · LW(p) · GW(p)
An example of this: CFAR has published some results on an experiment where they tried to see if they could improve people's probability estimates by asking them how surprised they'd be by the truth about some question turning out one way or another. They expected it would, but it turned out it didn't. And that doesn't surprise me. If imagined feelings of surprise contained some information naive probability-estimation methods didn't, why wouldn't we have evolved to tap that information automatically?
Because so few of our ancestors died because they got numerical probability estimates wrong.
I agree with the general idea in your post, but I don't think it strongly predicts that CFAR's experiment would fail. Moreover, if it predicts that, why doesn't it also predict that we should have evolved to sample our intuitions multiple times and average the results, since that seems to give more accurate numerical estimates? (I don't actually think this single article is very strong evidence for or against this interpretation of the hypothesis by itself, but neither do I think that CFAR's experiment is; I think the likelihood ratios aren't particularly extreme in either case.)
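A quick simulation of the averaging effect mentioned here, modeling each intuitive estimate as the true value plus independent noise (that noise model is an illustrative assumption, not a claim about real cognition):

```python
import random

random.seed(0)
true_value = 100.0
noise_sd = 20.0   # invented noise level, for illustration only
trials = 100_000

def guess():
    # One intuitive estimate: the truth plus independent Gaussian noise.
    return random.gauss(true_value, noise_sd)

one = sum(abs(guess() - true_value) for _ in range(trials)) / trials
two = sum(abs((guess() + guess()) / 2 - true_value) for _ in range(trials)) / trials

print(f"mean error, single guess:    {one:.2f}")  # ~16
print(f"mean error, average of two:  {two:.2f}")  # ~11
```

Under the independence assumption, the averaged estimate beats a single guess by roughly a factor of sqrt(2); correlated real-world guesses would gain less.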
Replies from: ChrisHallquist
↑ comment by ChrisHallquist · 2013-11-10T20:12:56.884Z · LW(p) · GW(p)
Ah, you're right. Will edit post to reflect that.
comment by timtyler · 2013-11-10T12:12:39.858Z · LW(p) · GW(p)
On the other hand, I think the evolutionary heuristic casts doubt on the value of many other proposals for improving rationality. Many such proposals seem like things that, if they worked, humans could have evolved to do already. So why haven't we?
Most such things would have had to evolve by cultural evolution. Organic evolution makes our hardware, cultural evolution makes our software. Rationality is mostly software - evolution can't program such things in at the hardware level very easily.
Cultural evolution has only just got started. Education is still showing good progress - as manifested in the Flynn effect. Our rationality software isn't up to speed yet - partly because it hasn't had enough time to culturally evolve its adaptations yet.
Replies from: Furslid, fubarobfusco, torekp, ChrisHallquist
↑ comment by Furslid · 2013-11-10T20:02:46.891Z · LW(p) · GW(p)
I think that this is an application of the changing circumstances argument to culture. For most of human history the challenges faced by cultures were along the lines of "How can we keep 90% of the population working hard at agriculture?" "How can we have a military ready to mobilize against threats?" "How can we maintain cultural unity with no printing press or mass media?" and "How can we prevent criminality within our culture?"
Individual rationality does not necessarily solve these problems in a pre-industrial society better than blind duty, conformity and superstitious dread. It's been less than 200 years since these problems stopped being the most pressing concerns, so it's not surprising that our culture hasn't evolved to create rational individuals.
↑ comment by fubarobfusco · 2013-11-10T17:40:06.442Z · LW(p) · GW(p)
Education is still showing good progress - as manifested in the Flynn effect.
I thought a lot of that was accounted for by nutrition and other health factors — vaccination and decline in lead exposure come to mind.
↑ comment by ChrisHallquist · 2013-11-10T17:40:28.174Z · LW(p) · GW(p)
It's been suggested that the Flynn effect is mostly a matter of people learning a kind of abstract reasoning that's useful in the modern world, but wasn't so useful previously.
There's also a broader point to be made about why evolution would've built humans to be able to benefit from better software in the first place, one that involves the cognitive niche hypothesis. Hmmm... I may need to do a post on the cognitive niche hypothesis at some point.
Replies from: timtyler
↑ comment by timtyler · 2013-11-10T18:40:49.757Z · LW(p) · GW(p)
There's also a broader point to be made about why evolution would've built humans to be able to benefit from better software in the first place, one that involves the cognitive niche hypothesis.
I think we understand why humans are built like that. Slow-reproducing organisms often use rapidly-reproducing symbiotes to help them adapt to local environments. Humans using cultural symbionts to adapt to local regions of space-time is a special case of this general principle.
Instead of the cognitive niche, the cultural niche seems more relevant to humans.
Replies from: ChrisHallquist
↑ comment by ChrisHallquist · 2013-11-10T19:43:27.389Z · LW(p) · GW(p)
Slow-reproducing organisms often use rapidly-reproducing symbiotes to help them adapt to local environments.
Ah, that's a good way to put it. But it should lead us to question the value of "software" improvements that aren't about being better-adapted to the local environment.
comment by RolfAndreassen · 2013-11-10T20:37:15.731Z · LW(p) · GW(p)
Note: it's unclear to me how the category of "evolutionary restrictions" could apply to rationality techniques.
Opportunity costs of applying system 2? It seems like an important rationality technique is "just stop and think about it for, like, thirty seconds, instead of jumping to the first conclusion your brain throws up". I suggest that this may have been costlier, relative to available nutrients, in the ancestral environment; it's also possible that we now more rarely face the kind of problem where any answer, even wrong, is better than standing about. (Although I guess that comes under changed tradeoffs, actually.)
comment by gjm · 2013-11-10T19:37:53.799Z · LW(p) · GW(p)
Note: it's unclear to me how the category of "evolutionary restrictions" could apply to rationality techniques. Suggestions?
Anything that requires the use of the internet, a smartphone, mathematical techniques developed in the last 10,000 years, etc. Our ancestors didn't have those.
(I think the border between "changed tradeoffs" and "evolutionary restrictions" is fuzzy.)
Replies from: ChrisHallquist
↑ comment by ChrisHallquist · 2013-11-10T19:40:42.811Z · LW(p) · GW(p)
I had thoughts along similar lines, but then I decided those things really belonged in the "changed tradeoffs" category. But maybe?
comment by MugaSofer · 2013-11-23T16:34:24.518Z · LW(p) · GW(p)
Note: it's unclear to me how the category of "evolutionary restrictions" could apply to rationality techniques. Suggestions?
Ooh, ooh! I have one!
The value—and counter-intuitiveness—of a lot of scientific techniques seems similarly explicable. Randomized, double-blind, placebo-controlled studies with 1000 subjects are hard to do if you're a hunter-gatherer band with 50 members. Even when primitive hunter-gatherers could have theoretically done a particular experiment, rigorous scientific experiments are a lot of work. They may not pass cost-benefit analysis if it's just your band that will be using the results. In order for science to be worthwhile, it helps to have a printing press that you can use to share your findings all over the world.
comment by joaolkf · 2013-11-22T11:24:42.302Z · LW(p) · GW(p)
I have spent half of my MPhil thesis on this evolutionary heuristic. Its major issue is that it ignores the actual causal patterns governing the functioning of a trait in favor of the trait's evolutionary history, which is often not very informative about those patterns. I will write a proper comment here later.
comment by jmmcd · 2013-11-10T19:35:12.573Z · LW(p) · GW(p)
it's unclear to me how the category of "evolutionary restrictions" could apply to rationality techniques. Suggestions?
Not sure if this simple example is what you had in mind, but -- evolution wasn't capable of making us grow nice smooth erasable surfaces on our bodies, together with ink-secreting glands in our index fingers, so we couldn't evolve the excellent rationality technique of writing things down to remember them. So when writing was invented, the inventor was entitled to say "my invention passes the EOC because of the "evolutionary restrictions" clause".
comment by christopherj · 2014-01-01T21:23:34.647Z · LW(p) · GW(p)
The evolutionary heuristic suggests a different explanation for reluctance to use prediction markets: the fact that "listen to arguments and form your own opinion" was the best method we had on the African savannah meant we evolved to use it. Thus, other methods, like prediction markets, feel deeply counter-intuitive, even for people who can appreciate their merits in the abstract.
There's a good reason why pundits wouldn't advise you to ignore pundits and just look at the relevant prediction market. Also, most people who like to listen to arguments and form their own opinion greatly value the fact that they can use a net-neutral balance of argument as support for their position (either for self-satisfaction or to help them proselytize). But then, I've always thought that was a form of entertainment rather than news/education.
comment by V_V · 2013-11-10T13:31:09.116Z · LW(p) · GW(p)
Prediction markets are a bad example since their effectiveness is still highly speculative.
Replies from: christopherj, MugaSofer
↑ comment by christopherj · 2014-01-01T21:17:29.198Z · LW(p) · GW(p)
Prediction markets are a bad example since their effectiveness is still highly speculative.
How much/at what odds would you be willing to bet about that?
Replies from: V_V
↑ comment by V_V · 2014-01-02T00:01:25.901Z · LW(p) · GW(p)
About the effectiveness of prediction markets being still highly speculative?
How exactly do you bet on that?
Replies from: christopherj
↑ comment by christopherj · 2014-01-04T21:09:39.711Z · LW(p) · GW(p)
Easy. First, you clarify what you mean about the effectiveness of prediction markets being highly speculative, in such a way that it can be measured, and then you make a bet. For example, you could bet that the odds of bets in prediction markets have historically not aligned with the actual frequencies of outcomes, making their use as an indicator of probability for current propositions suspect. If you are right, you could earn a lot of money! And even more money if you think that people will bet against you inefficiently.
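Concretely, such a bet could be settled with a simple calibration check over historical market data: bin closing prices and compare each bin's average price to how often the outcome actually occurred. A minimal sketch (the records below are hypothetical placeholders, not real market data):

```python
# Hypothetical records: (market price for "yes" at some cutoff, actual outcome).
# Real data would have to come from an actual prediction-market archive.
records = [(0.80, True), (0.75, True), (0.70, False), (0.30, False),
           (0.25, True), (0.90, True), (0.20, False), (0.60, True)]

def calibration(records, bins=5):
    """Compare average stated price to actual outcome frequency in each bin."""
    table = {}
    for price, outcome in records:
        b = min(int(price * bins), bins - 1)
        table.setdefault(b, []).append((price, outcome))
    for b in sorted(table):
        entries = table[b]
        avg_price = sum(p for p, _ in entries) / len(entries)
        freq = sum(o for _, o in entries) / len(entries)
        print(f"bin {b}: avg price {avg_price:.2f}, "
              f"actual frequency {freq:.2f}, n={len(entries)}")

calibration(records)
```

If prices systematically diverge from the observed frequencies, the markets are miscalibrated and the bet pays off.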
Replies from: V_V
↑ comment by V_V · 2014-01-05T08:46:33.607Z · LW(p) · GW(p)
Or I could bet that a teapot isn't orbiting the Sun.
If you have evidence that prediction markets are effective in broad scenarios, show it. Trying to reverse the burden of proof by proposing a bet is not going to work.
Replies from: christopherj
↑ comment by christopherj · 2014-01-06T02:17:25.304Z · LW(p) · GW(p)
Who said anything about burden of proof? If you're right, you could be making a lot of money by betting that you are right. Not only would your opponents lose money, but losing money is a great way to learn a lesson that won't be trivialized or forgotten.
↑ comment by MugaSofer · 2013-11-23T16:38:32.549Z · LW(p) · GW(p)
Really? Their effectiveness at, say, running countries or other practical applications, sure; but their effectiveness at collating information, as was referred to in the article?
Replies from: V_V
↑ comment by V_V · 2013-11-23T21:13:38.419Z · LW(p) · GW(p)
It seems to me that the article referred to practical applications:
But by now we should know very well that we would get more accurate estimates more cheaply on most widely discussed issues of fact by creating (and legalizing), and if need be subsidizing, speculative betting markets on such topics.
Replies from: MugaSofer
↑ comment by MugaSofer · 2013-11-24T18:37:36.192Z · LW(p) · GW(p)
Ah, I assumed they were referring to political discussions (elections and so on) due to the reference to punditry immediately afterwards. I suppose I can see how either interpretation could arise from different emphases, so this is useful information to the OP regardless.