Generalizing from One Trend

post by katydee · 2013-01-18T01:21:43.206Z · LW · GW · Legacy · 23 comments


Related: Reference Class of the Unclassreferenceable, Generalizing From One Example

Many people try to predict the future. Few succeed.

One common mistake made in predicting the future is to simply take a current trend and extrapolate it forward, as if it were the only thing that mattered-- think, for instance, of the future described by cyberpunk fiction, with sinister (and often Japanese) multinational corporations ruling the world. Where does this vision of the future stem from?

Bad or lazy predictions from the 1980s, when sinister multinational corporations (and often Japanese ones) looked to be taking over the world.[1]

Similar errors have been committed by writers throughout history. George Orwell thought 1984 was an accurate prediction of the future, seeing World War II as inevitably bringing socialist revolution to the United Kingdom and predicting that the revolutionary ideals would then be betrayed in England as they were in Russia. Aldous Huxley agreed with Orwell but thought that the advent of hypnosis and psychoconditioning would cause the dystopia portrayed in 1984 to evolve into the one he described in Brave New World. In today's high school English classes, these books are taught as literature, as well-written stories-- the fact that the authors took their ideas seriously would come as a surprise to many high school students, since the predictions themselves now look laughably wrong.

Were such mistakes confined to the realm of fiction, they would perhaps be considered amusing errors at best, reflective of the sorts of mishaps that befall careless predictions. Unfortunately, they are not. Purported "experts" regularly make the same sort of error, and their failed predictions often have real-world consequences.

For instance, in 1999 the journalist James Glassman and the economist Kevin Hassett published the book Dow 36,000, predicting that stocks were about to reach record levels. The authors were so wrapped up in the market's recent gains that they assumed those gains were the new normal state of affairs, that the market simply hadn't corrected for this yet, and that once stocks were correctly perceived as safe investments the Dow would skyrocket. Not only did this not happen, but the dot-com bubble burst shortly after the book was published.[2] Anyone following the book's market advice lost big.

In 1968, the biologist Paul R. Ehrlich, seeing disturbing trends in world population growth, wrote a book called The Population Bomb, in which he forecast (among other things) that "The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now." Later, Ehrlich doubled down on this prediction with claims such as "By the year 2000 the United Kingdom will be simply a small group of impoverished islands, inhabited by some 70 million hungry people ... If I were a gambler, I would take even money that England will not exist in the year 2000."

Based on these predictions, Ehrlich advocated cutting off food aid to India and Egypt in favor of preserving food supplies for nations that were not "lost causes"; luckily, his policies were not adopted, as they would have resulted in mass starvation in the countries suddenly deprived of aid. Instead, food aid continued, and as population grew, food production grew as well. Contrary to the rising starvation and global death rates Ehrlich predicted, global death rates decreased, population increased beyond the levels he had said would lead to disaster, and the average number of calories consumed per person increased as well.[3]

So what, then, is the weakness that causes these analysts to make such errors?

Well, just as you can generalize from one example when evaluating others and hence fail to understand those around you, you can generalize from one trend or set of trends when making predictions and hence fail to understand the broader world. This is a special case of the classic problem where "to a man with a hammer, everything looks like a nail"; if you are very familiar with one trend, and that's all you take into account in your forecasts, you're bound to be wrong if that trend ends up not eating the world.

On the other hand, the trend sometimes does eat the world. It's very easy to find long lists of buffoonish predictions where someone woefully underestimated the impact of a new technology.[4] Further, determining exactly when and where a trend will stop is quite difficult, and most people-- even professionals-- are incompetent at it. If this were easy, the stock market would look extraordinarily different!
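To see how seductive this failure mode is, here is a minimal sketch (Python with numpy, all numbers invented for illustration): the early stretch of an S-shaped, saturating process is nearly indistinguishable from pure exponential growth, so a forecaster who fits only the visible trend overshoots wildly once the trend levels off.

    import numpy as np

    def logistic(t, ceiling=100.0, rate=0.5, midpoint=10.0):
        # The true (hypothetical) process: growth that saturates at `ceiling`.
        return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

    t_seen = np.arange(0, 6)  # the forecaster only sees the early years
    y_seen = logistic(t_seen)

    # Naive single-trend forecast: fit y = a * exp(b*t) to the visible data
    # (a straight line in log space), then extrapolate it forward.
    b, log_a = np.polyfit(t_seen, np.log(y_seen), 1)

    for t in (5, 10, 15, 20):
        naive = np.exp(log_a + b * t)
        print(f"t={t:2d}  naive={naive:9.1f}  actual={logistic(t):5.1f}")

The naive fit matches the visible data almost perfectly, which is exactly why it feels trustworthy; the two curves only diverge after an inflection point the early data gave no hint of. Nothing in this sketch tells you when a real trend saturates-- that's the hard part-- only that a good fit to the past is weak evidence about the mechanism generating it.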

So my advice to those who would predict the future is simple. Don't generalize from one trend or even one group of trends. Especially beware of viewing evidence that seems to support your predictions as evidence that other people's predictions must be wrong-- the notebook of rationality cares not for what "side" things are on, but rather for what is true. Even if the trend you're relying on does end up being the "next big thing," the rest of the world will have a voice as well.[5]


[1] I predict that the work of Cory Doctorow and those like him will seem similarly dated a decade down the line, as the trends they're riding die down. If you're reading this during or after December 2022, please let me know what you think of this prediction.

[2] The authors are, of course, still employed in cushy think-tank positions.

[3] Ehrlich has doubled down on his statements, now claiming that he was "way too optimistic" in The Population Bomb and that the world is obviously doomed.

[4] I personally enjoy the Bad Opinion Generator (warning: potentially addictive).

[5] Technically, this isn't always true. But you should assume it is unless you have extremely good reasons to believe otherwise, and even then I would be very careful before assuming that your thing is the thing.

Comments

comment by Qiaochu_Yuan · 2013-01-18T03:26:17.116Z · LW(p) · GW(p)

I would tentatively describe the kind of forecasting you're talking about as "narrative forecasting." A good narrative has maybe one or two important ideas; too many important ideas would confuse the reader. A good narrative has a human-satisfactory description of a plausible-looking chain of causes and effects leading to the future; probabilistic statements, complicated webs of cause and effect, and so forth also confuse readers. And a good narrative is generally a bad prediction because reality doesn't work that way.

comment by CarlShulman · 2013-01-18T06:27:58.972Z · LW(p) · GW(p)

In Philip Tetlock's work, simple trend projection of geopolitical/economic events did better than pundit predictions. Better trend analysis, e.g. taking into account the rate at which other trends have sustained themselves, physical limits, etc., could do better, but I would be measured in critiquing the simple version.

Replies from: katydee
comment by katydee · 2013-01-18T07:26:47.483Z · LW(p) · GW(p)

I agree, but I think most pundits just take one thing, decide it's the most important, and project it. A lot of pundits probably do this explicitly because they have to push a certain political narrative, while others fall into the "narrative forecasting" trap that Qiaochu_Yuan describes in this excellent comment.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-01-18T07:42:54.758Z · LW(p) · GW(p)

Also because it's computationally difficult to figure out how multiple trends will interact.

Replies from: katydee
comment by katydee · 2013-01-18T10:02:17.383Z · LW(p) · GW(p)

I honestly doubt most pundits even try to apply basic statistical methods to their assessments. Nate Silver strikes me as the exception, not the rule, and it's important to remember that he only acquired his current credibility after doing very well in predicting results for two US Presidential elections in a row.

comment by RowanE · 2013-01-18T02:23:40.865Z · LW(p) · GW(p)

Did Orwell and Huxley actually believe in the dystopias they were writing as predictions of the future? I find this hard to believe, so I'd at least like some sources for that.

Replies from: katydee
comment by katydee · 2013-01-18T02:57:19.089Z · LW(p) · GW(p)

Did Orwell and Huxley actually believe in the dystopias they were writing as predictions of the future?

Yes and yes. This letter from Huxley to Orwell is also illuminating.

In my original draft of this post I expanded the discussion of these points considerably, but ultimately decided that it was distracting and that the post was perhaps too long anyway.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-01-18T05:47:51.871Z · LW(p) · GW(p)

As for Huxley, I would like to point out:

1) Brave New World was set in the far future.

2) Many of the trends he was generalizing from, e.g. improvements in biotechnology and disintegration of the traditional family structure, have continued since his time.

Replies from: katydee
comment by katydee · 2013-01-18T07:24:41.886Z · LW(p) · GW(p)

Did you read the letter? Huxley thought the Brave New World society would be set up "within the next generation" thanks to the advent of psychoanalysis and hypnosis and the combination of these techniques with psychoactive drugs.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-01-18T07:34:48.842Z · LW(p) · GW(p)

Interesting-- I am reminded of a Milton Friedman quote where he says that in retrospect his predictions were right about what would happen, but wrong about when.

comment by Stuart_Armstrong · 2013-01-18T17:51:25.079Z · LW(p) · GW(p)

I don't see this as a problem of simple trend prediction (which, as Carl pointed out, is often better than experts). Instead I see it as an error bar failure-- predicting in 1999 that the Dow would reach 36,000 was perfectly reasonable. Confidently predicting that the Dow would reach 36,000 was ridiculous.

comment by John_Maxwell (John_Maxwell_IV) · 2013-01-19T09:03:52.570Z · LW(p) · GW(p)

This suggests an interesting strategy for altruistic futurists: instead of trying to predict how trends are going to go and attempting to influence them for the better, just accumulate money in a donor-advised fund (or probably a more flexible investment vehicle, really) and take it out in the event of a true emergency. ("Civilization's rainy-day money.")

Some thoughts on this idea: it might be a good idea to establish a mailing list to coordinate with others doing the same thing. For certain unpleasant scenarios, the government can be counted on to intervene, but it's not obvious that its intervention will be quick or effective. Given that governments exist, however, our comparative advantage might be in intervening before things reach a full-scale crisis.

Replies from: katydee
comment by katydee · 2013-01-19T09:54:38.186Z · LW(p) · GW(p)

One existing group along these lines is the Lifeboat Foundation, which claims to be attempting to establish a backup system for mankind, but it's not clear to me to what extent they actually do anything.

I do think the "Civilization's rainy-day money" idea is a good one in principle but I fear it would be expended on tragic but non-existential threats (like the latest big earthquake/hurricane/tsunami) rather than saved for existential risks.

Further, in the event that an existential risk did become apparent, I am not sure that having "rainy-day money" would really enhance our response, simply because we might not have enough time to spend the money on useful projects.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-01-19T11:43:43.179Z · LW(p) · GW(p)

I do think the "Civilization's rainy-day money" idea is a good one in principle but I fear it would be expended on tragic but non-existential threats (like the latest big earthquake/hurricane/tsunami) rather than saved for existential risks.

I'm suggesting this as something for LW users to do.

Further, in the event that an existential risk did become apparent I am not sure that having "rainy-day money" would really enhance our response, simply in that we might not have enough time to spend the money on useful projects.

Yeah, some degree of forecasting is probably a good idea.

Replies from: katydee
comment by katydee · 2013-01-19T12:00:44.868Z · LW(p) · GW(p)

I'm suggesting this as something for LW users to do.

I know, but I don't have much confidence in this gathering enough funds to be meaningful if it were limited to LW users; to scale up to a level where it could actually influence these risks substantially, I think it would have to draw in money (and hence influence) from outsiders.

comment by Oscar_Cunningham · 2013-01-18T08:40:12.406Z · LW(p) · GW(p)

luckily, his policies were not adopted, as they would have resulted in mass starvation in the countries suddenly deprived of aid.

Ironic to see an untestable prediction in the middle of a post about the difficulty of prediction.

So my advice to those who would predict the future is simple. Don't generalize from one trend or even one group of trends. Especially beware of viewing evidence that seems to support your predictions as evidence that other people's predictions must be wrong-- the notebook of rationality cares not for what "side" things are on, but rather for what is true. Even if the trend you're relying on does end up being the "next big thing," the rest of the world will have a voice as well.

So what can we do?

Replies from: aelephant, JoshuaZ
comment by aelephant · 2013-01-18T10:35:50.814Z · LW(p) · GW(p)

Accept that randomness is a fact of life & that prediction is basically impossible in any kind of complex system. Make sure that your position is either resilient to big negative unpredictable events or, better yet, antifragile. (Yes, I've been reading a lot of Nassim Taleb recently.)

comment by JoshuaZ · 2013-01-19T19:40:02.078Z · LW(p) · GW(p)

Ironic to see an untestable prediction in the middle of a post about the difficulty of prediction.

In context, the prediction in question is not controversial. It is very hard to see how cutting off aid in that context would not have resulted in the immediate starvation of many people in those countries. There simply would not have been enough food.

comment by summerstay · 2013-01-18T17:26:24.155Z · LW(p) · GW(p)

Regarding Cyberpunk, Gibson wasn't actually making a prediction, not in the way you're thinking. He was always making a commentary on his own time by exaggerating certain aspects of it. See here, for instance: http://boingboing.net/2012/09/13/william-gibson-explains-why-sc.html

comment by ikrase · 2013-01-18T04:30:27.288Z · LW(p) · GW(p)

This is pretty good actually.

What does anybody think of the statement that Kurzweil is doing this very badly with technological development, or with certain trends in it? Although a lot of the trends didn't even seem established at all...

Replies from: katydee
comment by katydee · 2013-01-18T05:35:19.702Z · LW(p) · GW(p)

This recent post helps shed some light on that subject.

Replies from: ikrase
comment by ikrase · 2013-01-19T07:54:32.605Z · LW(p) · GW(p)

I thought that after reading the post. Is there a list of the predictions analyzed for the survey, though?

Replies from: katydee
comment by katydee · 2013-01-19T08:28:39.560Z · LW(p) · GW(p)

More details are available in this Discussion post, including Excel spreadsheets.