Futuristic Predictions as Consumable Goods
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-04-10T00:18:17.000Z · LW · GW · Legacy · 19 comments
The Wikipedia entry on Friedman Units tracks over 30 different cases between 2003 and 2007 in which someone labeled the "next six months" as the "critical period in Iraq". Apparently one of the worst offenders is journalist Thomas Friedman, after whom the unit was named (8 different predictions in 4 years). In similar news, some of my colleagues in Artificial Intelligence (you know who you are) have been predicting the spectacular success of their projects in "3-5 years" for as long as I've known them, that is, since at least 2000.
Why do futurists make the same mistaken predictions over and over? The same reason politicians abandon campaign promises and switch principles as expediency demands. Predictions, like promises, are sold today and consumed today. They produce a few chewy bites of delicious optimism or delicious horror, and then they're gone. If the tastiest prediction is allegedly about a time interval "3-5 years in the future" (for AI projects) or "6 months in the future" (for Iraq), then futurists will produce tasty predictions of that kind. They have no reason to change the formulation any more than Hershey has to change the composition of its chocolate bars. People won't remember the prediction in 6 months or 3-5 years, any more than chocolate sits around in your stomach for a year and keeps you full.
The futurists probably aren't even doing it deliberately; they themselves have long since digested their own predictions. Can you remember what you had for breakfast on April 9th, 2006? I bet you can't, and I bet you also can't remember what you predicted for "one year from now".
19 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Stuart_Armstrong · 2007-04-10T08:54:37.000Z · LW(p) · GW(p)
I've been thinking about this problem a bit. I think that every futurist paper should include a section where it lists, clearly, exactly what counts as a failure for this prediction. In fact, that would be the most important piece of the paper to read, and those with the most stringent (and short term) criteria for failure should be rewarded.
And, in every new paper, the author should list past failures, along with a brief sketch of why the errors of the past no longer apply here. This is for the authors themselves as much as for the readers - they need to improve and calibrate their predictions. Maybe we could insist that new papers on a certain subject are not allowed unless past errors in that subject are addressed?
Of course, to make this all honest and ensure that errors aren't concealed or minimized, we should ensure that people are never punished for past errors, only for a failure to improve.
Now, if only we could extend such a system to journalists as well... :-)
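One way to make "reward improvement, not punishment" concrete is a proper scoring rule such as the Brier score. A minimal sketch (the probabilities and outcomes below are invented purely for illustration):

```python
# Sketch: reward improvement in calibration, not raw accuracy.
# The Brier score is the mean squared error between stated
# probabilities and binary outcomes; lower is better.
def brier(probabilities, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

early = brier([0.9, 0.8, 0.9], [0, 1, 0])  # overconfident early predictions (~0.55)
late = brier([0.6, 0.7, 0.4], [0, 1, 0])   # better-calibrated later ones (~0.20)
improving = late < early                   # the thing to reward: True here
```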
comment by Ilkka_Kokkarinen · 2007-04-10T13:43:06.000Z · LW(p) · GW(p)
I think that every futurist paper should include a section where it lists, clearly, exactly what counts as a failure for this prediction. In fact, that would be the most important piece of the paper to read, and those with the most stringent (and short term) criteria for failure should be rewarded.
Not only that, but that section should also include a monetary deposit that the author forfeits if his predictions turn out to be false. This would allow the readers to see how much belief the author himself has in his theories.
There could even be some centralized service that keeps track of these predictions and deposits and their payments, perhaps allowing people to browse this list ranked and sorted on various criteria.
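A minimal sketch of the record such a service might keep (all names here are hypothetical; no existing service is implied):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Prediction:
    author: str
    claim: str
    failure_criteria: str  # what, concretely, counts as the prediction failing
    deadline: date
    deposit: float         # forfeited by the author if the prediction fails
    resolved: bool = False
    came_true: bool = False

# "Ranked and sorted on various criteria" could then be as simple as:
# sorted(registry, key=lambda p: (p.deadline, -p.deposit))
```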
comment by Kaj_Sotala · 2007-04-10T14:21:41.000Z · LW(p) · GW(p)
Not only that, but that section should also include a monetary deposit that the author forfeits if his predictions turn out to be false. This would allow the readers to see how much belief the author himself has in his theories.
Of course, if one predicts something to happen a relatively long time from now, this might not work because the deposit effectively feels lost (hyperbolic discounting). For instance, I wrote an essay speculating on true AI within 50 years: regardless of how confident I am of the essay's premises and logical chains, I wouldn't deposit any major sums on it, simply because "I'll get it back in 50 years" is far enough in the future to feel equivalent to "I'll never get it back". I have more use for that money now. (Not to mention that inflation would eat pretty heavily into the sum, unless interest of some sort was paid.)
Were we talking about predictions made on considerably shorter time scales, deposits would probably work better, but I still have a gut feeling that any deposits made on predictions with a time scale of several years would be much lower than would be expected from the futurists' actual certainty of opinion. (Not to mention that the deposits would vary based on the personal income level of each futurist, making accurate comparisons harder.)
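A back-of-the-envelope sketch of these two effects (the discount constant k and the 3% inflation rate are assumed values, not from the thread):

```python
def hyperbolic_value(amount, years, k=1.0):
    """Subjective present value under hyperbolic discounting: A / (1 + k*D)."""
    return amount / (1 + k * years)

def real_value(amount, years, inflation=0.03):
    """Purchasing power of a nominal sum after `years` of inflation."""
    return amount / (1 + inflation) ** years

deposit = 1000.0
print(hyperbolic_value(deposit, 50))  # ~19.6: the deposit feels nearly worthless today
print(real_value(deposit, 50))        # ~228.1: inflation alone eats over three quarters
```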
Replies from: gwern
↑ comment by gwern · 2011-01-06T18:02:30.426Z · LW(p) · GW(p)
http://www.saunalahti.fi/~tspro1/artificial.html gives me an access-forbidden error.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2011-01-06T18:49:06.476Z · LW(p) · GW(p)
It's at http://www.xuenay.net/artificial.html now; however, at this point in time I find it to be mediocre at best.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-04-10T21:17:46.000Z · LW(p) · GW(p)
Say, Kaj, where'd you get that "50 years" figure from?
comment by Robin_Hanson2 · 2007-04-10T21:50:02.000Z · LW(p) · GW(p)
Stuart, and Ilkka, how about you guys go first, with your next paper? It is easy to say what other people should do in their papers.
comment by Kaj_Sotala · 2007-04-10T22:58:39.000Z · LW(p) · GW(p)
Eliezer, good question. Now that I think of it, I realize that my AI article may have been a bit of a bad example to use here - after all, it's not predicting AI within 50 years as such, but just making the case that the probability for it happening within 50 years is nontrivial. I'm not sure of what the "get the deposit back" condition on such a prediction would be...
...but I digress. To answer your question: IBM was estimating that they'd finish building their full-scale simulation of the human brain in 10-15 years. Having a simulation where parts of a brain can be selectively turned on or off at will or fed arbitrary sense input would seem very useful in the study of intelligence. Other projections I've seen (but which I now realize I never cited in the actual article) place the development of molecular nanotech within 20 years or so. That'd seem to allow direct uploading of minds, which again would help considerably in the study of the underlying principles of intelligence. I tacked 30 years on that to be conservative - I don't know how long it takes before people learn to really milk those simulations for everything they're worth, but modern brain imaging techniques were developed about 15 years ago and are slowly starting to produce some pretty impressive results. 30 years seemed like an okay guess, assuming that the two were comparable and that the development of technology would continue to accelerate. (Then there's nanotech giving enough computing power to run immense evolutionary simulations and other brute-force methods of achieving AI, but I don't really know enough about that to estimate its impact.)
So basically the 50 years was "projections made by other people estimate really promising stuff within 20 years, then to be conservative I'll tack on as much extra time as possible without losing the point of the article entirely". 'Within 50 years or so' seemed to still put AI within the lifetimes of enough people (or their children) that it might convince them to give the issue some thought.
comment by Brian · 2007-04-11T03:45:59.000Z · LW(p) · GW(p)
I just happened to read a clever speech by Michael Crichton on this topic today. I think his main point echoes yours (or yours his).
http://www.crichton-official.com/speeches/speeches_quote07.html
Replies from: gwern
↑ comment by gwern · 2011-01-06T18:04:41.192Z · LW(p) · GW(p)
Working link: http://web.archive.org/web/20070411012839/http://www.crichton-official.com/speeches/speeches_quote07.html
Nice speech (although I disagree with the general discounting of all value for predictions); Crichton reminds me a lot of Scott Adams - he says a lot of insightful things, but occasionally also says something that drives me nuts.
"Media carries with it a credibility that is totally undeserved. You have all experienced this, in what I call the Murray Gell-Mann Amnesia effect. (I call it by this name because I once discussed it with Murray Gell-Mann, and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have.)
Briefly stated, the Gell-Mann Amnesia effect works as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward, reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.
In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read with renewed interest as if the rest of the newspaper was somehow more accurate about far-off Palestine than it was about the story you just read. You turn the page, and forget what you know."
I also liked this (even though such people are fish in a barrel):
"One of the clearest proofs of this is the "Currents of Death" controversy. This fear of cancer from power lines originated with the New Yorker, which has been a gushing fountainhead of erroneous scientific speculation for fifty years. But the point is this: all the people who ten years ago were frantic to measure dangerous electromagnetic radiation in their houses now spend thousands of dollars buying magnets to attach to their wrists and ankles, because of the putative healthful effects of magnetic fields. They don't remember these are the same fields they formerly wanted to avoid. And since they don't remember, you can't lose with any future speculation."
And one of the teethgrinders:
"The first is the report in Science magazine, January 18, 2001 (Oops! a fact) that contrary to prior studies, the Antarctic ice pack is increasing, not decreasing, and that this increase means we are finally seeing an end to the shrinking of the pack that has been going on for thousands of years, ever since the Holocene era. I don't know which is more surprising, the statement that it's increasing, or the statement that its shrinkage has preceded global warming by thousands of years."
A little one-sided, methinks: http://en.wikipedia.org/wiki/Antarctica#Ice_mass_and_global_sea_level
comment by Stuart_Armstrong · 2007-04-11T08:27:55.000Z · LW(p) · GW(p)
Not only that, but that section should also include a monetary deposit that the author forfeits if his predictions turn out to be false.
That I strongly disagree with. We don't want to discourage people from taking risks, we want them to improve with time. If there's money involved, then people will be far shyer about the rigour of the "failure section".
Ideally, we want people to take the most pride in saying "I was wrong before, now I'm better."
Stuart, and Ilkka, how about you guys go first, with your next paper? It is easy to say what other people should do in their papers.
Alas, not much call for that in mathematics - the failure section would be two lines: "if I made a math mistake in this paper, my results are wrong. If not, then not."
However, I am planning to write other papers where this would be relevant (next year, or even this one, hopefully). And I solemnly swear in the sight of Blog and in the presence of this blogregation, that when I do so, I will include a failure section.
And the people here are invited to brutally skewer or mock me if I don't do so.
Fine print at the end of the contract: Joint papers with others are excluded if my co-writer really objects.
Replies from: gwern
↑ comment by gwern · 2011-01-06T18:30:27.518Z · LW(p) · GW(p)
However, I am planning to write other papers where this would be relevant (next year, or even this one, hopefully). And I solemnly swear in the sight of Blog and in the presence of this blogregation, that when I do so, I will include a failure section.
Did you?
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2011-01-07T16:34:02.995Z · LW(p) · GW(p)
I did, in a paper that was rejected. The subsequent papers were not relevant (maths and biochemistry). But I will try and include this in the Oracle AI paper when it comes out.
Replies from: wnoise
↑ comment by wnoise · 2011-01-07T18:33:03.724Z · LW(p) · GW(p)
I did, in a paper that was rejected.
And you didn't resubmit it to other journals?
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2011-01-07T19:09:30.025Z · LW(p) · GW(p)
It was rambling and obsolete :-)
Rewriting it was more trouble than it was worth; you can find it at www.neweuropeancentury.org/GodAI.pdf if you want.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-04-11T08:39:30.000Z · LW(p) · GW(p)
Alas, not much call for that in mathematics - the failure section would be two lines: "if I made a math mistake in this paper, my results are wrong. If not, then not."
Actually, the failure section would be: "If my results are wrong, I made a math mistake in this paper. If I made no mistake in this paper, my results are correct."
comment by Stuart_Armstrong · 2007-04-11T08:41:24.000Z · LW(p) · GW(p)
IBM was estimating that they'd finish building their full-scale simulation of the human brain in 10-15 years. Having a simulation where parts of a brain can be selectively turned on or off at will or fed arbitrary sense input would seem very useful in the study of intelligence. Other projections I've seen (but which I now realize I never cited in the actual article) place the development of molecular nanotech within 20 years or so.
Then you could make an interim prediction on the speed of these developments. If IBM are predicting a simulation of the human brain in 10-15 years, what would have to be true in 5 years if this is on track?
Same thing for nanotechnology - if those projections are right, what sort of capacities would we have in 10 years' time?
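A crude way to operationalize such an interim check, assuming purely for illustration that progress toward the goal is roughly linear:

```python
def expected_progress(elapsed, horizon_low, horizon_high):
    """Fraction of the way to the goal expected after `elapsed` years, if on track."""
    return elapsed / horizon_high, elapsed / horizon_low

low, high = expected_progress(5, 10, 15)
print(f"{low:.0%}-{high:.0%}")  # 33%-50% of a 10-15 year forecast should be done at year 5
```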
But I completely agree with you about the unwisdom of using cash to back up these predictions. Since futurology speculations are more likely to be wrong than correct (because prediction is so hard, especially about the future), improving people's prediction skills is much more useful than punishing failure.
comment by Stuart_Armstrong · 2007-04-11T08:47:25.000Z · LW(p) · GW(p)
Alas, not much call for that in mathematics - the failure section would be two lines: "if I made a math mistake in this paper, my results are wrong. If not, then not."
Actually, the failure section would be: "If my results are wrong, I made a math mistake in this paper. If I made no mistake in this paper, my results are correct."
Indeed! :-) But I was taking "my results" to mean "the claim that I have proved the results of this paper." Mea culpa - very sloppy use of language.
comment by A1987dM (army1987) · 2013-08-01T17:56:09.561Z · LW(p) · GW(p)
I'm surprised that nobody in this comment thread mentioned fusion power.