Futurism's Track Record

post by lukeprog · 2014-01-29T20:27:24.738Z · LW · GW · Legacy · 17 comments


It would be nice (and expensive) to get a systematic survey on this, but my impressions [1] — after tracking down lots of past technology predictions, reading histories of technological speculation and invention, and reading about “elite common sense” at various times in the past — are that:

Naturally, as someone who thinks it’s incredibly important to predict the long-term future as well as we can while also avoiding overconfidence, I try to put myself in a position to learn what past futurists were doing right, and what they were doing wrong. For example, I recommend: Be a fox not a hedgehog. Do calibration training. Know how your brain works. Build quantitative models even if you don’t believe the outputs, so that specific pieces of the model are easier to attack and update. Have broad confidence intervals over the timing of innovations. Remember to forecast future developments by looking at trends in many inputs to innovation, not just the “calendar years” input. Use model combination. Study history and learn from it. Etc.

Anyway: do others who have studied the history of futurism, elite common sense, innovation, etc. have different impressions about futurism’s track record? And, anybody want to do a PhD thesis examining futurism’s track record? Or on some piece of it, à la this or this or this? :)


  1. I should explain one additional piece of reasoning which contributes to my impressions on the matter. How do I think about futurist predictions of technologies that haven’t yet been definitely demonstrated to be technologically feasible or infeasible? For these, I try to use something like the truth-tracking fields proxy. E.g. very few intellectual elites (outside Turing, von Neumann, Good, etc.) in 1955 thought AGI would be technologically feasible. By 1980, we’d made a bunch of progress in computing and AI and neuroscience, and a much greater proportion of intellectual elites came to think AGI would be technologically feasible. Today, I think the proportion is even greater. The issue hasn’t been “definitely decided” yet (from a social point of view), but things are strongly trending in favor of Good and Turing, and against (e.g.) Dreyfus.  ↩

17 comments

Comments sorted by top scores.

comment by [deleted] · 2014-01-30T01:22:16.382Z · LW(p) · GW(p)

Unmentioned in your post but personally more problematic is failing to predict something. Predicting X and getting less than X, well, okay. Failing to predict at all things like home computers, the web, the fall of the Soviet Union, antibiotics, now those are serious black eyes for futurism.

Replies from: advancedatheist
comment by advancedatheist · 2014-01-30T15:13:22.867Z · LW(p) · GW(p)

Failing to predict at all things like home computers

Why does this misconception persist? The inventor/science fiction writer Murray Leinster predicted networked home computers, a Google-like search engine, voice interfaces and an accidentally emerging AI in his well known story, "A Logic Named Joe," published in 1946:

http://www.baen.com/chapters/W200506/0743499107___2.htm

Replies from: gwern
comment by gwern · 2014-01-31T03:13:46.867Z · LW(p) · GW(p)

Did Leinster publish in academic journals and reasonably count under a category like futurism? Or in pulp sci-fi magazines and fiction?

Replies from: Alsadius
comment by Alsadius · 2014-02-01T03:45:17.811Z · LW(p) · GW(p)

Pulp sci-fi is closer to what I think of when I hear "futurism" than anything published in a reputable journal.

Replies from: Vulture
comment by Vulture · 2014-02-04T15:42:47.943Z · LW(p) · GW(p)

But so much pulp sci-fi was published, and in such variety, that one could find a plausible "fit" for pretty much any conceivable future invention.

Replies from: Alsadius
comment by Alsadius · 2014-02-04T23:05:32.852Z · LW(p) · GW(p)

Respectable predictions are even more common, though, so I'm not sure how meaningful either one can be.

Replies from: Vulture
comment by Vulture · 2014-02-05T17:20:11.282Z · LW(p) · GW(p)

It feels icky to claim that something was "predicted" by a fictional story that made no claim to being serious prediction, though.

comment by Lalartu · 2014-01-31T13:39:46.805Z · LW(p) · GW(p)

I think the best summary on technological predictions is Your Flying Car Awaits by Paul Milo. In short: most predictions are wrong, no matter who made them and in what technical field. Overoptimistic predictions are more common than overly conservative ones. Also, predictions made in the late 19th and early 20th centuries are vastly more accurate than those from 1950–1970.

As for predicting things like consequences and commercial sense (given that the tech is feasible), the problem is that they depend on a lot of exact implementation details and outside factors.

One good example is airships. The idea was first mentioned in the late 17th century, the first prototype flew in the mid-19th, and mass production started just before WW1. During that time, many different authors made many predictions about how airships would be used and would change the world. They were all totally wrong (except maybe Jules Verne). Airships cannot capture or destroy cities, cannot sink navies, are not practical for carrying paratroopers, and are useless as fighters. In civilian use an airship is just a flying catastrophe, about 1,000 times more dangerous than an airplane of the same tech level, expensive, and ineffective. This all comes from details like airframe strength and wind drag, which are hard to predict. The same thing, just lesser in magnitude, happened with civilian nuclear ships and supersonic airliners.

Also, there is no good way to predict political and legislative changes. It could have happened that, for example, medicine wasn't so heavily regulated but the Internet was banned from the very beginning.

Replies from: lukeprog, private_messaging
comment by lukeprog · 2014-02-01T06:12:38.992Z · LW(p) · GW(p)

Thanks, I don't think I've seen that book before.

comment by private_messaging · 2014-02-02T08:22:34.605Z · LW(p) · GW(p)

Yeah. Non-experts are more numerous and make predictions which are far more random. Essentially my model of non-expert prediction is "a bunch of people take more or less outdated expert predictions and add random jitter".

comment by Stefan_Schubert · 2014-01-31T23:20:02.973Z · LW(p) · GW(p)

Interesting idea. I think one thing you have to do, though, is to be very systematic in your choice of which past predictions you study. Otherwise there is a risk you end up focusing on those who were right, or those who were spectacularly wrong, etc.

I guess in general it is easier to predict technological progress than its impact on society, and of course generally easier to predict incremental changes of existing technologies than qualitative shifts.

A major problem is, I guess, inventions, or aspects of inventions, that people have a hard time even conceiving of at present.

Meteorologists are quite good at predicting the weather in the near future with some accuracy, but far worse when you move ten days ahead. Similarly, the further you go into the future, the harder it obviously becomes to predict. That said, general trends can be predicted both weather- and climate-wise, even though exact dates cannot be given.

comment by somervta · 2014-01-30T05:25:51.662Z · LW(p) · GW(p)

Anyway: do others who have studied the history of futurism, elite common sense, innovation, etc. have different impressions about futurism’s track record? And, anybody want to do a PhD thesis examining futurism’s track record? Or on some piece of it, à la this or this or this? :)

What kind of PhD program would you do this in? I'm guessing econ, just because of its broadness.

Replies from: IlyaShpitser, Lumifer, fowlertm
comment by IlyaShpitser · 2014-01-30T12:33:21.166Z · LW(p) · GW(p)

History :).

comment by Lumifer · 2014-01-30T16:48:50.032Z · LW(p) · GW(p)

What kind of PhD program would you do this in?

Some kind of humanities, I guess. History, sociology...

comment by fowlertm · 2014-01-30T16:25:42.941Z · LW(p) · GW(p)

I too think it would be economics, though probably of a more philosophical type, like what they do at the London School of Economics.

And yes, I'd be very interested in doing something like that :)

comment by advancedatheist · 2014-01-29T23:14:04.699Z · LW(p) · GW(p)

Transhumanists need to stop setting arbitrary dates within current life expectancies for when we allegedly "become immortal." These forecasts make no logical sense, and you wind up sounding like asses for publishing them.

For example James D. Miller in Singularity Rising writes:

But now if you die before 2045, you might miss out on millions of years of life. The high cost of death has made survival to 2045 your top priority.

Uh, guys, plenty of people alive in 2014 will probably live another 31 years anyway through natural maturation and aging; they won't mysteriously become capable of living for "millions of years" by surviving to January 1, 2045.

If you want to set a date which shows some ambition and at least makes more sense than implying that living another 31 years = "living forever," pick one in the 23rd century like, say, 2245. If you can survive to 2245 in good shape, you might have successfully overcome major hurdles to your radical life extension. ("Past performance doesn't guarantee future results.")

Replies from: James_Miller
comment by James_Miller · 2014-01-30T17:15:55.073Z · LW(p) · GW(p)

This is way out of context.

This is from a subsection of my book that assumes someone gives you a magical scroll containing numerous predictions that come true, along with a prediction that a singularity will occur in 2045. It was obviously a thought experiment about how you would behave if you somehow knew there would be a singularity in 2045, not an assertion that the singularity will happen in 2045. Indeed, I used the scroll device precisely so the reader wouldn't think I was predicting a 2045 singularity.