Beware the science fiction bias in predictions of the future

post by Nikita Sokolsky (nikita-sokolsky) · 2024-08-19T05:32:47.372Z · LW · GW · 20 comments

This is a link post for https://nsokolsky.substack.com/p/beware-the-science-fiction-bias-in

Contents

  We all like a good story
  We want to live in modestly interesting times
  We want interesting things to happen during our lifetimes

We want to know the future, so we like reading predictions. These predictions vary significantly in how long their time horizons are:

There are three elements in how society evaluates predictions:

  1. Accuracy: easy for short time horizons, hard for long ones. Poker players will quickly lose their money if their judgement is poor. Weather models will fall out of favor if they consistently predict sun when it rains. But a VC fund can keep operating for a decade or two despite making glaring mistakes.
  2. Political bias: if you like nuclear power you’ll be inclined to believe predictions promising a fusion power revolution. If you dislike Trump you’ll be skeptical of a model projecting him as the winner of the next election.
  3. Quality of storytelling: a prediction that follows the patterns of good fiction is more interesting to read and would thus be more popular, just like Star Wars is always going to be more popular than a book on space engineering. I would call this the science fiction bias.

We all like a good story

Imagine the year is 2011 and you’re writing a movie about a future global pandemic. You hire a scientist known for helping eradicate smallpox (Larry Brilliant) as your consultant and overall try to make your story somewhat plausible. You title the movie ‘Contagion’ and release it with the following plot elements:

Nine years later a global pandemic happens in real life but (fortunately) the plotline is much more mundane:

‘Contagion’ made $136.5m at the box office back in the day. How much would it have made if it had been released with the same plotline as our 2020 pandemic? Probably a lot less. A movie where Paltrow sits at home coughing for a week or two and then goes back to work just doesn’t sound as exciting. Some other examples of this include:

We want to live in modestly interesting times

Terry Pratchett had a funny quote about a phrase (incorrectly) attributed to ancient Chinese writings:

I believe that the phrase is "may you live in interesting times," and it is the lowest in a trilogy of Chinese curses that continue "may you come to the attention of those in authority" and finish with "may the gods give you everything you ask for." I have no idea about its authenticity.

Does this mean we want to live in “boring” times? Imagine if an omniscient seer told you at the age of 18 that absolutely nothing “interesting” would happen in the next 100 years. Even if you were born in a highly prosperous city in Switzerland, would such knowledge make you happier? I would argue that most of us are hoping to live in reasonably interesting times, which biases us towards predictions that promise change rather than stagnation.

We want interesting things to happen during our lifetimes

Quite a few people are interested in working on AI safety. But let’s once again imagine we have access to an all-powerful seer who tells us that AGI will not be developed before the year 2150. How many people would still be interested in working on AI safety then? Probably a lot fewer. The same deal applies to every area of human innovation: Elon Musk wants to personally witness humans landing on Mars, quantum computing researchers hope to see a quantum computer built, and fusion companies want to start generating power now rather than in 100 years.

And this isn’t restricted to scientists, entrepreneurs, and engineers. Ukrainians want their territory liberated now rather than in 50 years. Americans worry about their personal Social Security checks much more than they worry about future fiscal troubles. Climate change advocates sound a lot more convincing when they warn of something dramatic happening now rather than sometime in the next century.


To summarize, I’d like to propose a three-pronged test for judging how much science fiction bias a prediction might carry:

  1. Does the prediction sound like it could be the plot of a successful book or movie?
  2. If the prediction came true, would you say you’re living in “interesting” times?
  3. Does it promise things happening in the next 3-4 decades, or does it talk about a distant future?

If the answer is yes to all of the above, I’d be a little more skeptical. And conversely, when writing predictions, I’d try to judge whether I’m trying to write a good story or actually trying to predict the future. Alas, reality is often disappointing.

20 comments

Comments sorted by top scores.

comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2024-08-19T14:22:28.233Z · LW(p) · GW(p)

For the arrival of AGI, the answers to your three questions are YES, YES, YES.

You should be very skeptical - as I was when I first encountered the rationalists. 

Yet the evidence that AGI will likely happen in our lifetime is overwhelming. 

Suspiciously so. 

Sometimes extraordinary things do happen. Sometimes dozens of bits of evidence are right in front of your lying eyes. We are already living in an extraordinary time. You only have to see what's right in front of you. 

I suspect anthropics but chalk it up to whatever god you pray to. 

Replies from: Sherrinford
comment by Sherrinford · 2024-08-19T17:26:10.844Z · LW(p) · GW(p)

Actually, lesswrong AGI warnings don't sound like they could be the plot of a successful movie. In a movie, John Connor organizes humanity to fight against skynet. That does not seem plausible with LW-typical nanobot scenarios.

Replies from: ChristianKl, D0TheMath, nikita-sokolsky
comment by ChristianKl · 2024-08-20T14:28:39.321Z · LW(p) · GW(p)

To me, nanobots don't seem like they are central to LW stories about AI risk.

If you would ask people on an LW census "If AI causes the extinction of humans, how likely do you think that nanobots play a huge part in that", I would expect the median percentage to be single digits or lower. 

Replies from: Sherrinford
comment by Sherrinford · 2024-08-20T14:42:31.096Z · LW(p) · GW(p)

I agree that Nanobots are not a necessary part of AI takeover scenarios. However, I perceive them as a very illustrative kind of "the AI is smart enough for plans that make resistance futile and make AI takeover fast" scenarios.

The word "typical" is probably misleading, sorry; most scenarios on LW do not include Nanobots. OTOH, LW is a place where such scenarios are at least taken seriously. 

So p(scenario contains Nanobots|LW or rationality community is the place of discussion of the scenario) is probably not very high, but p(LW or rationality community is the place of discussion of the scenario|scenario contains Nanobots) probably is...?

comment by Garrett Baker (D0TheMath) · 2024-08-19T19:26:03.444Z · LW(p) · GW(p)

This seems pretty false. There is at least one pretty successful fiction book written about the intelligence explosion (which, imo, would have been better if in subsequent books gur uhznaf qvqa'g fheivir).

Replies from: Sherrinford
comment by Sherrinford · 2024-08-19T21:28:00.089Z · LW(p) · GW(p)

Gnargh. Of course someone has a counterexample. But I don't think that is the typical lw AGI warning scenario. However, this could become a "no true Scotsman" discussion...

Replies from: D0TheMath
comment by Garrett Baker (D0TheMath) · 2024-08-19T22:16:11.269Z · LW(p) · GW(p)

You mean like Gwern's It Looks Like You’re Trying To Take Over The World? I think that made a good short story. Though I don't think it would make a good movie, since there's little in the way of cool visuals.

Greg Egan's Crystal Nights is also more similar to the usual way things are imagined, though uhznavgl vf fnirq ol gur hayvxryl qrhf rk znpuvan bs vg orvat rnfvre sbe gur fvzhyngrq pvivyvmngvba gb znxr n cbpxrg qvzrafvba guna gnxr bire gur jbeyq.

Crystal Nights is also very similar to Eliezer's That Alien Message [LW · GW] / Alicorn's Starwink.

Edit: There are also likely tons more such books written by Ted Chiang, Vernor Vinge, Greg Egan, and others, which I haven't read yet so can't list with confidence and without spoilers to myself.

Replies from: Sherrinford, Rana Dexsin
comment by Sherrinford · 2024-08-20T06:00:38.712Z · LW(p) · GW(p)

Thanks for the list! Yes, it is possible to imagine stories that involve a superintelligence.

I could not imagine a movie or successful story where everybody is killed by an AGI within seconds because it prepared this in secrecy, nobody realized it, and nobody could do anything about it. It seems to lack a happy ending, or even a story.

However, I am glad to be corrected, and will check the links, the stories will surely be interesting!

comment by Rana Dexsin · 2024-08-20T01:45:36.709Z · LW(p) · GW(p)

Ted Chiang's Chrystal Nights

Minor corrections: “Crystal Nights” does not have an H in the first word and is by Greg Egan. (The linked copy is on his own website, in fact, which also includes a number of his other works.)

Replies from: D0TheMath
comment by Garrett Baker (D0TheMath) · 2024-08-20T03:47:33.522Z · LW(p) · GW(p)

Thanks! I remember consciously thinking both those things, but somehow did the opposite of that.

comment by Nikita Sokolsky (nikita-sokolsky) · 2024-08-19T17:32:40.772Z · LW(p) · GW(p)

Nanobots destroying all humans at once are indeed poor sci-fi. But how much of this story's popularity hinges on it happening within our lifetimes?

Replies from: Sherrinford
comment by Sherrinford · 2024-08-19T18:31:51.590Z · LW(p) · GW(p)

I don't understand this question. Why would the answer to that question matter? (In your post, you write "If the answer is yes to all of the above, I’d be a little more skeptical.") Also, the "story" is not really popular. Outside of LessWrong discussions and a few other places, people seem to think that every expectation about the future that involves a superintelligent agentic AGI sounds like science fiction and therefore does not have to be taken seriously.

Replies from: nikita-sokolsky
comment by Nikita Sokolsky (nikita-sokolsky) · 2024-08-19T22:12:58.161Z · LW(p) · GW(p)

Sorry for not being clear. My question was whether LW really likes the nanobot story because we think it might happen within our own lifetimes. If we knew for a fact that human-destroying-nanobots would take another 100 years to develop, would discussing them still be just as interesting?

Side note: I don't think the "sci-fi bias" concept is super coherent in my head. I wrote this post as best as I could, but I fully acknowledge that it's not fully fleshed out.

Replies from: Sherrinford
comment by Sherrinford · 2024-08-20T06:27:46.206Z · LW(p) · GW(p)

Yes, people care about things that are expected to happen today rather than in 1,000 years or later. That is a problem that people fighting against climate change have been pointing out for a long time. At the same time, with respect to AI, my impression is that many people do not react to developments that will quickly have strong implications, while some others write a lot about caring about humanity's long-term future.

comment by Templarrr (templarrr) · 2024-08-19T09:08:08.984Z · LW(p) · GW(p)

If the answer is yes to all of the above

Point 2 needs rephrasing. 


"Does it sound exciting or boring?" "Yes"

comment by Noosphere89 (sharmake-farah) · 2024-08-20T15:30:25.274Z · LW(p) · GW(p)

This link is also relevant. Incidentally, this is why I'm a priori skeptical of misaligned AI, even without evidence against it: it fits way too well with story logic. In particular, misaligned AIs provide immediate conflict, which is valuable in a story, whereas safe and/or aligned AIs provide less conflict and fewer story opportunities.

https://www.understandingai.org/p/predictions-of-ai-doom-are-too-much

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2024-08-22T03:13:58.070Z · LW(p) · GW(p)

There are other ways to be skeptical about fictional paradigms regarding AI. For example, a common paradigm is that AIs escape human control, and then there is a long struggle. An alternative paradigm is that once conflict emerges, the AIs win quickly and humans are permanently marginalized thereafter. 

comment by Valdes (Cossontvaldes) · 2024-08-20T08:04:28.938Z · LW(p) · GW(p)

And even when the AGI does do work (The Doctor), it’s been given human-like emotions. People don’t want to read a story where the machines do all the work and the humans are just lounging around.

I am taking the opportunity to recommend the Culture series by Iain M. Banks; here is a good entry point to the series, and the books can be read in almost any order. It's not that they find no space for human-like actors, but I still think these books show, by being reasonably popular, that there is an audience for stories about civilizations where AGI does all the work.

Of course, your original point still stands if you say "most people" instead.