Beware the science fiction bias in predictions of the future
post by Nikita Sokolsky (nikita-sokolsky) · 2024-08-19T05:32:47.372Z · LW · GW · 20 comments
This is a link post for https://nsokolsky.substack.com/p/beware-the-science-fiction-bias-in
Contents
- We all like a good story
- We want to live in modestly interesting times
- We want interesting things to happen during our lifetimes
We want to know the future, so we like reading predictions. These predictions can significantly vary in how long their time horizon is:
- Seconds to Minutes: poker players, high-frequency traders
- Minutes to Hours: day traders, in-game sports betting
- Days to Weeks: weather forecasts, outcomes of sports tournaments
- Months to Years: hedge funds, election outcomes, venture capitalists
- Decades to Centuries: technological trends, climate change projections
- Centuries to Millennia: rise and fall of countries and civilizations
- Millions to Billions of Years: life cycle of stars, ultimate fate of the Solar System and our universe
There are three elements in how society evaluates predictions:
- Accuracy: easy for short time horizons, hard for long ones. Poker players will quickly lose their money if their judgement is poor. Weather models will go out of favor if they consistently predict sun when it rains. But a VC fund can continue operating for a decade or two despite making glaring mistakes.
- Political bias: if you like nuclear power you’ll be inclined to believe predictions promising a fusion power revolution. If you dislike Trump you’ll be skeptical of a model projecting him as the winner of the next election.
- Quality of storytelling: a prediction that follows the patterns of good fiction is more interesting to read and would thus be more popular, just like Star Wars is always going to be more popular than a book on space engineering. I would call this the science fiction bias.
We all like a good story
Imagine the year is 2011 and you’re writing a movie about a future global pandemic. You hire a scientist known for helping eradicate smallpox (Larry Brilliant) as your consultant and overall try to make your story somewhat plausible. You title the movie ‘Contagion’ and release it with the following plot elements:
- The virus has a fatality rate of 25-30% in every age group.
- Infection develops extremely rapidly, with people dying within 48 hours after being infected.
- It infects a large number of people in every country within weeks after the first human infection.
- Society comes close to full collapse; people are scared to leave their homes.
- The US enforces a very harsh quarantine in every city.
- And, of course, patient zero is Gwyneth Paltrow eating at a fancy restaurant.
Nine years later a global pandemic happens in real life but (fortunately) the plotline is much more mundane:
- The fatality rate ranged from 0.005% to 1% for people under the age of 70.
- It usually took at least 2 weeks for someone to die from the virus after exposure.
- The first person was infected in late November 2019 and it took four months for the virus to spread far enough to trigger lockdowns globally.
- There were some shortages of toilet paper and other goods, some degree of panic buying, and people tried to stay at home, but the situation was far less dramatic than in the movie.
- The US didn’t enforce any lockdowns seriously. You could freely travel around in the lower 48 states and private gatherings were (almost) never prosecuted.
- Patient zero was an anonymous wet market customer (Gwyneth Paltrow did get infected early in the pandemic, though she survived just fine).
‘Contagion’ made $136.5m at the box office. How much would it have made if it had been released with the same plotline as our 2020 pandemic? Probably a lot less. A movie where Paltrow sits at home coughing for a week or two and then goes back to work just doesn’t sound as exciting. Some other examples of this include:
- Star Trek ships having AGI but still needing humans in charge. And even when the AGI does do work (The Doctor), it’s been given human-like emotions. People don’t want to read a story where the machines do all the work and the humans are just lounging around. Katja Grace had a nice post [LW · GW] about our intrinsic desire to be useful, which explains why an AGI Captain Kirk would not sit well with viewers.
- In The Matrix, the AI is neither fully aligned (helping humans live in a utopia), nor is it fully misaligned (destroying everyone immediately, including Zion). It's balanced just right to make for a cool story.
- Biden becoming convinced AI is a major risk after watching Mission Impossible.
- The Kennedy assassination conspiracy. Lee Harvey Oswald’s actual story is very much mundane compared to the grand conspiracies invented around him. For a similar reason everyone very quickly forgot about the guy who attempted to shoot Trump: it’s just too boring of a story.
- Almost all ‘Facebook’ stories about Albert Einstein. He wasn’t a poor student, didn’t write witty quips about marriage and didn’t have his driver give a speech on his behalf.
- The boiling frog story is a myth: frogs will jump out no matter how slowly you heat the water. So is the story about the monkeys, the bananas and the ladder used in hundreds of business books.
- A German book for young adults about the dangers of nuclear energy was partially credited for convincing the German public to abandon nuclear power.
We want to live in modestly interesting times
Terry Pratchett had a funny quote about a phrase (incorrectly) attributed to ancient Chinese writings:
I believe that the phrase is "may you live in interesting times," and is the lowest in a trilogy of Chinese curses that continue "may you come to the attention of those in authority" and finish with "may the gods give you everything you ask for." I have no idea about its authenticity.
Does this mean we want to live in “boring” times? Imagine if an omniscient seer told you at the age of 18 that absolutely nothing “interesting” would happen in the next 100 years. Even if you were born in a highly prosperous city in Switzerland, would such knowledge make you happier? I would argue that most of us are hoping to live in reasonably interesting times, which biases us towards predictions that promise change rather than stagnation.
We want interesting things to happen during our lifetimes
Quite a few people are interested in working on AI safety. But let’s once again imagine we have access to an all-powerful seer who tells us that AGI will not be developed before the year 2150. How many people would still be interested in working on AI safety then? Probably a lot fewer. The same deal applies to every area of human innovation: Elon Musk wants to personally witness humans landing on Mars, quantum computing researchers hope to see a quantum computer built, fusion companies want to start generating power now rather than in 100 years.
And this isn’t restricted to scientists, entrepreneurs and engineers. Ukrainians want their territory liberated now rather than in 50 years. Americans worry about their personal Social Security checks much more than they worry about future fiscal troubles. Climate change advocates sound a lot more convincing if they promise something dramatic to happen now rather than sometime in the next century.
To summarize, I’d like to propose a three-pronged test for judging how much science fiction bias a prediction might have:
- Does the prediction sound like it could be the plot of a successful book or movie?
- If the prediction came true, would you say you’re living in “interesting” times?
- Does it promise things to happen in the next 3-4 decades or does it talk about a distant future?
If the answer is yes to all of the above, I’d be a little more skeptical. And conversely when writing predictions, I’d try to judge whether I’m trying to write a good story or trying to actually predict the future. Alas, reality is often disappointing.
20 comments
Comments sorted by top scores.
comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2024-08-19T14:22:28.233Z · LW(p) · GW(p)
For the arrival of AGI, the answer to your three questions is YES, YES, YES.
You should be very skeptical - as I was when I first encountered the rationalists.
Yet the evidence that AGI will likely happen in our lifetime is overwhelming.
Suspiciously so.
Sometimes extraordinary things do happen. Sometimes dozens of bits of evidence are right in front of your lying eyes. We are already living in an extraordinary time. You only have to see what's right in front of you.
I suspect anthropics but chalk it up to whatever god you pray to.
Replies from: Sherrinford
↑ comment by Sherrinford · 2024-08-19T17:26:10.844Z · LW(p) · GW(p)
Actually, LessWrong AGI warnings don't sound like they could be the plot of a successful movie. In a movie, John Connor organizes humanity to fight against Skynet. That does not seem plausible with LW-typical nanobot scenarios.
Replies from: ChristianKl, D0TheMath, nikita-sokolsky
↑ comment by ChristianKl · 2024-08-20T14:28:39.321Z · LW(p) · GW(p)
To me, nanobots don't seem like they are central to LW stories about AI risk.
If you would ask people on an LW census "If AI causes the extinction of humans, how likely do you think that nanobots play a huge part in that", I would expect the median percentage to be single digits or lower.
Replies from: Sherrinford
↑ comment by Sherrinford · 2024-08-20T14:42:31.096Z · LW(p) · GW(p)
I agree that nanobots are not a necessary part of AI takeover scenarios. However, I perceive them as a very illustrative kind of "the AI is smart enough for plans that make resistance futile and make AI takeover fast" scenario.
The word "typical" is probably misleading, sorry; most scenarios on LW do not include nanobots. OTOH, LW is a place where such scenarios are at least taken seriously.
So p(scenario contains Nanobots|LW or rationality community is the place of discussion of the scenario) is probably not very high, but p(LW or rationality community is the place of discussion of the scenario|scenario contains Nanobots) probably is...?
↑ comment by Garrett Baker (D0TheMath) · 2024-08-19T19:26:03.444Z · LW(p) · GW(p)
This seems pretty false. There is at least one pretty successful fiction book written about the intelligence explosion (which, imo, would have been better if in subsequent books gur uhznaf qvqa'g fheivir).
Replies from: Sherrinford
↑ comment by Sherrinford · 2024-08-19T21:28:00.089Z · LW(p) · GW(p)
Gnargh. Of course someone has a counterexample. But I don't think that is the typical LW AGI warning scenario. However, this could become a "no true Scotsman" discussion...
Replies from: D0TheMath
↑ comment by Garrett Baker (D0TheMath) · 2024-08-19T22:16:11.269Z · LW(p) · GW(p)
You mean like Gwern's It Looks Like You’re Trying To Take Over The World? I think that made a good short story. Though I don't think it would make a good movie, since there's little in the way of cool visuals.
Greg Egan's Crystal Nights is also more similar to the usual way things are imagined, though uhznavgl vf fnirq ol gur hayvxryl qrhf rk znpuvan bs vg orvat rnfvre sbe gur fvzhyngrq pvivyvmngvba gb znxr n cbpxrg qvzrafvba guna gnxr bire gur jbeyq.
Crystal Nights is also very similar to Eliezer's That Alien Message [LW · GW] / Alicorn's Starwink.
Edit: There are also likely tons more such books written by Ted Chiang, Vernor Vinge, Greg Egan, and others, which I haven't read yet so can't list with confidence and without spoilers to myself.
Replies from: Sherrinford, Rana Dexsin
↑ comment by Sherrinford · 2024-08-20T06:00:38.712Z · LW(p) · GW(p)
Thanks for the list! Yes, it is possible to imagine stories that involve a superintelligence.
I could not imagine a movie or successful story where everybody is killed by an AGI within seconds because it prepared everything in secrecy, nobody realized it, and nobody could do anything about it. That seems to lack a happy ending, and even a story.
However, I am glad to be corrected and will check the links; the stories will surely be interesting!
↑ comment by Rana Dexsin · 2024-08-20T01:45:36.709Z · LW(p) · GW(p)
Ted Chiang's Chrystal Nights
Minor corrections: “Crystal Nights” does not have an H in the first word and is by Greg Egan. (The linked copy is on his own website, in fact, which also includes a number of his other works.)
Replies from: D0TheMath
↑ comment by Garrett Baker (D0TheMath) · 2024-08-20T03:47:33.522Z · LW(p) · GW(p)
Thanks! I remember consciously thinking both those things, but somehow did the opposite of that.
↑ comment by Nikita Sokolsky (nikita-sokolsky) · 2024-08-19T17:32:40.772Z · LW(p) · GW(p)
Nanobots destroying all humans at once are indeed poor sci-fi. But how much of this story's popularity hinges on it happening within our lifetimes?
Replies from: Sherrinford
↑ comment by Sherrinford · 2024-08-19T18:31:51.590Z · LW(p) · GW(p)
I don't understand this question. Why would the answer to that question matter? (In your post, you write "If the answer is yes to all of the above, I’d be a little more skeptical.") Also, the "story" is not really popular. Outside of LessWrong discussions and few other places, people seem to think that every expectation about the future that involves a superintelligent agentic AGI sounds like science fiction and therefore does not have to be taken seriously.
Replies from: nikita-sokolsky
↑ comment by Nikita Sokolsky (nikita-sokolsky) · 2024-08-19T22:12:58.161Z · LW(p) · GW(p)
Sorry for not being clear. My question was whether LW really likes the nanobot story because we think it might happen within our own lifetimes. If we knew for a fact that human-destroying-nanobots would take another 100 years to develop, would discussing them still be just as interesting?
Side note: I don't think the "sci-fi bias" concept is super coherent in my head. I wrote this post as best as I could, but I fully acknowledge that it's not fully fleshed out.
Replies from: Sherrinford
↑ comment by Sherrinford · 2024-08-20T06:27:46.206Z · LW(p) · GW(p)
Yes, people care about things that are expected to happen today rather than in 1,000 years or later. That is a problem that people fighting against climate change have been pointing out for a long time. At the same time, with respect to AI, my impression is that many people do not react to developments that will quickly have strong implications, while some others write a lot about caring about humanity's long-term future.
comment by Garrett Baker (D0TheMath) · 2024-08-19T13:47:31.594Z · LW(p) · GW(p)
See also: Tyler Cowen’s Be Suspicious of Stories
Replies from: nikita-sokolsky
↑ comment by Nikita Sokolsky (nikita-sokolsky) · 2024-08-19T22:13:52.845Z · LW(p) · GW(p)
Thank you! Here's the transcript: https://www.lesswrong.com/posts/4kphivjxngJmEdWsN/transcript-tyler-cowen-on-stories [LW · GW]
comment by Templarrr (templarrr) · 2024-08-19T09:08:08.984Z · LW(p) · GW(p)
If the answer is yes to all of the above
Point 2 needs rephrasing.
"Does it sound exciting or boring?" "Yes"
comment by Noosphere89 (sharmake-farah) · 2024-08-20T15:30:25.274Z · LW(p) · GW(p)
This link is also relevant. BTW, this is part of why I'm a priori skeptical of misaligned AI, even without evidence against it: it fits way too well with story logic. In particular, one thing that misaligned AIs provide in terms of story value is an immediate conflict, whereas safe and/or aligned AIs provide less conflict and fewer story opportunities.
https://www.understandingai.org/p/predictions-of-ai-doom-are-too-much
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter · 2024-08-22T03:13:58.070Z · LW(p) · GW(p)
There are other ways to be skeptical about fictional paradigms regarding AI. For example, a common paradigm is that AIs escape human control, and then there is a long struggle. An alternative paradigm is that once conflict emerges, the AIs win quickly and humans are permanently marginalized thereafter.
comment by Valdes (Cossontvaldes) · 2024-08-20T08:04:28.938Z · LW(p) · GW(p)
And even when the AGI does do work (The Doctor), it’s been given human-like emotions. People don’t want to read a story where the machines do all the work and the humans are just lounging around.
I am taking the opportunity to recommend the Culture series by Iain M. Banks; here is a good entry point to the series, and the books can be read in almost any order. It's not that they find no space for human-like actors, but I still think these books show, by being reasonably popular, that there is an audience for stories about civilizations where AGI does all the work.
Of course, your original point still stands if you say "most people" instead.