Parable of the flooding mountain range

post by RorscHak · 2019-03-29T15:07:02.265Z · LW · GW · 9 comments

A mountaineer is hiking in a mountain range. There is a thick fog so he cannot see beyond a few meters.

It is raining heavily and the mountain range is flooding; the mountaineer has to climb to a high place so he won't get washed away.

He climbs towards the highest point in his sight, and whenever he sees a higher point he changes direction towards it.

Now the mountaineer is standing on top of a hill; to his knowledge every direction leads downwards, and there is no higher peak in sight. He sits on the hilltop, anxiously watching the rain and hearing the water rising.

The water floods the hill and drowns him, washing his dead body into the abyss.

Is he on the highest peak of the mountain range? Unlikely.

Can he ever get there if he cannot see beyond a few meters? Very unlikely.


A band of mountaineers are hiking in a mountain range. There is a thick fog so they cannot see beyond a few meters.

It is raining heavily and the mountain range is flooding; the mountaineers have to climb to a high place so they won't get washed away.

They elect the most experienced mountaineer as their leader: in the fog he can see a couple of meters further than everybody else, so he is the best guide anyone could ask for.

The band all follow him onto a hilltop. Every direction leads downwards, so they stay there, anxiously watching the rain and hearing the rising waters.

Until the water floods the hilltop and drowns them, washing their dead bodies into the abyss.

This band is functionally the same as a lone mountaineer.


A band of mountaineers are hiking in a mountain range. There is a thick fog so they cannot see beyond a few meters.

It is raining heavily and the mountain range is flooding; the mountaineers have to climb to a high place so they won't get washed away.

They spread out, walking away from each other, and each goes on to search for a place to stay.

They end up on different hilltops, some higher than others. They individually yet simultaneously watch the rain and hear the water rising, anxiously.

Until the water floods some hills and drowns many mountaineers, washing their corpses into the abyss. A few mountaineers end up on the higher peaks and are left unharmed by the flood.

After the flood recedes and the fog dissipates, they go down to search for their fallen friends, to mourn them and to bury them.

And of course, to scavenge supplies from their dead bodies.

Is this the best strategy for the band, if it wants to maximise the chance that someone survives the flood?


It's my first time posting here, so please give me some support and suggestions if you can.

Ask me any questions you have if my story seems unclear/vague/confusing, because it probably is. I'm not a good writer and I didn't really have a clear idea of what I wanted to write. It initially started as an analogy about evolution, but perhaps it also works as a vague discussion of some future choices facing humanity.


Comments sorted by top scores.

comment by johnswentworth · 2019-03-29T21:17:07.119Z

Reading this, I figured you were talking about local-descent-type optimization algorithms, i.e. gradient descent and variants.

From that perspective, there are two really important pieces missing from these analogies:

  • The mountaineers can presumably backtrack, at least as long as the water remains low enough
  • The mountaineers can presumably communicate

With backtracking, even a lone mountaineer can do better sometimes (and never any worse) by continuing to explore after reaching the top of a hill - as long as he keeps an eye on the water, and makes sure he has time to get back. In an algorithmic context, this just means keeping track of the best point seen, while continuing to explore.
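
In code, the "keep track of the best point seen while continuing to explore" idea might look like the sketch below. It's a toy 1-D landscape with invented names and parameters, not anything from the post:

```python
import random

def explore_with_memory(height, x0, step=0.1, n_steps=200, seed=0):
    """Greedy local search that also remembers the best point ever seen.

    `height` (the landscape function) and every parameter here are
    illustrative assumptions, not anything specified in the parable.
    """
    rng = random.Random(seed)
    x = x0
    best_x, best_h = x0, height(x0)
    for _ in range(n_steps):
        # Look a few "meters" around and move to the best visible neighbour,
        # even if that means walking downhill -- the "keep exploring" part.
        candidates = [x + step, x - step, x + rng.uniform(-step, step)]
        x = max(candidates, key=height)
        if height(x) > best_h:
            best_x, best_h = x, height(x)  # remember the best hilltop so far
    return best_x, best_h
```

As long as backtracking is possible, this never does worse than plain hill climbing: the climber can always return to the best remembered point, water permitting.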

With backtracking and communication, the mountaineers can each go explore independently, then all come back and compare notes (again keeping track of water etc), all go to the highest point found, and maybe even repeat that process. In an algorithmic context, this just means spinning off some extra threads, then taking the best result found by any of them.
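
The explore-then-compare-notes strategy, in the same toy setting (sequential rather than actually threaded, for simplicity; the names are again invented):

```python
def multi_start_climb(height, starts, step=0.1, max_steps=1000):
    """Each mountaineer climbs greedily from their own starting point;
    then everyone compares notes and regroups at the best hilltop found.
    Toy sketch: `height` and all parameters are illustrative assumptions."""
    results = []
    for x0 in starts:
        x = x0
        for _ in range(max_steps):
            best_neighbour = max([x + step, x - step], key=height)
            if height(best_neighbour) <= height(x):
                break                     # local hilltop reached
            x = best_neighbour
        results.append((height(x), x))
    return max(results)                   # (best height, its location)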

In an evolutionary context, those pieces are probably not so relevant.

Replies from: RorscHak
comment by RorscHak · 2019-03-30T02:39:42.917Z

Yes, those two pieces can change the situation dramatically (I have tried writing another parable that includes them, but found it a bit difficult).

I'm pondering what the best strategy is with communication. Initially I thought I could spread them out, with each mountaineer knowing the location/height of the other mountaineers within a given radius (significantly larger than the visibility in the fog), and add that information into their "move towards the greatest height" algorithm. That might work, but I cannot rigorously show how useful it would be.

Regardless, I think evolution can't do much better than the third scenario: it doesn't seem to backtrack, and it most likely doesn't communicate.

There is also the fact that my analogy fails to consider that the "environment" changes over time, so the "mountain landscape" will not stay the same when you come back to a place after leaving it. This probably prevents backtracking, but doesn't change the outcome that you'd most likely be stuck on a hilltop that isn't optimal.

Replies from: Pattern
comment by Pattern · 2019-03-31T22:22:41.851Z

The environment may change over time, but 1) mountains change slowly, and 2) that's what brains are for. Even if "evolution doesn't pick up on it", how much will the height of a mountain (and which mountain is the tallest) naturally change over the course of your lifetime?

comment by Dagon · 2019-03-29T17:03:42.321Z

Even if you don't have exact values, it's possible to model the distribution of peak heights and flood depths, to determine how many peaks you'd need to see before reaching a given confidence that you're high enough. Your search mechanism then becomes "don't climb any peak entirely - set a path to see as many peaks as possible before committing to one, then climb the best one you know". Or, if the flood is slow, you might get stuck on a peak during exploration, so it reduces to the secretary problem.
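
The "how many peaks before a given confidence" calculation can be made concrete. Under toy distributions I'm inventing here (peak heights i.i.d. Uniform[0,1], flood depth Uniform[0,0.9]), the chance that the best of k sighted peaks is still underwater works out to 0.9^k / (k + 1):

```python
def peaks_needed(confidence=0.95):
    """Smallest k such that the best of k sampled peaks beats the flood
    with probability >= confidence, under the toy distributions above."""
    k = 1
    # P(all k peaks below the flood) = E[(0.9 * U)^k] = 0.9**k / (k + 1),
    # averaging the per-peak CDF over the flood depth 0.9 * U.
    while 0.9 ** k / (k + 1) > 1 - confidence:
        k += 1
    return k
```

With these made-up numbers, `peaks_needed()` returns 8: after seeing eight peaks, returning to the best one clears the flood 95% of the time. Since the mountaineer can backtrack, this is a search problem with memory rather than a strict secretary problem.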

The question of whether it's better for the entire group to take its best chance on one peak (all live or all die), or to spread out, making it almost certain that some will die and others will live, is rather distinct from the best search strategy. I weakly believe that there is no preference aggregation under which it makes sense to treat a "group agent" as something distinct from a "set of individual agents". So it will depend on the altruism of the individuals: whether they want the best chance of individual survival (by following the best searcher), or are willing to accept a lower chance of their own survival for a higher chance that SOMEONE survives.

Replies from: RorscHak
comment by RorscHak · 2019-03-30T03:01:06.812Z

Ah, I never thought about this being a secretary problem.

Well, initially I used it as an analogy for evolution and didn't think too much about memorising/backtracking.

Oh wait - if the mountaineer has a memory of each peak he saw, then he should go back to one of the high peaks he encountered before (assuming the flood hasn't mopped the floor yet, which is a given since he is still exploring). So there are probably no irrecoverable rejections here, unlike in the secretary problem.

The second choice is a strange one. I think the entire group taking its best chance on one peak ALSO maximises the expected number of survivors, along with maximising each individual's chance of survival.

But it still seems that "a higher chance that someone survives" is something we want to include in the utility calculation when humanity makes choices in the face of a catastrophe.

For example, suppose a coming disaster gives us two choices:

(a): 50% chance that humans will go extinct, 50% chance nothing happens.

(b): 90% chance that 80% of humans will die.

The expected number of deaths in (b) significantly exceeds that in (a), and (a) has a greater expected number of survivors. But I guess many will agree that (b) is the better option to choose.
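
Spelling out that arithmetic (with the population normalised to 1):

```python
# (a): 50% chance everyone dies, 50% chance nobody does
expected_deaths_a = 0.5 * 1.0        # 0.5 of the population, in expectation
extinction_prob_a = 0.5

# (b): 90% chance that 80% of humans die, 10% chance nothing happens
expected_deaths_b = 0.9 * 0.8        # 0.72 of the population, in expectation
extinction_prob_b = 0.0

assert expected_deaths_b > expected_deaths_a   # (b) kills more in expectation...
assert extinction_prob_b < extinction_prob_a   # ...but never ends humanity
```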

Replies from: Dagon, TheWakalix
comment by Dagon · 2019-03-30T15:10:24.477Z

The key is that "humanity" doesn't make decisions; individuals do. The vast majority of individuals care more about themselves than about strangers or the statistical future masses. Public debate is mostly about signaling, so it will be split between (a) and (b), depending on cultural/political affiliation. Actual behavior is generally selfish, so most will choose (a), maximizing their personal chances.

comment by TheWakalix · 2019-04-03T14:49:16.729Z

Epistemic status: elaborating on a topic by using math on it; making the implicit explicit

From a collective standpoint, the utility function over #humans looks like this: it starts at 0 when there are 0 humans, rises slowly until it reaches "recolonization potential", then shoots up rapidly, eventually slowing down but remaining linear. From an individual standpoint, however, the utility function is just 0 for death and 1 for life. Because of the shape of the collective utility function, you want to "disentangle" deaths, but the individual doesn't have the same incentive.
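
A toy version of these two utility functions (the threshold and slopes are numbers I'm making up to match the described shape) shows how they rank the earlier (a)/(b) dilemma differently:

```python
def collective_utility(survivors, threshold=0.05):
    """Near-worthless below the recolonization threshold, then a large
    jump, then still rising roughly linearly (illustrative shape only)."""
    if survivors <= 0:
        return 0.0
    if survivors < threshold:
        return survivors                        # too few left to recover
    return 1.0 + 0.1 * (survivors - threshold)  # recovery assured; more is better

# (a): 50% extinction / 50% fine; (b): 90% chance 20% survive, 10% fine
u_a = 0.5 * collective_utility(0.0) + 0.5 * collective_utility(1.0)
u_b = 0.9 * collective_utility(0.2) + 0.1 * collective_utility(1.0)

# individual utility is just P(I survive)
p_a = 0.5
p_b = 0.9 * 0.2 + 0.1 * 1.0                     # 0.28
```

Here `u_b > u_a` but `p_a > p_b`: the collective function prefers disentangled deaths, while each individual prefers (a).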

Replies from: RorscHak
comment by RorscHak · 2019-04-03T15:48:22.960Z

Oh yes! This makes more sense now.

#humans has decreasing marginal returns, since the main concern for humanity is the ability to recover, and while that increases with #humans, it is not linear.

I do think individuals have *some* concern about whether humanity in general will survive: since all humans still share *some* genes with each individual, the survival and propagation of strangers can still have some utility for a human individual (I'm not sure where I'm going here...)

Replies from: TheWakalix
comment by TheWakalix · 2019-04-03T18:55:08.971Z

I agree that #humans has decreasing marginal returns at these scales - I meant linear in the asymptotic sense. (This is important because large numbers of possible future humans depend on humanity surviving today; if the world was going to end in a year then (a) would be better than (b). In other words, the point of recovering is to have lots of utility in the future.)

I don't think most people care about their genes surviving into the far future. (If your reasoning is evolutionary, then read this [LW · GW] if you haven't already.) I agree that many people care about the far future, though.