How can guesstimates work?

post by jacobjacob · 2019-07-10T19:33:46.002Z · LW · GW · 9 comments

This is a question post.

[Epistemic status: background is very hand-wavy, but I'd rather post a rough question than no question at all. I'm very confident that the two ingredients -- illegible cultural evolution and guesstimation -- are real and important things. Though the relation between the two is more uncertain. I'm not that surprised if my question ends up confused and dissolved rather than solved by answers.]

For a large part of human history, our lives were dictated by cultural norms and processes which appeared arbitrary, yet could have fatal consequences if departed from. (Cf. SSC on The Secret of Our Success [LW · GW], which will be assumed background knowledge for this question.)

Today, we live in a world where you can achieve huge gains if you simply "shut up and multiply". The world seems legible -- I can roughly predict how many planes fly every day by multiplying a handful of rough numbers. And the world seems editable -- people who like to cook often improvise: exchanging, adding and removing ingredients. And this seems fine. It certainly doesn't kill them. Hugely successful companies are built around the principle of "just try things until something breaks and then try again and improve".
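To make the plane example concrete, here is a minimal back-of-the-envelope sketch; every input is a made-up round number chosen only to illustrate the shape of the guesstimate, not a sourced figure:

```python
# Fermi estimate of worldwide commercial flights per day.
# Both inputs are rough, assumed orders of magnitude.
aircraft_in_service = 25_000        # assumed size of the global commercial fleet
flights_per_aircraft_per_day = 3    # short-haul fleets fly more, long-haul less

daily_flights = aircraft_in_service * flights_per_aircraft_per_day
print(f"~{daily_flights:,} flights/day")
```

The point is not the exact answer but that two rough factors already land within an order of magnitude of reality, which is the puzzle the question is asking about.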

I still think there are large amounts of illegible cultural knowledge encoded in institutions, traditions, norms, etc. But something seems very different from the horror stories of epistemic learned helplessness that Scott shared.

What changed about the world to make this possible? How can guesstimates work?

Some hypotheses (none of which I'd put more than 15% on atm):


answer by habryka · 2019-07-10T23:38:29.903Z · LW(p) · GW(p)

Hmm, I think I disagree with the premise that we've gotten that much better at making guesstimates. My guess is that for most people, their ability to take systematic action in their environment in a way that has direct positive consequences has gotten a lot worse, but the ability of a small group of people to leverage systematic action for really large consequences has gotten a lot better.

My interpretation of Scott's recent posts, in combination with some of the quotes from Vaniver's "Steelmanning Divination" is something like the following:

  • A lot of people need to follow cultural rituals to succeed at their goals, in particular in domains that they don't have time to think about a lot
  • Most of the rituals were created by individuals that did actually understand the real reasons for why certain things had to happen, but the complicated true explanations often couldn't compete with the much simpler but wrong explanations that nevertheless allowed for the derivation of the correct instructions
  • On a societal level guesstimates have worked for a long time and are the source of a lot of our most valuable cultural institutions, and didn't get particularly more effective recently
comment by rk · 2019-07-11T11:00:28.669Z · LW(p) · GW(p)

Most of the rituals were created by individuals that did actually understand the real reasons for why certain things had to happen

This is not part of my interpretation, so I was surprised to read this. Could you say more about why you think this? (Either why you think this being argued for in Vaniver's / Scott's posts or why you believe it is fine; I'm mostly interested in the arguments for this claim).

For example, Scott writes:

How did [culture] form? Not through some smart Inuit or Fuegian person reasoning it out; if that had been it, smart European explorers should have been able to reason it out too.

And quotes (either from Scholar's Stage or The Secret of Our Success):

It’s possible that, with the introduction of rice, a few farmers began to use bird sightings as an indication of favorable garden sites. On-average, over a lifetime, these farmers would do better – be more successful – than farmers who relied on the Gambler’s Fallacy or on copying others’ immediate behavior.

I don't read this as the few farmers knowing why they should use bird sightings.

Or this quote from Xunzi in Vaniver's post:

One performs divination and only then decides on important affairs. But this is not to be regarded as bringing one what one seeks, but rather is done to give things proper form.

Which doesn't sound like Xunzi understanding the specific importance of a given divination. (I realise Xunzi is not the originator of the divinatory practices.)

comment by jacobjacob · 2019-07-11T02:00:18.539Z · LW(p) · GW(p)

Nitpick: "We've gotten much better at making guesstimates" and "Guesstimates have become more effective" are quite different claims, and it's not clear which one(s) you disagree with.

answer by Richard_Ngo (ricraz) · 2019-07-11T23:48:54.035Z · LW(p) · GW(p)

Very interesting question - the sort that makes me swing between thinking it's brilliant and thinking it's nonsense. I do think you overstate your premise. In almost all of the examples given in The Secret of Our Success, the relevant knowledge is either non-arbitrary (e.g. the whole passage about hunting seals makes sense, it's just difficult to acquire all that knowledge), or there's a low cost to failure (try a different wood for your arrows; if they don't fly well, go back to basics).

If I engage with the question as posed, though, my primary answer is simply that over time we became wealthy and technologically capable enough that we were able to replace all the natural things that might kill us with whatever we're confident won't kill us. Which is why you can improvise while cooking - all of the ingredients have been screened very hard for safety. This is closely related to your first hypothesis.

However, this still leaves open a slightly different question. The modern world is far too complicated for anyone to understand, and so we might wonder why incomprehensible emergent effects don't render our daily lives haphazard and illegible. One partial answer is that even large-scale components of the world (like countries and companies) were designed by humans. A second partial answer, though, is that even incomprehensible patterns and mechanisms in the modern world still interact with you via other people.

This has a couple of effects. Firstly, other people try to be legible; it's just part of human interaction. (If the manioc could bargain with you, it'd be much easier to figure out how to process it properly.)

Secondly, there's an illusion of transparency because we're so good at and so used to understanding other people. Social interactions are objectively very complicated: in fact, they're "cultural norms and processes which appear arbitrary, yet could have fatal consequences if departed from". Yet it doesn't feel like the reason I refrain from spitting on strangers is arbitrary (even though I couldn't explain the causal pathway by which people started considering it rude). Note also that the space of ideas that startups explore is heavily constrained by social norms and laws.

Thirdly, facts about other humans serve as semantic stop signs. Suppose your boss fires you, because you don't get along. There's a nearly unlimited amount of complexity which shaped your personality, and your boss' personality, and the fact that you ended up in your respective positions. But once you've factored it out into "I'm this sort of person, they're that sort of person", it feels pretty legible - much more than "some foods are eat-raw sorts of foods, other foods are eat-cooked sorts of foods". (Or at least, it feels much more legible to us today - maybe people used to find the latter explanation just as compelling). A related stop sign is the idea that "somebody knows" why each step of a complex causal chain happened, which nudges us away from thinking of the chain as a whole as illegible.

So I've given two reasons for increased legibility (humans building things, and humans explaining things), and two for the illusion of legibility (illusion of transparency, and semantic stop signs). I think on small scales, the former effects predominate. But on large scales, the latter predominate - the world seems more legible than it actually is. For example:

The world seems legible -- I can roughly predict how many planes fly every day by multiplying a handful rough numbers.

Roughly predicting the number of planes which fly every day is a very low bar! You can also predict the number of trees in a forest by multiplying a handful of numbers. This doesn't help you survive in that forest. What helps you survive in the forest is being able to predict the timing of storms and the local tiger population. In the modern world, what helps you thrive is being able to predict the timing of recessions and crime rate trends. I don't think we're any better at the latter two than our ancestors were at the former. In fact, the large-scale arcs of our lives are now governed to a much greater extent by very unpredictable and difficult-to-understand events, such as scientific discoveries, technological innovation and international relations.

In summary, technology has helped us replace individual objects in our environments with safer and more legible alternatives, and the emergent complexity which persists in our modern environments is now either mediated by people, or still very tricky to predict (or both).

answer by ESRogs · 2019-07-10T23:16:08.079Z · LW(p) · GW(p)
We just have better medical and welfare systems which allow people to take more risks.

I would think it's something like this, though I would put it differently: we're not at subsistence anymore. If you're at subsistence, then probably most of what you're doing is just to get by, and if you deviate too much from it, you die and/or fail to reproduce.

Now that we have slack, we can take on bigger risks, speculate, and be creative and enterprising.

answer by romeostevensit · 2019-07-11T07:07:23.574Z · LW(p) · GW(p)

Human civilizations have survivorship bias such that you're seeing the iterated output of those that found parameter insensitive parts of critical distributions. Exogenous shocks are common, and civs that weren't metastable disappeared. This happened on both the cultural and genetic level (you are already a civilization of trillions of cells). As the pace of change accelerates things might get more dangerous as people push farther/harder within the span of a single lifetime before natural self corrections kick in (four horsemen). The current trend could be seen as the human race essentially tinkering with cancer (runaway, uncontrollable growth) to see if it will grant us immortality before it kills us.

answer by Matt Goldenberg (mr-hire) · 2019-07-10T22:34:01.800Z · LW(p) · GW(p)

Hypothesis: Guesstimates have always worked on transparent [LW · GW] and opaque domains [LW · GW]. As communication and reasoning tools have become better over time, more domains have become transparent. What used to be encoded by culture can now be encoded in big data.


As communication and reasoning tools have gotten better, things have become more intertwined in complex, complicated, and unexpected ways. This means that the things we do can have unexpected consequences, and that the distributions we're shutting up and multiplying in can be affected at unexpected times in unexpected ways. This makes more domains Knightian [LW · GW].

In other words, we have the illusion of more transparency, but we actually have more Knightian environments.

Hypothesis: Smoothing out Knightian environments to make them more predictable in the short term makes them more susceptible to large black swans [LW · GW]. The smoothing behavior you noted above may actually contribute to this illusion of transparency.

answer by Aleksi Liimatainen · 2019-07-11T17:33:34.522Z · LW(p) · GW(p)

I think it's some mix of genuine improvement and legibility bias.

We really have learned more about the world and made many aspects we care about more legible. On the other hand, we also tend to focus on what we see and ignore what we don't, so improved legibility in some areas may make us more likely to pay attention to those at the expense of others.

Things get extra tricky if we optimize too hard on legible metrics. To the extent illegible things matter, we can goodhart and incur illegible technical debt that's masked in the short run by legible improvements.


Comments sorted by top scores.

comment by jmh · 2019-07-11T12:55:31.279Z · LW(p) · GW(p)

This statement made me wonder: "Things are generally closer to optimal equilibria, and equilibria are more legible/predictable than non-equilibria."

And I might be thinking incorrectly myself here, as well as not fully following your thought.

Why would the equilibrium position be more predictable than off-equilibrium ones? One thought might be that we have this sense of balance, so in an equilibrium we feel that it's right (and "feel" is probably a poor word choice, but I have little time to find the right word). But that doesn't mean we can then make any kind of prediction.

The thought that came to me was more along these lines: if we generally understand where the equilibrium position should be, then we have some idea of where we should go. (Think standard econ theory of prices as signals and arbitrage equilibrating across markets.)

However, if we are at the equilibrium position, it might not be clear what happens if we do something we want to do but have never done before. Will that be consistent with equilibrium, or disruptive? I think this does fit with the whole "guesstimate" approach to prediction and forecasting. Perhaps your view on greater predictability then comes from having the equilibrium as the initial position you start from. But what if equilibrium is more like a place where information is actually lost?