What are some low-information priors that you find practically useful for thinking about the world?

post by Linch · 2020-08-07T04:37:04.127Z · LW · GW · 1 comment

This is a question post.


cross-posted on the EA Forum [EA · GW]

I'm interested in questions of the form: "I have a bit of metadata/structure about the question, but I know very little about its content (or alternatively, I'm too worried about biases or hacks in how I think about the problem, or about which pieces of information to pay attention to). In those situations, what prior should I start with?"

I'm not sure if there is a more technical term than "low-information prior."

Some examples of what I've found useful recently:

1. Laplace's Rule of Succession, for when the underlying mechanism is unknown (a quick code sketch of both examples follows this list).

2. The percentage of binary questions that resolve "yes" on Metaculus. It turns out that of all binary (yes/no) questions asked on the prediction platform Metaculus, 29% resolved "yes". This means that even if you know nothing about the content of a Metaculus question, 29% is a reasonable starting point for a randomly selected binary question.
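
For concreteness, here is a minimal sketch of both examples in Python (the function name and the constant are just illustrative, not anything standard):

```python
def laplace_rule_of_succession(successes: int, trials: int) -> float:
    """Laplace's rule: after s successes in n trials, estimate the
    probability that the next trial succeeds as (s + 1) / (n + 2)."""
    return (successes + 1) / (trials + 2)

# An event that has happened 7 times out of 10 so far:
print(laplace_rule_of_succession(7, 10))   # 8/12 ≈ 0.67

# An event that has never happened in 10 tries still gets a
# non-zero prior rather than 0%:
print(laplace_rule_of_succession(0, 10))   # 1/12 ≈ 0.08

# The Metaculus prior is even simpler: absent any information about a
# binary question's content, start at the historical resolution rate.
METACULUS_BINARY_PRIOR = 0.29
```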

In both cases, obviously there are reasons to override the prior (for example, you can arbitrarily flip all questions on Metaculus such that your prior is now 71%). However (I claim), having a decent prior is nonetheless useful in practice, even if it's theoretically unprincipled.

I'd be interested in seeing something like 5-10 examples of low-information priors as useful as the rule of succession or the Metaculus binary prior.

Answers

answer by Yoav Ravid · 2020-08-07T18:05:58.526Z · LW(p) · GW(p)

The Lindy effect (or Lindy's Law).

The Lindy effect is a theory that the future life expectancy of some non-perishable things like a technology or an idea is proportional to their current age, so that every additional period of survival implies a longer remaining life expectancy. Where the Lindy effect applies, mortality rate decreases with time.

Example: you have two books to choose from (assume both seem equally interesting), and you don't know much about them except how long they've been in print. The first one came out this year, and the other has been in print for 40 years.

Using the Lindy effect, you'd expect the first book to fall out of print within the next year or so, and the older one to stay in print for about 40 more years. In other words, the older book is more likely to remain relevant, so that's the one you'd choose.
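
One common way to make this quantitative (a standard formalization, not something from the answer above): if survival times follow a Pareto (power-law) distribution with tail exponent alpha > 1, the expected remaining lifetime of something that has already survived t years is t / (alpha - 1), i.e. proportional to its current age. A minimal sketch, with the function name chosen purely for illustration:

```python
def lindy_expected_remaining(age_so_far: float, alpha: float = 2.0) -> float:
    """Expected remaining lifetime under a Pareto (power-law) survival
    model with tail exponent alpha > 1: remaining = age / (alpha - 1).
    alpha = 2 gives the simple 'expect it to last as long again' rule."""
    if alpha <= 1:
        raise ValueError("alpha must be > 1 for the expectation to exist")
    return age_so_far / (alpha - 1)

print(lindy_expected_remaining(40.0))  # book in print for 40 years -> ~40 more
print(lindy_expected_remaining(1.0))   # book in print for 1 year  -> ~1 more
```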
 

I suggest Nassim Taleb's 'Antifragile' if you wish to read more about it.

comment by DirectedEvolution (AllAmericanBreakfast) · 2020-08-08T07:21:25.216Z · LW(p) · GW(p)

The Lindy Effect gives no insight into which of the two books will be more "relevant". For example, you could be comparing two political biographies, one of Donald Trump and the other of Jimmy Carter. They might both look equally interesting, but the Trump biography will make you look better informed about current affairs.

Choosing the timely rather than the timeless book is a valid rule. There'll always be time for the timeless literature later, but the timely literature gives you the most bang for your buck if you read it now.

The Lindy Effect only tells you which of the two books is more likely to remain in print for another 40 years. It doesn't even give you insight into how many total copies each book will sell. Maybe one will sell a million copies this year, 1,000 the next, and be out of print in two years. The other will sell a steady 10,000 copies per year for 40 years. The first one will still outsell it over that period of time.

What I find frustrating about the Lindy Effect, and other low-info priors like Chesterton’s Fence, is the way they get spun into heuristics for conservatism by conflating the precise claim they make with other claims that feel related but really aren’t.

comment by quanticle · 2020-08-08T18:45:24.056Z · LW(p) · GW(p)

"There'll always be time for the timeless literature later, but the timely literature gives you the most bang for your buck if you read it now."

That's not true, because one's lifespan is limited. If you're constantly focusing on the timely, you in fact will not have time for the timeless.

answer by Mark Xu · 2020-08-07T18:20:28.387Z · LW(p) · GW(p)

This is personal to me, but I once took a class at school where all the problems were multiple choice, required a moderate amount of thought, and were relatively easy. I got 1/50 wrong, giving me a 2% base rate for making the class of dumb mistakes like misreading inequalities or circling the wrong answer.

This isn't quite a meta-prior, but it seemed sort of related?
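
A small numerical aside, not part of the original answer: the raw frequency is 1/50 = 2%, and running the same count through Laplace's rule from the question gives a slightly more conservative estimate:

```python
mistakes, problems = 1, 50

raw_base_rate = mistakes / problems                 # 0.02
laplace_estimate = (mistakes + 1) / (problems + 2)  # ≈ 0.038

print(raw_base_rate, laplace_estimate)
```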

comment by Linch · 2020-08-08T09:14:55.096Z · LW(p) · GW(p)

I think my base rate for basic comprehension failures is at a similar level!

answer by brook · 2020-08-07T16:55:44.725Z · LW(p) · GW(p)

I'd imagine publication-bias priors are helpful, especially as the research area gets more specific, and especially where you can think of any remote possibility of interference.

Just as an example I'm familiar with (note this is probably a more extreme example than most research areas, due to the state of pharmacological research): if you see 37 RCTs in favour of a given drug and 3 that find no significant impact (i.e. 93% in favour), it is not unfounded to assume that the trials actually performed were roughly evenly split for and against, and that there may be 34-odd missing studies.

A 2009 analysis found that this was almost exactly the case: the registered studies were 36:38 in favour of the drug. One positive RCT went missing before publication, along with twenty-two non-significant studies that were missing altogether and a further 11 that were so poorly analysed as to appear significant.
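
A rough sketch of the back-of-the-envelope adjustment described above, using the example's numbers (the "roughly even split" assumption is the crude low-information prior doing the work):

```python
published_positive = 37
published_negative = 3

# Low-information prior: absent publication bias, positive and negative
# trials should be roughly evenly split, so roughly as many negative
# trials were run as positive ones.
expected_negative = published_positive
estimated_missing_negative = expected_negative - published_negative

print(estimated_missing_negative)  # ~34 studies unaccounted for
```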

(Bad Pharma, by Ben Goldacre, is a pretty sound resource for this topic in general)

comment by Linch · 2020-08-08T09:07:42.065Z · LW(p) · GW(p)

Wow thank you! I found this really helpful.

answer by Avi (Avi Weiss) · 2020-08-16T07:02:07.954Z · LW(p) · GW(p)

One idea that comes to mind is that surface-level information sources (e.g. news articles) are often *'correct'* on a basic level, but really more like 'yes, but it's complicated' on a deeper level.

The best illustration of this is when you've seen a surface-level description of something you know about at a deep level, and you realise how wrong it is, or at least how much nuance it's missing. The next step is to realise that it's like that with everything - i.e. all the things you're not an expert on.

answer by Slider · 2020-08-09T15:03:10.046Z · LW(p) · GW(p)

Some geography documentaries mentioned that in Japan, if you are prosecuted for a crime, you are found guilty 97% of the time. Told one way, about a particular case, this works in the opposite direction: the lopsided distribution tells you less than a prosecution in a random country would. Told another way, this very limited factoid gives reason to suspect that something is seriously amiss with the system.

The documentary put forth that the Japanese hate for the state to be proven wrong and go to inappropriate lengths to avoid such an outcome. It also feels like the culture gives greater weight to the gravity of not fitting in, so a lot of the conflict resolution might be done "informally" before it becomes police business. With active informal witch-hunters, the actual officials handle only the most extreme cases, or the most excusable gray-area conduct is actively hidden if it is otherwise socially desirable.

Probably, if one were interested in tweaking the system, there would have to be details on who has the authority to do what, based on what level of proof. And the case that the system is working correctly could be argued at great length, in many details, so as to seem okay. I would guess that true progress would be very slippery, hard to detect, and very resistant to trivial solution attempts. Yet the case that there is something to be found seems pretty strong.

comment by ChristianKl · 2020-08-09T22:20:34.084Z · LW(p) · GW(p)

Where did you get the 97% number from? https://www.youtube.com/watch?v=OINAk2xl8Bc suggests that 60% of those who are investigated for a crime by the police don't get charged with the crime. 

comment by Slider · 2020-08-10T16:19:47.312Z · LW(p) · GW(p)

My memory fails me, but it was a documentary bit on YouTube where the narrator was on a Japanese street, and I might have misremembered the exact number.

The linked source puts the same number even higher, at over 99%. Choosing a handy, solid factoid and building a fluffy narrative around it is a known way to lie with statistics. And the linked source attributes the criminal system as a whole being not particularly harsh to the police letting people go quite easily.

It is a low amount of information, so in that sense it is not surprising that it isn't a satisfactory description.

To me it would seem weird if a prosecutor had ample evidence but didn't think the person should be convicted and thus didn't push the case. I am more used to the trial, jury, or judge arguing about reasonableness, and to first-timers being treated by making their sentence conditional, rather than the police letting people walk until they start to remember their faces.

comment by Avi (Avi Weiss) · 2020-08-16T06:56:55.591Z · LW(p) · GW(p)

This source gives the conviction rate as ranging from 83.3% to 97.7%, depending on the level of court and the type of charge.

1 comment


comment by Mark Xu (mark-xu) · 2020-08-07T18:22:07.486Z · LW(p) · GW(p)

Gwern's essay about how everything is correlated seems related/relevant: https://www.gwern.net/Everything