Why did it take so long to do the Fermi calculation right?

post by Jan_Kulveit · 2018-07-02T20:29:59.338Z · LW · GW · 20 comments

This is a meta-level follow-up to an object-level post about Dissolving the Fermi Paradox. [LW · GW]

The basic observation of the paper is that when the statistics are done correctly, representing realistic distributions of uncertainty, the paradox largely dissolves.

The correct statistics are not that technically difficult: instead of point estimates, just work with distributions that reflect the uncertainty (already implied in the literature!).
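To make this concrete, here is a minimal Monte Carlo sketch of the approach. The parameter ranges below are illustrative assumptions of mine, not the distributions from the paper; the point is only that the very same inputs can yield a large mean number of civilizations and a substantial probability of an empty sky.

```python
# Minimal sketch, not the paper's exact model; all ranges are
# illustrative assumptions. Each Drake-equation factor is sampled
# from a broad (log-uniform) distribution instead of a point estimate.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def log_uniform(lo, hi, size):
    return 10 ** rng.uniform(np.log10(lo), np.log10(hi), size)

R_star = log_uniform(1, 100, n)     # star formation rate (per year)
f_p    = log_uniform(0.1, 1, n)     # fraction of stars with planets
n_e    = log_uniform(0.1, 10, n)    # habitable planets per system
f_l    = log_uniform(1e-30, 1, n)   # probability life emerges
f_i    = log_uniform(1e-3, 1, n)    # ... of intelligence, given life
f_c    = log_uniform(1e-2, 1, n)    # ... of detectable communication
L      = log_uniform(1e2, 1e8, n)   # civilization lifetime (years)

N = R_star * f_p * n_e * f_l * f_i * f_c * L

print(f"E[N]   = {N.mean():.3g}")       # the mean is still large...
print(f"P(N<1) = {(N < 1).mean():.2f}")  # ...yet "nobody out there" is likely
```

With these made-up ranges, E[N] comes out large while P(N < 1) is roughly 0.9: a high expected number of civilizations and a high probability of seeing none are entirely compatible.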

There is a sizeable literature on the paradox, stretching across several decades. Wikipedia alone lists 22 hypothetical explanations, and it seems realistic that at least several hundred researchers have spent serious effort thinking about the problem.

It seems really important to me to reflect on this.

What's going on? Why this inadequacy (in research generally)?

And more locally: why didn't this particular subset of the broader community, which prides itself on its use of Bayesian statistics, notice earlier?

(I have some hypotheses, but it seems better to just post this as an open-ended question.)

20 comments

Comments sorted by top scores.

comment by Jeff Rose · 2018-07-04T18:08:47.725Z · LW(p) · GW(p)

The more interesting question is where else do we see something similar occurring?

For example, historically, income in retirement was usually discussed in terms of expected value. More recently, we've begun to see discussions of retirement that focus on the probability of running out of money. Are there other areas where people focus on expected outcomes as opposed to the probability of X occurring?
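A toy sketch of that distinction (all numbers invented): the mean outcome can look fine while the probability of ruin is far from negligible.

```python
# Toy sketch with made-up numbers: the expected final balance can look
# healthy while the probability of running out of money is substantial.
import numpy as np

rng = np.random.default_rng(0)
n_sims, years = 100_000, 30
balance = np.full(n_sims, 1_000_000.0)  # assumed starting savings
ruined = np.zeros(n_sims, dtype=bool)

for _ in range(years):
    returns = rng.normal(0.05, 0.12, n_sims)    # assumed market returns
    balance = balance * (1 + returns) - 50_000  # assumed annual spending
    ruined |= balance <= 0
    balance = np.maximum(balance, 0)  # ruin is absorbing

print(f"mean final balance: {balance.mean():,.0f}")
print(f"P(ruin within {years} years): {ruined.mean():.1%}")
```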

comment by ryan_b · 2018-07-04T20:28:16.695Z · LW(p) · GW(p)

I don't find it surprising that it took this long.

Speaking only for myself, when I encounter a superior technique I almost always fail to go back through my life and re-evaluate everything according to the new technique.

Thinking about the research community, I have no reason to expect that the same mechanism does not apply. I also note that the replication crisis, and the discussion about standardizing on better statistical methods, only recently seemed to hit saturation.

I feel like I have seen other papers that revisit old results with a new technique, always with some new insight to offer. This causes me to suspect that the same positive-results bias that usually affects publication still affects revisiting old equations.

It would be a good idea to systematically update all point estimates with distributions, but I don't see any reason to expect that this was already happening.

Tangentially related: this seems like the kind of thing that language AI and theorem-provers would be pretty good at - having the machine automatically review the literature and apply the new technique instead of the old technique.

comment by Mitchell_Porter · 2018-07-02T23:22:27.215Z · LW(p) · GW(p)

Doesn't this paper boil down to "Some factors in the Drake equation are highly uncertain, and we don't see any aliens, so those probabilities must be small after all?"

Replies from: Vaniver, daozaich, cousin_it
comment by Vaniver · 2018-07-02T23:59:18.122Z · LW(p) · GW(p)

Not quite. The mean number of aliens we'd expect to see is basically unchanged--the main claim the paper makes is that a very high probability of 0 aliens is consistent with the uncertainty ranges people have already expressed, and thus with the high mean number of aliens people would have expected to see before observations.

Replies from: Nisan
comment by Nisan · 2018-07-06T21:53:29.096Z · LW(p) · GW(p)

I'd like to rescue/clarify Mitchell's summary. The paper's resolution of the Fermi paradox boils down to "(1) Some factors in the Drake equation are highly uncertain, and we don't see any aliens, so (2) one or more of those factors must be small after all".

(1) is enough to weaken the argument for aliens, to the point where there's no paradox anymore. (2) is basically Section 5 from the paper ("Updating the factors").

The point you raised, that "expected number of aliens is high vs. substantial probability of no aliens", is an explanation of why people were confused.

I'm making this comment because if I'm right it means that we only need to look for people (like me?) who were saying all along "there is no Fermi paradox because abiogenesis is cosmically rare", and figure out why no one listened to them.

Replies from: Vaniver, avturchin
comment by Vaniver · 2018-07-10T18:36:57.536Z · LW(p) · GW(p)
The point you raised, that "expected number of aliens is high vs. substantial probability of no aliens", is an explanation of why people were confused.

Right, I think it's important to separate out the "argument for X" and the "dissolving confusions around X" as the two have different purposes.

I'm making this comment because if I'm right it means that we only need to look for people (like me?) who were saying all along "there is no Fermi paradox because abiogenesis is cosmically rare", and figure out why no one listened to them.

I think the important thing here is the difference between saying "abiogenesis is rare" (as an observation) and "we should expect that abiogenesis might be rare" (as a prediction) and "your own parameters, taken seriously, imply that we should expect that abiogenesis might be rare" (as a computation). I am not aware of papers that did the third before this, and I think most claims of the second form were heard as "the expected number of aliens is low" (which is hard to construct without fudging) as opposed to "the probability of no aliens is not tiny."

Replies from: Douglas_Knight
comment by Douglas_Knight · 2018-07-17T18:38:27.943Z · LW(p) · GW(p)

But this paper does not talk about "your own parameters." The parameters it uses are the range of published parameters. Saying that people should have used that range is exactly the same as saying that people should not have ignored the extremists. (But I think it's just not true that people ignored the extremists.)

comment by avturchin · 2019-01-01T11:10:57.719Z · LW(p) · GW(p)

If interstellar panspermia is possible, "abiogenesis is cosmically rare" is not an explanation, and our Galaxy could be populated by aliens of approximately our civilisational age who share the same basic genetic code.

Replies from: Nisan
comment by Nisan · 2019-01-10T17:25:06.596Z · LW(p) · GW(p)

Good point. In that case the Drake equation must be modified to include panspermia probabilities and the variance in time-to-civilization among our sister lineages. I'm curious what kind of Bayesian update we get on those...
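One hedged way to sketch such a modification (my notation, purely illustrative, not from any particular paper): replace the abiogenesis factor in the standard Drake product with a combined origination probability,

```latex
N = R_* \, f_p \, n_e \, f_{\text{life}} \, f_i \, f_c \, L,
\qquad
f_{\text{life}} = f_{\text{abio}} + (1 - f_{\text{abio}})\, f_{\text{pan}}
```

where f_abio is the chance of life arising in situ and f_pan the chance of being seeded. The catch is that panspermia makes f_pan correlated across nearby systems, so the per-system factors are no longer independent, and the variance in time-to-civilization among seeded lineages enters as a further parameter.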

Replies from: avturchin
comment by avturchin · 2019-01-10T19:36:32.146Z · LW(p) · GW(p)

In fact, it's not my idea; I read an article about it by Panov: https://www.sociostudies.org/almanac/articles/prebiological_panspermia_and_the_hypothesis_of_the_self-consistent_galaxy_origin_of_life/

comment by daozaich · 2018-07-03T18:52:35.667Z · LW(p) · GW(p)

No. It boils down to the following fact: if you take the published estimates of the parameter-value distributions at face value, then:

(1) The expected number of observable alien civilizations is medium-large.

(2) If you consider the full distribution of the number of alien civs, you get a large probability of zero and a small probability of "very very many aliens", which together integrate up to the medium-large expected value.

Previous discussions computed (1) and falsely observed a conflict with astronomical observations, and totally failed to compute (2) from their own input data. This is unquestionably an embarrassing failure of the field.
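A toy numeric illustration of how (1) and (2) coexist (numbers invented purely for illustration):

```latex
P(N = 0) = 0.97, \qquad P(N = 10^4) = 0.03
\quad\Longrightarrow\quad
\mathbb{E}[N] = 0.03 \times 10^4 = 300 .
```

The expected value is medium-large, yet the single most likely observation is an empty sky.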

comment by cousin_it · 2018-07-03T09:01:41.805Z · LW(p) · GW(p)

It's a bit more than that. I think Jan's comment [LW(p) · GW(p)] is the best summary.

Replies from: patrick-cruce
comment by MrFailSauce (patrick-cruce) · 2018-07-05T14:43:06.467Z · LW(p) · GW(p)

I’m still not seeing a big innovation here. I’m pretty sure most researchers who look at the Drake equation think “huge sensitivity to parameterization.”

If we have a 5-parameter Drake equation, then the number of civilizations scales as X^5, so if X comes in at 0.01, we've got a 1e-10 probability of detectable civilization formation. But if we've got a 10-parameter Drake equation and X comes in at 0.01, then it implies a 1e-20 probability (extraordinarily smaller).

So yes, it has a huge sensitivity, but it is primarily a constructed sensitivity. All the Drake equation really tells us is that we don't know very much, and it probably won't be useful until we can get N above one for more of the parameters.

Replies from: gbear605
comment by gbear605 · 2018-07-06T00:12:11.113Z · LW(p) · GW(p)

The difference is that before, people looked at the Drake equation and thought that even with the uncertainty there was a very low probability of no aliens; this paper corrects that assumption.

comment by Vaniver · 2018-07-02T21:57:01.451Z · LW(p) · GW(p)

Note that the conclusion of the toy model rests not on "we did the 9-dimensional integral and got a very low number" but "we did Monte Carlo sampling and ended up with 21%"--it seems possible that this might not have been doable 30 years ago, but perhaps it was 20 years ago. (Not Monte Carlo sampling at all--that's as old as Fermi--but being able to do this sort of numerical integration sufficiently cheaply.)

Also, the central intuition guiding the alternative approach is that the expectation of a product is the product of the expectations, which is actually true (for independent factors). The thing that's going on here is elaborating on the generator of P(ETI=0) in a way that's different from "well, we just use a binomial with the middle-of-the-pack rate, right?". This sort of hierarchical modeling of parameter uncertainties is still fairly rare, even among professional statisticians today, and so it's not a huge surprise to me that the same is true for people here. [To be clear, the alternative is picking the MLE model and using only in-model uncertainty, which seems to be standard practice from what I've seen. Most of the methods that bake in the model uncertainty are so-called "model free" methods.]
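A rough sketch of that contrast, using a Poisson count model for simplicity and an invented distribution over the rate (not anyone's published parameters):

```python
# Contrast: P(0 civilizations) when the rate is fixed at its mean
# ("in-model uncertainty only") vs. averaged over uncertainty in the
# rate itself (hierarchical). The rate distribution is invented.
import numpy as np

rng = np.random.default_rng(0)
rates = 10.0 ** rng.uniform(-10, 3, 100_000)  # assumed log-uniform uncertainty

# Point estimate: plug the mean rate into Poisson's P(0) = exp(-rate).
p0_point = np.exp(-rates.mean())

# Hierarchical: average P(0 | rate) over the whole rate distribution.
p0_hier = np.exp(-rates).mean()

print(f"P(N=0), rate fixed at mean: {p0_point:.2e}")  # astronomically small
print(f"P(N=0), rate uncertain:     {p0_hier:.2f}")   # roughly 0.7
```

The mean rate is identical in both computations; only the second lets the parameter uncertainty propagate into P(N=0).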

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2018-07-03T04:21:11.740Z · LW(p) · GW(p)

Note that the conclusion of the toy model rests not on "we did the 9-dimensional integral and got a very low number" but "we did Monte Carlo sampling and ended up with 21%"--it seems possible that this might not have been doable 30 years ago, but perhaps it was 20 years ago. (Not Monte Carlo sampling at all--that's as old as Fermi--but being able to do this sort of numerical integration sufficiently cheaply.)

I'm quite sure doing this is really cheap even with hardware available 30 years ago. Taking a single sample just requires sampling 6 uniform values and 1 normal value, adding these, and checking whether this is less than a constant. Even with 1988 hardware, it should be possible to do this >100 times per second on a standard personal computer. And you only need tens of thousands of samples to get a probability estimate that is almost certainly accurate to within 1%.
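For concreteness, a sketch of the computation being described, working in log10 space so the six log-uniform factors become uniform draws and the log-normal factor becomes a normal draw (the ranges here are my placeholders, not the paper's):

```python
# Sketch of the sampling step described above. Ranges are placeholders,
# not the paper's. In log10 space the product of factors becomes a sum:
# six uniform draws plus one normal draw, compared against a constant.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

log10_N = (
    rng.uniform(0, 2, n)       # R*: 1..100 per year (assumed)
    + rng.uniform(-1, 0, n)    # f_p: 0.1..1 (assumed)
    + rng.uniform(-1, 1, n)    # n_e: 0.1..10 (assumed)
    + rng.uniform(-3, 0, n)    # f_i: 1e-3..1 (assumed)
    + rng.uniform(-2, 0, n)    # f_c: 1e-2..1 (assumed)
    + rng.uniform(2, 8, n)     # L: 1e2..1e8 years (assumed)
    + rng.normal(-15, 10, n)   # f_l: log-normal with huge spread (assumed)
)

# "Checking whether this is less than a constant": N < 1 iff log10(N) < 0.
print(f"P(N < 1) = {(log10_N < 0).mean():.2f}")
```

Each sample is seven draws and an addition; at 100,000 samples the standard error on the estimated probability is well under 1%.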

comment by Douglas_Knight · 2018-07-04T16:16:51.339Z · LW(p) · GW(p)

Did anyone make a mistake? Did anyone ever consider the "Fermi paradox" a paradox in need of dissolution? Can you point to anyone making any argument that would be improved by this analysis? This community has generally focused on the "Great Filter" framing of the argument, which puts weight on multiple hypotheses, even if not explicit weight.

All this calculation says is that some people say that life is difficult. Did anyone ever say that they knew with great certainty that life is easy? On the contrary, many people have said that it is the key parameter and looking for traces of life on other planets will shed light on the question. Other people have said that it is important to pay attention to panspermia, because the presence of panspermia allows the possibility that life is difficult and happened only once in the universe, and yet spread to many planets, requiring another filter.

(The only paradox I can imagine is: if we don't see any life, then life isn't out there (perhaps first asserted by Hart 1975), so life is practically impossible, so we're practically impossible; so why do we exist? This paradox is resolved by anthropic update (Carter 1983). The phrase "Fermi paradox" only appears around 1975, but I'm not sure that Hart or anyone else reached this paradoxical conclusion. In fact, lots of people complained that it's not a paradox.)

comment by Richard_Kennaway · 2018-07-03T08:25:23.553Z · LW(p) · GW(p)

Perhaps people confuse "expected value" with "the value you can expect to see"? Even people who know the distinction and can articulately explain it?
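A standard illustration of that confusion (my example, not from the thread): for a heavily skewed distribution, the expected value says little about the value you can expect to see.

```latex
X \sim \mathrm{LogNormal}(\mu = 0,\ \sigma = 5):
\qquad \mathbb{E}[X] = e^{\sigma^2 / 2} \approx 2.7 \times 10^{5},
\qquad \mathrm{median}(X) = e^{\mu} = 1 .
```

The mean is about 270,000, yet half of all draws fall below 1.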

Replies from: vsm
comment by vsm · 2018-07-04T11:26:06.269Z · LW(p) · GW(p)

That might explain why many individual researchers failed, but it can't be common enough to filter out everyone thinking about the problem except SDO. To see how many researchers we would expect to find this solution, we must multiply our estimate of the number thinking about it by the fraction of those who know about the correct statistical technique of using distributions, multiplied by the odds that they would apply this technique, do it correctly, and consider the result worth publishing.

N=R*f(s)*f(a)*f(c)*f(p)

Using personal estimates, I obtained a result of N = 2.998, close to the observed number of authors of the paper.

Replies from: gjm
comment by gjm · 2018-07-04T15:12:51.394Z · LW(p) · GW(p)

Tut tut tut! Instead of just multiplying together those factors, you need to consider the probability distribution on each one and estimate the resulting probability distribution of N. Most of the distribution will probably have smaller N than your point estimate.

[:-)]