Dissolving the Fermi Paradox, and what reflection it provides

post by Jan_Kulveit · 2018-06-30T16:35:35.171Z · LW · GW · 22 comments

This is a link post for https://arxiv.org/abs/1806.02404

While the argument was posted on LessWrong previously [LW · GW], it now has the neat form of a paper on arXiv by Anders Sandberg, Eric Drexler and Toby Ord.

TL;DR version: the use of Drake-like equations, with point estimates of highly uncertain parameters, is wrong. Extant scientific knowledge corresponds to uncertainties that span multiple orders of magnitude.

When the statistics are done correctly, representing realistic distributions of uncertainty from the literature, "people who take the views of most members of the research community seriously should ascribe something like a one in three chance to being alone in the galaxy and so should not be greatly surprised by our lack of evidence of other civilizations. The probability of N < 10^−10 (such that we are alone in the observable universe) is 10%."
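To make the contrast concrete, here is a minimal Monte Carlo sketch of that move. The parameter ranges below are illustrative placeholders, not the literature-derived distributions the paper actually synthesizes; the qualitative behaviour (a large mean alongside a substantial P(N < 1)) is the point.

```python
import random, math

def loguniform(lo, hi):
    """Sample log-uniformly between lo and hi."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

def sample_N():
    # Drake-like product with each factor drawn over an assumed, wide range.
    R_star = loguniform(1, 100)      # star formation rate per year
    f_p    = loguniform(0.1, 1)      # fraction of stars with planets
    n_e    = loguniform(0.1, 1)      # habitable planets per planet-bearing star
    f_l    = loguniform(1e-30, 1)    # fraction of habitable planets developing life
    f_i    = loguniform(1e-3, 1)     # fraction of those developing intelligence
    f_c    = loguniform(1e-2, 1)     # fraction becoming detectable
    L      = loguniform(1e2, 1e10)   # years a civilization stays detectable
    return R_star * f_p * n_e * f_l * f_i * f_c * L

samples = [sample_N() for _ in range(100_000)]
print(f"mean N   ≈ {sum(samples) / len(samples):.3g}")   # large, driven by rare tail draws
print(f"P(N < 1) ≈ {sum(s < 1 for s in samples) / len(samples):.2f}")  # nonetheless substantial
```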

From the conclusions, when the priors are updated:

When we update this prior in light of the Fermi observation, we find a substantial probability that we are alone in our galaxy, and perhaps even in our observable universe (53%–99.6% and 39%–85% respectively).

22 comments

Comments sorted by top scores.

comment by paulfchristiano · 2018-07-01T22:58:29.270Z · LW(p) · GW(p)

If asked to bet about the probability of alien life (with payoffs measured in pleasure rather than dollars), most people would recommend making an anthropic update. That implies a much more likely future filter, as Katja has argued, and the best guess is then that we are in a universe with large amounts of life, and that we are overwhelmingly likely to soon die.

(Of course, taking this line of argument to its extreme, we are even more likely to be in a simulation.)

Action-wise, the main upshot is that we ought to be much more interested in averting apparently-insurmountable local risks than we otherwise would be. For example, one might be tempted to simply write off worlds in which there are incredibly potent information hazards that almost always end civilizations at a certain stage of development, since that seems like a hopeless situation. But the anthropic update suggests that such situations contain so many observers like us that it can roughly cancel out the hopelessness.

More precisely, the great filter argument suggests that increasing your survival probability by 10% in doomed worlds is actually very good, on the same order of magnitude as decreasing risk by 10% in a "normal" world with doom probabilities <50%.

(The importance of coping with nearly-certain doom still depends on the probability that we assign to settings of background variables implying nearly-certain doom. That probability seems quite low to me, since it's easy to imagine worlds like ours with strong enough world government that they could cope with almost arbitrary technological risks.)
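A toy version of this arithmetic, with entirely made-up numbers: give the "doomed" world-type a low prior (per the parenthetical above) but assume, SIA-style, that it contains proportionally more observers in our situation, and the two interventions come out comparable.

```python
# All numbers below are illustrative assumptions, not claims from the comment.
priors    = {"normal": 0.9, "doomed": 0.1}     # credence before the anthropic update
observers = {"normal": 1.0, "doomed": 10.0}    # assumed observers like us, per world-type

# SIA-style anthropic update: weight each hypothesis by its observer count.
weights   = {k: priors[k] * observers[k] for k in priors}
total     = sum(weights.values())
posterior = {k: w / total for k, w in weights.items()}

# Expected value of raising survival probability by 10 percentage points,
# when the intervention only helps in that world-type.
for k in priors:
    print(f"{k}: posterior ≈ {posterior[k]:.2f}, expected gain ≈ {0.10 * posterior[k]:.3f}")
```

With these numbers the posterior weights are roughly 0.47 and 0.53, so the two gains land within the same order of magnitude, which is the cancellation being described.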

Replies from: Jan_Kulveit
comment by Jan_Kulveit · 2018-07-02T10:55:28.802Z · LW(p) · GW(p)

My intuition is that people should actually bet on current anthropic reasoning less than they do. The reason is that it is dangerously easy to construct simple examples with some small integer number of universes, and I believe there is a significant chance these actually do not generalize to the real system in some non-obvious way.

One of the more specific reasons why I have this intuition is that it is actually quite hard to do any sort of "counting" of observers even in the very non-speculative world of quantum mechanics. When you go further in the direction of Tegmark's mathematical universe, I would expect the problem to get harder.

comment by MrFailSauce (patrick-cruce) · 2018-06-30T19:36:23.841Z · LW(p) · GW(p)

I’m not sure I understand why they’re against point estimates. As long as the points match the mean of our estimates for the variables, then the points multiplied should match the expected value of the distribution.

Replies from: Jan_Kulveit, roystgnr
comment by Jan_Kulveit · 2018-06-30T23:18:49.967Z · LW(p) · GW(p)

Because people draw incorrect conclusions from the point estimates. You can have a high expected value of the distribution (e.g. "millions of civilizations") while at the same time having a big part of the probability mass on outcomes with just one civilization, or few civilizations far away.
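A toy numeric sketch of this (the distribution is an arbitrary illustrative choice, not anything from the paper): let N be log-normal with a wide spread, and the mean comes out astronomically high even though most of the probability mass sits below one civilization.

```python
import random

# N = 10^X with X ~ Normal(-1, 4): median 0.1, but an enormous upper tail (assumed).
samples = [10 ** random.gauss(-1, 4) for _ in range(200_000)]

print(f"mean of N ≈ {sum(samples) / len(samples):.3g}")   # huge, dominated by rare tail draws
print(f"P(N < 1)  ≈ {sum(s < 1 for s in samples) / len(samples):.2f}")  # roughly 0.6
```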

Replies from: Sniffnoy
comment by Sniffnoy · 2018-07-04T04:55:50.629Z · LW(p) · GW(p)

I think the real point here (as I've commented elsewhere) isn't that using point estimates is inherently a mistake, it's that the expected value is not what we care about. They're valid for that, but not for the thing we actually care about, which is P(N=0).

Replies from: Douglas_Knight
comment by Douglas_Knight · 2018-07-04T18:25:25.206Z · LW(p) · GW(p)

I'm skeptical that anyone ever made that mistake. Can you point to an example?

The paper doesn't claim anyone did, does it?

Replies from: Sniffnoy
comment by Sniffnoy · 2018-07-06T13:43:47.387Z · LW(p) · GW(p)

Made what mistake, exactly?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2018-07-08T01:55:18.160Z · LW(p) · GW(p)

What do you mean by "real point"? Don't you mean that the point of the paper is that someone makes a particular mistake?

I mean the mistake of computing expected number rather than probability. I guess the people in the 60s, like Drake and Sagan, probably qualify. They computed an expected number of planets, because that's what they were interested in, but were confused because they mixed it up with probability. But after Hart (1975) emphasized the possibility that there is no life out there, people ask the right question. Most of them say things like "Maybe I was wrong about the probability of life." That's not the same as doing a full Bayesian update, but surely it counts as not making this mistake.

It's true that Patrick asserts this mistake. And maybe the people making vague statements of the form "maybe I was wrong" are confused, but not confused enough to make qualitatively wrong inferences.

Replies from: Sniffnoy
comment by Sniffnoy · 2018-07-08T08:10:34.294Z · LW(p) · GW(p)

Huh, interesting. I have to admit I'm not really familiar with the literature on this; I just inferred this from the use of point estimates. So you're saying people recognized that the quantity to focus on was P(N>0) but used point estimates anyway? I guess what I'm saying is, if you ask "why would they do that", I would imagine the answer to be, "because they were still thinking of the Drake equation, even though it was developed for a different purpose". But I guess that's not necessarily so; it could just have been out of mathematical convenience...

Replies from: Douglas_Knight
comment by Douglas_Knight · 2018-07-17T18:43:29.899Z · LW(p) · GW(p)

Definitely mathematical convenience. In many contexts people do sensitivity analysis instead of Bayesian updates. It is good to phrase things as Bayesian updates, if only as a different point of view, but when that is the better thing to do (which in this case I do not believe), trumpeting it as right and the other method as wrong is the worst kind of mathematical triumphalism that has destroyed modern science.

comment by roystgnr · 2018-07-05T22:00:57.736Z · LW(p) · GW(p)

Not quite. Expected value is linear but doesn't commute with multiplication. Since the Drake equation is pure multiplication, you could use point estimates of the means in log space and sum those to get the mean in log space of the result, but even then you'd *only* have the mean of the result, whereas what would really be a "paradox" is if P(N = 0) turned out to be tiny.
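A quick numeric check of the log-space point, using an assumed, purely illustrative log-uniform factor: by linearity the log-space means do add up, but the resulting geometric mean can sit far below the arithmetic mean, and neither one is P(N = 0).

```python
import random, math

def factor():
    # one Drake-like factor, log-uniform over six orders of magnitude (assumed)
    return 10 ** random.uniform(-4, 2)

products = [factor() * factor() * factor() for _ in range(100_000)]
mean_log10 = sum(math.log10(p) for p in products) / len(products)

print(f"E[log10 N]      ≈ {mean_log10:.2f}   (linearity: 3 * E[log10 f] = -3)")
print(f"geometric mean  ≈ {10 ** mean_log10:.3g}")                  # about 1e-3
print(f"arithmetic mean ≈ {sum(products) / len(products):.3g}")     # hundreds, far larger
```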

Replies from: Sniffnoy
comment by Sniffnoy · 2018-07-06T13:43:41.941Z · LW(p) · GW(p)

The authors grant Drake's assumption that everything is uncorrelated, though.

Replies from: roystgnr
comment by roystgnr · 2018-07-20T18:52:29.813Z · LW(p) · GW(p)

You don't need any correlation between X and Y to have E[XY] ≠ E[X]E[Y]. Suppose both variables are 1 with probability .5 and 2 with probability .5; then their mean is 1.5, but the mean of their products is 2.25.

Replies from: Sniffnoy
comment by Sniffnoy · 2018-07-21T06:51:30.541Z · LW(p) · GW(p)

Indeed, each has a mean of 1.5; so the product of their means is 2.25, which equals the mean of their product. We do in fact have E[XY]=E[X]E[Y] in this case. More generally we have this iff X and Y are uncorrelated, because, well, that's just how "uncorrelated" in the technical sense is defined. I mean if you really want to get into fundamentals, E[XY]-E[X]E[Y] is not really the most fundamental definition of covariance, I'd say, but it's easily seen to be equivalent. And then of course either way you have to show that independent implies uncorrelated. (And then I guess you have to do the analogues for more than two, but...)
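For what it's worth, enumerating the four equally likely outcomes of the example above confirms this:

```python
# X and Y independent, each taking 1 or 2 with probability 0.5 (the example above).
values = [1, 2]
pairs = [(x, y) for x in values for y in values]    # four equally likely (x, y) outcomes

E_X  = sum(values) / len(values)                    # 1.5
E_XY = sum(x * y for x, y in pairs) / len(pairs)    # (1 + 2 + 2 + 4) / 4 = 2.25

print(E_X * E_X, E_XY)   # 2.25 2.25 -- equal, as independence (hence uncorrelatedness) implies
```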

Replies from: roystgnr
comment by roystgnr · 2018-08-20T20:13:44.871Z · LW(p) · GW(p)

Gah, of course you're correct. I can't imagine how I got so confused but thank you for the correction.

comment by habryka (habryka4) · 2018-07-01T18:15:18.109Z · LW(p) · GW(p)

Removed from the frontpage for now, since we try to keep frontpage discussion free from being primarily about the rationality community and its specific structure. I would recommend putting the last section into its own post, which is then on your personal blog, and then I am happy to promote this to the frontpage.

Replies from: Jan_Kulveit, jessica.liu.taylor, Benito
comment by Jan_Kulveit · 2018-07-01T19:26:22.015Z · LW(p) · GW(p)

Done. Note it was not about the rationality community, but about the broader set of people thinking about this problem.

For reference:

What else to notice?

On the meta level, it seems to me seriously important to notice that it took so long before some researchers noticed the problem and did the statistics right. Meanwhile, lots of highly speculative mechanisms resolving the largely non-existent paradox were proposed. This may indicate something important about the community. As an example: might there be a strong bias toward searching for grand, intellectually intriguing solutions?

comment by jessicata (jessica.liu.taylor) · 2018-07-02T06:35:27.299Z · LW(p) · GW(p)

If an intellectual community suppresses attempts to promote its object-level epistemological failures to attention and cause appropriate meta-level updates to happen, then it's going to stop having an epistemology before long.

Replies from: Jan_Kulveit, Benito
comment by Ben Pace (Benito) · 2018-07-02T18:28:41.294Z · LW(p) · GW(p)

That's certainly true and a problem. If you have some ideas about how to avoid it (in this case or more generally) I'd be interested to read them; feel free to post in meta [? · GW] with some thoughts/ideas, or write them as comments in the last meta thread [LW · GW] on this topic.

comment by Ben Pace (Benito) · 2018-07-01T19:13:47.850Z · LW(p) · GW(p)

My bad, I just read the first four paragraphs and then moved it to frontpage. Will take this as data that I should read more carefully before promoting.

comment by Aiyen · 2018-06-30T21:06:55.490Z · LW(p) · GW(p)

Possibility: if panspermia is correct (the theory that life is much older than Earth and has been seeded on many planets by meteorite impacts), then we might not expect to see other civilizations advanced enough to be visible yet. If evolving from the first life to roughly human levels takes around the current lifetime of the universe, rather than of the Earth, not observing extraterrestrial life shouldn't be surprising! Perhaps the strongest evidence for this is that the number of codons in observed genomes over time (including as far back as the Paleozoic) increases on a fairly steady exponential (log-linear) trend, which extrapolates back to shortly after the birth of the universe.
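For concreteness, this is the shape of the extrapolation being described, with purely made-up numbers standing in for real genome data: fit log genome size against time and run the trend back to a notional single-base-pair origin.

```python
import math

# Hypothetical (time before present in Gyr, genome size in base pairs) points,
# chosen only to illustrate the shape of the argument -- NOT real data.
samples = [(3.5, 1e5), (2.0, 1e6), (1.0, 1e7), (0.0, 1e8)]

# Least-squares fit of log10(size) = a + b * t (a log-linear trend; b < 0 since
# older samples are smaller).
n = len(samples)
mean_t = sum(t for t, _ in samples) / n
mean_y = sum(math.log10(s) for _, s in samples) / n
b = (sum((t - mean_t) * (math.log10(s) - mean_y) for t, s in samples)
     / sum((t - mean_t) ** 2 for t, _ in samples))
a = mean_y - b * mean_t

# Extrapolate back to log10(size) = 0, a notional single-base-pair "origin".
t_origin = -a / b
print(f"implied origin ≈ {t_origin:.1f} Gyr before present (Earth is ~4.5 Gyr old)")
```

With these placeholder numbers the implied origin lands well before the Earth formed, which is the kind of result the panspermia reading leans on.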