Anthropics and Fermi

post by Stuart_Armstrong · 2018-06-20T13:04:59.080Z · LW · GW · 20 comments

Contents

  Proper scoring rules
  Summing over different agents
  Reference classes and linked decisions
  Boltzmann brains and simulations
20 comments

tl;dr There is no well-defined "probability" of intelligent life in the universe. Instead, we can use proper scoring functions to penalise bad probability estimates. If we average scores across all existent observers, we get SSA-style probability estimates; if we sum them, we get SIA-style ones.

When presenting "anthropic decision theory" (the anthropic variant of UDT/FDT), I often get the response "well, that's all interesting, but when we look out to the stars with telescopes and probes, what do we really expect to see?" And it doesn't quite answer the question to say that "really expect to see" is incoherent.

So instead of evading the question, let's try and answer it.

Proper scoring rules

Giving the best guess about the probability of an event X is the same as maximising a proper scoring rule conditional on X. For example, someone can be asked to name a probability p, and they will get a reward of -(p - I_X)^2, where I_X is the indicator variable which is 1 if X happens and 0 if it doesn't.

Using a logarithmic proper scoring rule, Wei Dai demonstrated [LW · GW] that an updateless agent can behave like an updating one.

So let's apply the proper scoring rule to the probability that there is an alien civilization in our galaxy. As above, you guess p, and are given -(p - 1)^2 if there is an alien civilization in our galaxy, and -p^2 if there isn't.
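
As a quick check of why this rule rewards honesty, here is a minimal Python sketch (my own illustration; the true probability of 0.3 is an arbitrary assumption): whatever the actual chance of the event, the expected quadratic score is maximised by reporting exactly that chance.

```python
import numpy as np

# Expected quadratic score -(p - indicator)^2 when the event has true probability q.
def expected_score(p, q):
    return q * -((p - 1) ** 2) + (1 - q) * -(p ** 2)

true_prob = 0.3                    # assumed true chance of the event (arbitrary)
grid = np.linspace(0, 1, 1001)     # candidate reports p
best = grid[np.argmax([expected_score(p, true_prob) for p in grid])]
print(best)                        # ~0.3: honest reporting maximises the expected score
```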

Summing over different agents

But how do we combine estimates from different agents? If you're merely talking about probability - there are several futures you could experience, and you don't know which yet - then you simply take an expectation over these.

But what about duplication, which is not the same as probability [LW · GW]? What if there are two identical versions of you in the universe, but you expect them to diverge soon, and maybe one will find aliens in their galaxy while the other will not?

One solution is to treat duplication as probability. If your two copies diverge, that's exactly the same as if there were a 50-50 split into possible futures. The combined score for a universe is then the average of all scores within it, so one should use SSA-style probability and update one's estimates using that.

Or we could treat the duplicates as separate entities, and ensure that as many of them as possible are as correct as possible. This involves totalling up the scores in the universe, and so we use SIA-style probability.

In short:

SSA: in every universe, the average score is as good as can be.
SIA: for every observer, the score is as good as can be.

Thus the decision between SSA-style and SIA-style probabilities is the decision as to which summed proper scoring function one tries to maximise.
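
To see the two options in action, here is a small Python sketch with an assumed toy setup (not from the post): a fair coin decides whether the universe contains one copy of you (event false) or two copies (event true for both), every copy reports the same p, and each is scored with -(p - indicator)^2. Averaging within each universe recovers the SSA-style answer of 1/2; totalling recovers the SIA-style answer of 2/3.

```python
import numpy as np

def score(p, outcome):
    # Quadratic proper scoring rule.
    return -(p - outcome) ** 2

def ssa_style(p):   # average the scores within each universe, then take the expectation
    return 0.5 * score(p, 0) + 0.5 * np.mean([score(p, 1), score(p, 1)])

def sia_style(p):   # sum the scores within each universe, then take the expectation
    return 0.5 * score(p, 0) + 0.5 * (score(p, 1) + score(p, 1))

grid = np.linspace(0, 1, 1001)
print(grid[np.argmax([ssa_style(p) for p in grid])])  # ~0.5: the SSA-style answer
print(grid[np.argmax([sia_style(p) for p in grid])])  # ~0.667: the SIA-style answer
```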

So, which of these approaches is correct? Well, you can't tell from intrinsic factors alone. How do you know that any probability you utter is correct? Frequentists talk about long-run empirical frequencies, while Bayesians allow themselves to chunk a lot of this empirical data into the same category (your experience of people in the construction industry is partially applicable to academia). But, all in all, both are correcting their estimates according to observations. And the two scoring totals are just two ways of correcting this estimate - neither is better than the other.

Reference classes and linked decisions

I haven't yet touched upon the reference class issue. Independently of what we choose to sum over - the scores of all human estimates, all conscious entities, or all humans subjectively indistinguishable from me - by choosing our own estimates, we are affecting the estimates of those whose estimates are 'linked' with ours (in the same way that our decisions are linked with those of identical copies in the Prisoner's Dilemma). If we total up the scores, then as long as the sum includes all 'linked' scores, it doesn't matter how many other scores are included: they are just an added constant, fixed in any given universe, that we cannot change. This is the decision theory version of "SIA doesn't care about reference classes".

If we are averaging, however, then it's very important which scores we use. If we have large reference classes, then the large number of other observers will dilute the effect of linked decisions. Thus universes will get downgraded in probability if they contain a higher proportion of non-linked estimates to linked ones. This is the decision theory version of "SSA is dependent on your choice of reference classes".
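
Here is a toy Python sketch of both points (the setup and the fixed score of the non-linked observers are my own assumptions): as the number of non-linked observers in one universe grows, the averaged score shifts the optimal report, while the totalled score does not.

```python
import numpy as np

# Universe A: one linked observer, event false. Universe B: one linked observer,
# event true, plus n non-linked observers whose scores we cannot influence.
# The two universes are equally likely a priori.
FIXED = -0.25   # arbitrary fixed score of each non-linked observer

def score(p, outcome):
    return -(p - outcome) ** 2

def averaged(p, n):   # SSA-style: average all scores within each universe
    return 0.5 * score(p, 0) + 0.5 * (score(p, 1) + n * FIXED) / (n + 1)

def totalled(p, n):   # SIA-style: sum all scores within each universe
    return 0.5 * score(p, 0) + 0.5 * (score(p, 1) + n * FIXED)

grid = np.linspace(0, 1, 1001)
for n in (0, 10, 1000):
    best_avg = grid[np.argmax([averaged(p, n) for p in grid])]
    best_tot = grid[np.argmax([totalled(p, n) for p in grid])]
    print(n, best_avg, best_tot)
# Averaging: 0.5, ~0.083, ~0.001 -- the choice of reference class matters.
# Totalling: 0.5 every time -- the non-linked scores are just an added constant.
```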

However, unlike standard SSA, decision theory has a natural reference class: just use the class of all linked estimates.

Boltzmann brains and simulations

Because the probability is defined in terms of a score in the agent's "galaxy", it makes sense to exclude Boltzmann brains from the equation: they don't inhabit the galaxy they believe they are in, and their believed reality is entirely wrong. So, for the scoring rule to make sense from a decision-theoretic perspective, we should exclude them.

Simulations are more tricky, because simulated observers may discover simulated aliens within their simulated galaxies. If we have a well-defined notion of simulation - I'd argue that, in general, the term is ill-defined - then we can choose whether or not to include simulated galaxies in the calculation, and both estimates would make perfect sense.

20 comments

Comments sorted by top scores.

comment by cousin_it · 2018-06-20T16:16:10.276Z · LW(p) · GW(p)

Designate some random penny in your wallet as your "lucky coin". Assume that it's biased 99:1 in favor of heads. Spend a day making decisions according to that theory, i.e. walking around and offering people bets on the coin. Note the actual frequency of heads and the amount of money you lose. Repeat until you're convinced that "what will I observe?" isn't fully captured by "how am I making decisions?"

Replies from: Stuart_Armstrong, Chris_Leong
comment by Stuart_Armstrong · 2018-06-21T08:45:31.173Z · LW(p) · GW(p)

You can get probabilities from decisions by maximising a proper scoring function for your estimate of the expectation of an event happening. It works in all cases where probability does. A broken prior will break both probabilities and decision theory.

In the case of anthropics, the probability breaks - as the expectation of an event isn't well defined across duplicates - while decision theory doesn't.

Replies from: cousin_it
comment by cousin_it · 2018-06-21T09:35:25.175Z · LW(p) · GW(p)

In my scenario there are two things that can be called "probability":

1) The 99:1 odds that you use for decisions. We know how this thing works. You've shown that it doesn't work in anthropic situations.

2) The 50:50 odds that you observe regardless of your decisions. Nobody knows how this thing works. You haven't shown anything about it in anthropic situations.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2018-06-21T10:52:33.093Z · LW(p) · GW(p)

If you start with a prior on the coin being 99:1, then no amount of observations will persuade you otherwise. If you start with a prior that is more spread out across the possible biases of the coin - even if it's 99:1 in expectation - then you can update from observations.

Decision theory proceeds in exactly the same way; decision theory will "update" towards 50:50 unless it starts with a broken prior.
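
A small numerical sketch of this point (the specific priors and number of flips are illustrative assumptions): a point-mass prior on "the coin is 99% heads" never moves, while a prior that is heavily weighted towards 99% heads but spread across the other possible biases updates towards the observed frequency.

```python
import numpy as np

np.random.seed(0)
flips = np.random.rand(200) < 0.5           # 200 flips of a fair coin (True = heads)
heads, tails = flips.sum(), (~flips).sum()

biases = np.linspace(0.01, 0.99, 99)        # hypotheses about P(heads)
point_mass = (biases == biases[-1]).astype(float)   # all prior weight on 0.99
spread = np.full_like(biases, 0.01 / 98)    # 99% of the weight on 0.99, rest spread out
spread[-1] = 0.99

def posterior_mean(prior):
    likelihood = biases ** heads * (1 - biases) ** tails
    posterior = prior * likelihood
    return (biases * posterior).sum() / posterior.sum()

print(posterior_mean(point_mass))   # stays at 0.99: the point-mass prior never updates
print(posterior_mean(spread))       # close to the observed ~0.5 frequency
```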

So essentially there are three things: decision theory, utility, and priors. Using those, you can solve all problems, without needing to define anthropic probabilities.

Replies from: cousin_it
comment by cousin_it · 2018-06-21T12:09:15.175Z · LW(p) · GW(p)

You can solve all problems, except the ones you care about :-) Most human values seem to be only instrumentally about world states like in UDT, but ultimately about which conscious experiences happen and in what proportions. If you think these proportions come from decision-making, what's the goal of that decision-making?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2018-06-22T11:13:13.351Z · LW(p) · GW(p)

It comes back to the same issue again: do you value exact duplicates having exactly the same experience as a sum, or is that the same as one copy having it (equivalent to an average)? Or something in between?

Replies from: cousin_it
comment by cousin_it · 2018-06-23T09:00:39.510Z · LW(p) · GW(p)

Let's see. If I had memories of being in many anthropic situations, and the frequencies in these memories agreed with SIA to the tenth decimal place, I would probably value my copies according to SIA. Likewise with SSA. So today, before I have any such memories, I seem to be a value-learning agent who's willing to adopt either SIA or SSA (or something else) depending on future experiences or arguments.

You seem to be saying that it's better for me to get rid of value learning, replacing it with some specific way of valuing copies. If so, how should I choose which?

Edit: I've expanded this idea to a post [LW · GW].

comment by Chris_Leong · 2018-06-22T05:40:34.628Z · LW(p) · GW(p)

I don't quite understand the point you are making. If I'm following correctly, he is arguing that the probability estimate depends on whether:

a) We use the self-indication assumption and ignore duplication issues, or

b) We use the self-sampling assumption and argue for using a particular reference class.

He isn't arguing that we can just make up whatever assumptions we feel like.

comment by Chris_Leong · 2018-06-22T05:50:47.420Z · LW(p) · GW(p)

Thanks for posting this! Funnily enough, I was about to write up a similar post myself. I may still do so, as I was intending to take a different route and link it to the Tuesday Problem, expanding on what I already wrote about it in this post [LW · GW]. The reason I believe that link is insightful is that it raises some of the same issues with reference classes, without needing any anthropics or indexicals.

comment by avturchin · 2020-03-03T12:08:25.419Z · LW(p) · GW(p)

In short:
SSA: in every universe, the average score is as good as can be.
SIA: for every observer, the score is as good as can be.

After rereading, it occurred to me that the difference could be illustrated by the Sleeping Beauty example, where she earns money over many runs of the experiment if she correctly predicts the outcome. In the SIA setup, she gets money every time she is right, so she earns on both Monday and Tuesday and thus eventually collects more and more. In the SSA setup, she earns money only if there is a correct prediction of the coin for the whole experiment, not for separate days, and she doesn't earn money on average.
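
A quick simulation of this setup (the payoff of 1 unit per correct guess is an assumption for illustration): paying per awakening favours always guessing tails, while paying per experiment makes both guesses equally good.

```python
import random

random.seed(0)

def average_earnings(guess, per_awakening, runs=100_000):
    # Heads = one awakening, tails = two; 1 unit per correct guess.
    # "Per awakening" pays every correct guess; "per experiment" pays at most once per run.
    total = 0
    for _ in range(runs):
        coin = random.choice(["heads", "tails"])
        awakenings = 1 if coin == "heads" else 2
        correct = awakenings if guess == coin else 0
        total += correct if per_awakening else min(correct, 1)
    return total / runs

print(average_earnings("tails", per_awakening=True))   # ~1.0: beats guessing heads
print(average_earnings("heads", per_awakening=True))   # ~0.5
print(average_earnings("tails", per_awakening=False))  # ~0.5: same as guessing heads
print(average_earnings("heads", per_awakening=False))  # ~0.5
```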

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2020-03-03T12:38:11.067Z · LW(p) · GW(p)

Yep ^_^

See 3.1 in my old tech report: https://www.fhi.ox.ac.uk/wp-content/uploads/Anthropic_Decision_Theory_Tech_Report.pdf

Replies from: avturchin
comment by avturchin · 2020-03-03T18:52:16.061Z · LW(p) · GW(p)

I have an idea for a non-fictional example of Sleeping Beauty - do you think this would be a correct implementation:

A person is in a room with many people. He flips a coin, and if it is heads, he asks one random person to guess whether it came up heads or tails. If it is tails, he asks two random people the same question. The other people can't observe how many people were asked.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2020-03-03T21:36:16.182Z · LW(p) · GW(p)

See https://www.lesswrong.com/posts/YZzoWGCJsoRBBbmQg/solve-psy-kosh-s-non-anthropic-problem [LW · GW]

You're rediscovering some classics ^_^

That problem addresses some of the issues in anthropic reasoning - but not all.

comment by Shmi (shminux) · 2018-06-20T23:11:38.793Z · LW(p) · GW(p)

Tangential: my personal favorite resolution of the Fermi paradox is that we would not know alien life if it stared us in the face. Literally. If you take something like stars, they satisfy most definitions of what constitutes life, as long as one removes the "biological" qualifier. A more dramatic way to phrase it would be

There is no life in the Universe. Not even on Earth. Life is not a thing.

Because there is nothing special about the Earth, from the point of view of physical laws to which we all are reducible.

Replies from: Chris_Leong, ofer
comment by Chris_Leong · 2018-06-22T05:42:57.888Z · LW(p) · GW(p)

"There is no life in the Universe. Not even on Earth. Life is not a thing" - That seems to just be a word game which can't account for the lack of certain observations. The map is not the territory.

comment by Ofer (ofer) · 2018-06-21T13:52:36.665Z · LW(p) · GW(p)

Do you mean that perhaps (some?) stars are "intelligent" in some way that we're not aware of? (i.e. some cool computation is taking place within them?)

Replies from: shminux
comment by Shmi (shminux) · 2018-06-21T16:37:46.729Z · LW(p) · GW(p)

I mean life in the general sense, as discussed, for example here:

Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature
Organization: being structurally composed of one or more cells – the basic units of life
Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter.
Adaptation: the ability to change over time in response to the environment. This ability is fundamental to the process of evolution and is determined by the organism's heredity, diet, and external factors.
Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms.

If you take a galaxy as a collection of stars, you can easily identify all of those in that system.

Replies from: ofer
comment by Ofer (ofer) · 2018-06-21T17:57:35.143Z · LW(p) · GW(p)

Nice :)

But how does this resolve the Fermi paradox? It's still a mystery that we exist and don't see intelligent agents pursuing convergent instrumental goals anywhere in the observable universe.

Replies from: shminux
comment by Shmi (shminux) · 2018-06-22T04:06:10.941Z · LW(p) · GW(p)

How do you know that we do not see them? Or it. We do not know what those instrumental goals might be for these hypothetical agents. Maybe stars are intelligent black hole maximizers.

comment by avturchin · 2018-06-22T10:52:39.595Z · LW(p) · GW(p)

Minor nitpick: There should be a class of Boltzmann brains who know (or suspect) that they are Boltzmann brains, so their beliefs are not entirely wrong.