Future Filters [draft]

post by snarles · 2011-05-16T12:42:02.466Z · LW · GW · Legacy · 12 comments

Contents

  "Dr. Evil's Machine"
  Future Filter Model I

See Katja Grace's article: http://hplusmagazine.com/2011/05/13/anthropic-principles-and-existential-risks/

There are two comments I want to make about the above article.

First: the resolution to God's Coin Toss seems fairly straightforward.  I argue that the following scenario is formally equivalent to 'God's Coin Toss':

"Dr. Evil's Machine"

Dr. Evil has a factory for making clones.  The factory has 1000 separate, identical rooms.  Every day, a clone is produced in each room at 9:00 AM.  However, there is a 50% chance of a malfunction, in which case 900 of the clones suddenly die by 9:30 AM, while the remaining 100 are healthy and notice nothing.  At the end of the day Dr. Evil ships off all the clones that were produced and restores the rooms to their original state.

You wake up at 10:00 AM and learn that you are one of the clones produced in Dr. Evil's factory, and you learn all of the information above.  What is the probability that the machine malfunctioned today?

In the second reformulation, the answer is clear from Bayes' rule.  Let M be the event that the machine malfunctioned and S the event that you are alive at 10:00 AM.  From the information given, we have

P(M) = 1/2

P(~M) = 1/2

P(S|M) = 1/10 (if the machine malfunctions, only 100 of the 1000 clones produced that day survive)

P(S|~M) = 1

Therefore,

P(S) = P(S|M) P(M) + P(S|~M) P(~M) = (1/2)(1/10) + (1/2)(1) = 11/20

P(M|S) = P(S|M) P(M)/P(S) = (1/20)/(11/20) = 1/11

That is, given the information you have, you should conclude that the probability that the machine malfunctioned is 1/11.
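A quick Monte Carlo check of this number, as a minimal sketch assuming "you" are a uniformly random clone among all clones alive at 10:00 AM across many simulated days:

```python
import random

def estimate_p_malfunction(days=200_000, seed=0):
    """Fraction of clones alive at 10:00 AM whose day included a malfunction.

    Assumption (not stated explicitly above): "you" are a uniformly random
    clone among all clones alive at 10:00 AM over the simulated days.
    """
    rng = random.Random(seed)
    alive_total = 0
    alive_on_malfunction_days = 0
    for _ in range(days):
        malfunction = rng.random() < 0.5
        survivors = 100 if malfunction else 1000  # 900 of 1000 die on malfunction days
        alive_total += survivors
        if malfunction:
            alive_on_malfunction_days += survivors
    return alive_on_malfunction_days / alive_total

print(estimate_p_malfunction())  # converges to about 0.0909, i.e. 1/11
```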


The second comment concerns Grace's reasoning about future filters.

I will assume that the following model is a fair representation of Grace's argument about relative probabilities for the first and second filters.

Future Filter Model I

Given: universe with N planets, T time steps. Intelligent life can arise on a planet at most once.

At each time step:

  1. each surviving intelligent species becomes permanently visible to all other species with probability c (the third filter probability)
  2. each surviving intelligent species self-destructs with probability b (the second filter probability)
  3. each virgin planet produces an intelligent species with probability a (the first filter probability)

Suppose N=one billion, T=one million.  Put uniform priors on a, b, c, and the current time t (an integer between 1 and T).

Your species appeared on your planet at unknown time step t_0.  The current time t is also unknown.  At the current time, no species has become permanently visible in the universe.  Conditioned on this information, what is the posterior density for the first filter parameter a?
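The model is not analyzed further here, but as a minimal sketch of how the posterior on a could be approximated numerically, consider rejection sampling under two simplifying assumptions of mine: a drastically scaled-down universe (N and T far below the billion planets and million time steps above, so the simulation runs in reasonable time), and conditioning only on "at least one intelligent species is still alive and none has become permanently visible", which sidesteps the self-sampling question debated in the comments.

```python
import numpy as np

def posterior_a_samples(n_draws=20_000, N=1_000, T=50, seed=0):
    """Rejection-sampling sketch of Future Filter Model I (scaled down).

    Draw (a, b, c, t) from uniform priors, run the three-step dynamics
    for t steps, and keep the draw only if at least one intelligent
    species is still alive and none has become permanently visible.
    The kept values of `a` approximate its posterior.
    """
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        a, b, c = rng.random(3)
        t = int(rng.integers(1, T + 1))
        virgin, alive, visible = N, 0, False
        for _ in range(t):
            # 1. each surviving species becomes permanently visible w.p. c
            if alive and rng.binomial(alive, c) > 0:
                visible = True
            # 2. each surviving species self-destructs w.p. b
            alive -= rng.binomial(alive, b)
            # 3. each virgin planet produces an intelligent species w.p. a
            births = rng.binomial(virgin, a)
            virgin -= births
            alive += births
        if alive > 0 and not visible:
            accepted.append(a)
    return np.array(accepted)

samples = posterior_a_samples()
print(len(samples), samples.mean() if len(samples) else float("nan"))
```

A histogram of the accepted values of a (e.g. via np.histogram) gives a rough picture of the posterior under these assumptions.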


 

12 comments


comment by Vladimir_Nesov · 2011-05-16T19:04:48.821Z · LW(p) · GW(p)

In the second reformulation, the answer is clear from Bayes' rule.

But the relevant event structure is not clear. It's easy to do the math, but it's not clear which math should be done. The discussions of Sleeping Beauty a few months back (I think it was) should've made it clear that there is little point in postulating probabilities (cousin_it might have a citation ready, I remember he made this point a few times), because it's mostly a dispute about definitions (of random variables, etc.).

Instead, one should consider specific decision problems and ask about what decisions should be made. Figuring out the decisions might even involve calculating probabilities, but these would be introduced for a clear purpose, so that it's not merely a matter of definitions and there's actually a right answer, in the context of a particular method for solving a particular decision problem. While solving different decision problems, we might even encounter different "contradictory" probabilities associated with the same verbal specifications of events.

Replies from: Manfred
comment by Manfred · 2011-05-16T20:12:33.108Z · LW(p) · GW(p)

Considering it as a decision problem is a particular side in the definition/axiom dispute - a side that also corresponds with requiring the probabilities be the frequencies - i.e. if you use the other definitions the probabilities will not be frequencies. So I think the resolution to Sleeping Beauty is even stronger - there is a right side, and a right way to go about the problem.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-16T20:51:40.499Z · LW(p) · GW(p)

Considering it as a decision problem is a particular side in the definition/axiom dispute

Considering what as a decision problem? As formulated, we are not given one.

Replies from: Manfred
comment by Manfred · 2011-05-16T22:50:18.677Z · LW(p) · GW(p)

Exactly! :P

Assigning constant rewards for correct answers can be compared with assigning constant rewards to each person at the end of the experiment, and these options are (I think) isomorphic to the two ways to look at the problem through probability - the fact that the choice seems more intuitive through the lens of decision theory is a fact about our brains, not the problem.
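As a minimal sketch of this comparison using the Dr. Evil scenario from the post (the two reward schemes below are the ones described above, filled in by assumption): rewarding each surviving clone for a correct guess gives expected rewards of 50 versus 500, odds matching a probability of 1/11, while a single fixed reward per day gives 1/2 versus 1/2.

```python
# Dr. Evil's machine: 1000 clones per day; a malfunction (probability 1/2)
# leaves only 100 alive.  Compare two reward schemes for the guess
# "did the machine malfunction today?" -- an illustrative sketch.

P_MALFUNCTION = 0.5
SURVIVORS = {True: 100, False: 1000}  # survivors given malfunction / no malfunction

def expected_reward(guess_malfunction, per_clone):
    """Expected reward of always guessing `guess_malfunction`.

    per_clone=True:  each surviving clone that guesses correctly earns 1.
    per_clone=False: a single reward of 1 is paid if the guess is correct,
                     no matter how many clones are alive.
    """
    total = 0.0
    for malfunction in (True, False):
        if guess_malfunction != malfunction:
            continue  # wrong guess in this world: no reward
        p_world = P_MALFUNCTION if malfunction else 1 - P_MALFUNCTION
        total += p_world * (SURVIVORS[malfunction] if per_clone else 1)
    return total

for per_clone in (True, False):
    r_yes = expected_reward(True, per_clone)
    r_no = expected_reward(False, per_clone)
    print(per_clone, r_yes, r_no, r_yes / (r_yes + r_no))
# per-clone rewards: 50 vs 500, ratio 1/11; per-day rewards: 0.5 vs 0.5, ratio 1/2
```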

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-16T22:58:23.368Z · LW(p) · GW(p)

You've just shifted the definitional debate to deciding which decision problem to use, which was not my suggestion.

Replies from: Manfred
comment by Manfred · 2011-05-16T23:26:58.289Z · LW(p) · GW(p)

But I claim it is an inevitable consequence of your suggestion, since the same sort of arguments that might be made about which way of calculating the probability can be made about which utility problem to solve, if you're doing the same math. Or put another way, you can take the decision-theory result and use it to calculate the rational probabilities, so any stance on using decision theory is a stance on probabilities (if the rewards are fixed).

I think the problem just looks so obvious to us when we use decision theory that we don't connect it to the non-obvious-seeming dispute over probabilities.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-16T23:37:41.082Z · LW(p) · GW(p)

Again, I didn't suggest trying to reformulate a problem as a decision problem as a way of figuring out which probability to assign. Probability-assignment is not an interesting game. My point was that if you want to understand a problem, understand what's going on in a given situation, consider some decision problems and try to solve them, instead of pointlessly debating which probabilities to assign (or which decision problems to solve).

Replies from: Manfred
comment by Manfred · 2011-05-17T00:00:29.126Z · LW(p) · GW(p)

Oh, so you don't think that viewing it as a decision problem clarifies it? Then choosing a decision problem to help answer the question doesn't seem any more helpful than "make your own decision on the probability problem," since they're the same math. This then veers toward the even-more-unhelpful "don't ask the question."

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-17T00:16:41.050Z · LW(p) · GW(p)

Then choosing a decision problem to help answer the question doesn't seem any more helpful than "make your own decision on the probability problem," since they're the same math.

It's not intended to help with answering the question, no more than dissolving any other definitional debate helps with determining which definition is the better. It's intended to help with understanding of the thought experiment instead.

Replies from: Manfred
comment by Manfred · 2011-05-17T02:24:54.926Z · LW(p) · GW(p)

Changing the labels on the same math isn't "dissolving" anything, as it would if probabilities were like the word "sound." "Sound" goes away when dissolved because it's subjective and dissolving switches to objective language. Probabilities are uniquely derivable from objective language. Additionally there is no "unaskable question," at least in typical probability theory - you'd have to propose a fairly extreme revision to get a relevant decision theory answer to not bear on the question of probabilities.

comment by MrMind · 2011-05-17T07:05:21.285Z · LW(p) · GW(p)

The Future Filter Model I strikingly resembles a hidden Markov model, in which each stage of the hidden chain is a filter and the "observables" are the detectable trace of a civilization...

comment by snarles · 2011-05-16T23:40:34.587Z · LW(p) · GW(p)

I trailed off at the end of the post because I came up with a different model: http://lesswrong.com/lw/5q1/colonization_models_a_programming_tutorial_part_12/