Comments

Comment by agai on Your existence is informative · 2020-01-22T01:41:48.860Z · LW · GW

Now, to restate the original "thing" we were trying to honestly say we had a prior for:

Suppose you know that there are a certain number of planets, N. You are unsure about the truth of a statement Q. If Q is true, you put a high probability on life forming on a given arbitrary planet. If Q is false, you put a low probability on this. You have a prior probability for Q.

Does this work, given the above and our earlier response?

We do not actually have a prior for Q, but we have a rough prior for a highly related question Q', which can likely be transformed fairly easily into a prior for Q by mechanical methods. So let's do that "non-mechanically" by saying:

  1. If we successfully generate a prior for Q, that part is OK.
  2. If Q is false: earlier we transformed this into the more interesting reading, still consistent with a possible meaning of this part, "if Q is not-true". If we use "Q is not-true" as the proposition, this part is OK. But we should also consider the original meaning of "false", i.e. "not true at all", i.e. logically assigned zero mass. If we handle both readings, this part is OK.
  3. If Q is true, you put a high probability on life forming on a given arbitrary planet. This was the evidential question which we said the article was not mainly about, so we would continue reasoning from this point by reading the rest of the article until the next point at which we expect an update to our prior. However, if we do the rest of the steps (1 to the end here), those updates can be relatively short and quick, since each is, in a technical sense, just a multiplication; this can certainly be done by simultaneous (not merely concurrent) algorithms. (Predicted continuation point.)
  4. You are unsure about the truth of a statement Q. OK.
  5. Suppose you know that there are a certain number of planets, N. This directly implies that we are only interested in the finite case and not the infinite case for N. However, we may have to keep the infinite case in mind. OK.

-> [Now, to be able to say "we have a prior," we have to write the continuation from step 3 until both:

  1. the meaning of "what the article is about" becomes clear (so we can disambiguate the intended meanings of 1-5 and re-check), and
  2. we recognise that the prior can be constructed, and can roughly describe what it would look like.

From our previous response, our prior for finite N and a small amount of evidence was that "life is unlikely" (because there were two separate, observationally consistent routes which resulted in 'the same' answer in some equivalence or similarity sense). For infinite N, it looks like "n is of lower dimension in some way than N" (dimension here meaning "bigness").

Now we have a prior for both, so we can try to convert back to a prior for the original proposition Q, which was:

You are unsure about the truth of a statement Q. If Q is true, you put a high probability on life forming on a given arbitrary planet. If Q is false, you put a low probability on this.

Our prior is that Q is false.]

In retrospect, the preceding (now in square brackets, which were edited in) could be considered a continuation of step 3. So we are OK on all 5 points, and we have a prior, so we can continue responding to the article.

(To be continued in reply)

Comment by agai on LW For External Comments? · 2020-01-12T07:37:07.298Z · LW · GW

am maybe too enthusiastic in general for things being 'well organized'.

I don't think so. :)

Comment by agai on eigen's Shortform · 2020-01-12T07:15:15.546Z · LW · GW

Comment removed for posterity.

Comment by agai on How was your decade? · 2019-12-29T06:53:43.353Z · LW · GW

Comment removed for posterity.

Comment by agai on How was your decade? · 2019-12-29T06:38:32.818Z · LW · GW

Comment removed for posterity.

Comment by agai on Moloch Hasn’t Won · 2019-12-29T06:16:09.957Z · LW · GW

Yes. Although Moloch is "kind of" all-powerful, there are also different levels of "all-powerful", so there can be "more all-powerful" things. :)

Comment by agai on Moloch Hasn’t Won · 2019-12-29T06:14:47.605Z · LW · GW

Would you be able to expand on those? I thought they were quite apt.

Comment by agai on Moloch Hasn’t Won · 2019-12-29T06:14:09.568Z · LW · GW

They both exist in different realms; however, Elua's is bigger, so by default Elua would win, but only if people care to live more in Elua's realm than in Moloch's. Getting the map-territory distinction right is pretty important, I think.

Comment by agai on Moloch Hasn’t Won · 2019-12-29T06:12:01.281Z · LW · GW

Accidents, if not too damaging, are net positive because they allow you to learn more and cause you to slow down. If you are wrong about what is good/right/whatever, and you think you are a good person, then you'd want to be corrected. So if you're having a lot of really damaging accidents in situations you could reasonably be expected to control, that's probably not too good, but "reasonably be expected to control" is a very high standard. What I'm very explicitly not saying here is that the "just-world" hypothesis is true in any way; accidents *are* accidents, it's just that they can be net positive.

Comment by agai on Moloch Hasn’t Won · 2019-12-29T06:08:44.926Z · LW · GW

It's more effective to retain more values: since physics is basically unitary (at least as far as we know), you'll have more people on your side if you retain the values of past people. So we'd be able to defeat this Moloch if we're careful.

Comment by agai on Moloch Hasn’t Won · 2019-12-29T06:06:12.541Z · LW · GW

Yeah, so, this is a complex issue. It is actually true, IMO, that we want fewer people in the world so that we can focus on giving them better and more meaningful lives. Unfortunately this would mean that people have to die, but yeah... I also think that cryonics doesn't really make it much easier or harder to revive people; I would say either way you pretty much have to do the work of re-raising them by giving them the same experiences...

Although, now that I think about it, there was a problem about that recently where I thought of a way to just "skip to the end state" given a finite length and initial state; the problem is we'd need to be able to simulate the entire world up to the end of the person's life. So I guess that's why I don't think cryonics is too important except for research purposes, and for motivating people to put their efforts into power efficiency, insulation, computation, materials technology, etc. So it is useful in that sense, probably more so than just burying people, but in the sense of "making it easier to bring them back alive," not really. Having fewer people also makes it more likely we can have more than a few seconds in which no one dies, which would be nice for later.

In terms of numbers, by "fewer" I'm thinking 3-6 billion, and maybe population will keep increasing and our job will just be harder, which is annoying, but yeah. I would say "don't have kids if you don't think the world is actually getting better" is a good idea, particularly if you want to make it easier for later people to potentially bring back the people you care about who are already dead.

Life *extension* and recovery, etc., on the other hand, are *much, much easier* problems. I'm super interested in the technical aspects of this right now, although my views will probably be substantially different from many people's.

Basically in summary I agree with your post. :)

Comment by agai on Your existence is informative · 2019-12-28T19:54:10.691Z · LW · GW

My response to this would be:

  1. This is a very good argument/summary of arguments/questions
  2. I would analyse this in sequence (i.e., taking quotes in order) and then recursively go back to re-examine the initial state of understanding to see if it's at least consistent. If it isn't, serious updates to my worldview might have to occur.
  3. These can be deferred and interleaved concurrently with other updates that are either more interesting or higher priority. Deferral can work by "virtualising" the argument as a "suppose (this: ...)" question.

From 2 (now with two layers of indirection, to avoid updating on my own argument until later):

Suppose you know that there are a certain number of planets, N. You are unsure about the truth of a statement Q. If Q is true, you put a high probability on life forming on a given arbitrary planet. If Q is false, you put a low probability on this. You have a prior probability for Q.

Here I stop and summarise:

Suppose there are N planets (the collection referred to as "N" in totality). Q can be true or not-true. If Q, we observe life; if not-Q, we observe no life. The latter is already falsified by evidence, so revise it to: if not-Q, a "small finite number compared to N" of planets have life. Hence Q cannot be false in the strict sense. Question: can Q have P=1? Yes, as P=1 is just a logical, technical criterion and not necessarily relevant to the real world except in theory. Can Q be logically true? No, as that would exclude the nuance of "how many planets out of N have life," which is the entire interesting part of the question. (This uses the point that there is actually a difference between "logical certainty" and "P=1".)

So the task so far is to construct a prior CDF based on the previously quoted text. Since N is a finite, specific number of planets, this could be done by exhaustively checking each case of n for each N. Suppose N=1. Done.

Suppose N=2. Then n (the number of planets with life) is either 1 or 2. Is it 1? Yes. Is it 2? Life has been observed on comets, so n would likely be 2 if N were much larger than 1. If N=2, then either the comets came from the one other planet or from the one planet already known to have life. A priori it is much more likely that n=1 in this case, given that life is observed to be a surprisingly rare phenomenon; however, some probability mass must be assigned to the idea that n=N=2, given that the reasoning described above has some relevance to the question we are actually interested in, which is N equal to some very large number, roughly the number of planets we observe to probably be in the universe.
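To make the "mechanical method" mentioned earlier concrete, here is a minimal sketch (my own construction, not anything from the article) of how a prior over n could be assembled for a small finite N: mix a binomial distribution under Q with one under not-Q, weighted by a prior on Q. The prior on Q and the per-planet life probabilities are purely illustrative assumptions.

```python
# Toy prior over n (planets with life) out of N planets, as a mixture of the
# "Q true" and "Q false" hypotheses. All numbers are illustrative assumptions.
from math import comb

def prior_over_n(N, p_Q=0.5, p_life_if_Q=0.5, p_life_if_not_Q=1e-9):
    def binom_pmf(n, p):
        return comb(N, n) * p**n * (1 - p)**(N - n)
    # P(n) = P(Q) * P(n | Q) + P(not-Q) * P(n | not-Q)
    pmf = [p_Q * binom_pmf(n, p_life_if_Q) + (1 - p_Q) * binom_pmf(n, p_life_if_not_Q)
           for n in range(N + 1)]
    # Running totals give the prior CDF mentioned above.
    cdf, total = [], 0.0
    for p in pmf:
        total += p
        cdf.append(total)
    return pmf, cdf

pmf, cdf = prior_over_n(N=2)   # the N=2 toy case discussed above
print(pmf)
```

Conditioning this mixture on the observation that n is at least 1 (we exist) would shift weight toward Q, which is roughly the kind of update the article's title is asking about.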

Suppose N is some number large compared to 1, 2, etc. Either N is prime or it can be divided by some factor up to sqrt(N). Either way, it can be "added up to" using only 1, 2, 3, and 4, or some other small subset of numbers less than 10, such as 6, 5, 2 (which implicitly defines subtraction as well). If division is also allowed as an operation, then N is either prime or has a prime factorisation which can be calculated fairly straightforwardly.
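As an aside, here is a minimal trial-division sketch of the "factors up to sqrt(N)" point (my own, purely illustrative; for numbers as large as the N considered next, one would use a probabilistic primality test instead):

```python
def smallest_factor(N):
    """Return the smallest prime factor of N if N is composite, else None (N is prime)."""
    d = 2
    while d * d <= N:          # only candidate factors up to sqrt(N) need checking
        if N % d == 0:
            return d
        d += 1
    return None

print(smallest_factor(10**25))  # 2: 10^25 is even, hence not prime (used below)
```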

If N is prime, then we should only use linear operations to obtain our probability distribution. If it is not prime, we may use nonlinear methods in addition. Either way, we can use both, and concurrently run the calculation to see whether a specific N is prime. Or we may directly choose a large N which has already been shown to be prime or not prime. Suppose N is 10,000,000,000,000,000,000,000,000. This is known not to be prime, and would likely be considered large compared to 1, 2, etc. We may also choose N as some prime number close to that value and then apply only the linear part of the logic; this would give us a close estimate for that N, to which we can then apply some form of insertion sort/expansion/contraction/interpolation, using all the tools available to us in accordance with the rules of probability theory, to obtain a "best estimate" for the prime N which doesn't require much extra calculation and is likely good enough for cosmological estimates. See https://xkcd.com/2205/. Remember that after obtaining this prior we can "just multiply" to update it based on further observations. This is probably why it's a good idea to get the prior from a very small number of observations, if possible.
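The "just multiply" step at the end is ordinary Bayesian updating: multiply the prior pointwise by the likelihood of the new observation and renormalise. A minimal sketch, with assumed numbers:

```python
def bayes_update(prior, likelihood):
    """Pointwise multiply a prior by a likelihood and renormalise (the "just multiply" step)."""
    unnormalised = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnormalised)
    return [u / total for u in unnormalised]

# Assumed example: prior over {Q, not-Q}, and an observation twice as likely under Q.
prior = [0.5, 0.5]
likelihood = [0.8, 0.4]
print(bayes_update(prior, likelihood))  # [0.666..., 0.333...]
```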

...

Now that we have worked out how we would calculate an example, it is not necessary to do so yet; this can (and indeed should) be done after writing down the full response, because it may turn out not to be necessary for answering the actual quoted question which this article is about.


So what is "the rough shape" of our prior, given the response I myself have written so far?

Well, if the stated observations:

  1. N is large compared to 1, 2 etc. (at least on the order of 10^25)
  2. n is not 0
  3. "You have a prior probability for Q"
  4. 'Comets aren't all generated from our planet originally, and (at least precursors to) life have been observed on comets'
  5. The question we are interested in answering isn't about cosmology but whether "Your existence is informative"

are taken as a starting point, then we can form a rough prior for Q, which is roughly that "n is small compared to N." This is equivalent to saying that "life is unlikely," since there are many more big numbers than small numbers, and under a uniform distribution n would likely not be small compared to N (i.e., it would be within a few orders of magnitude of N). "What does the evidence say about whether life is unlikely?" is now a relevant question for our larger question about the informativeness of the original question about Q.
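A quick toy check (mine, not the article's) of the "many more big numbers than small numbers" point: under a uniform distribution on n in {0, ..., N}, almost no probability mass sits on values of n that are small compared to N.

```python
N = 10**25   # the order of magnitude assumed above
for k in (3, 6, 12):
    p_small = (N // 10**k + 1) / (N + 1)   # P(n <= N / 10^k) under a uniform prior
    print(f"P(n <= N/10^{k}) is about {p_small:.1e}")
# roughly 1e-3, 1e-6, 1e-12: a prior saying "n is small compared to N" therefore
# departs sharply from uniform, i.e. it encodes "life is unlikely".
```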

Separately, N may not be finite, and we are interested in the actual question of the article in that case too. So we're not actually that interested in the preceding material; as a prior, though, we have that "life is unlikely even for infinite N," which would still mean that for infinite N there would be an infinity of life.

It seems more important, by the numbers, to consider first the case of infinite N, which I will do in a reply.

Comment by agai on [deleted post] 2019-12-27T21:21:41.371Z

Comment removed for posterity.

Comment by agai on NaiveTortoise's Short Form Feed · 2019-12-27T21:18:37.503Z · LW · GW

Comment removed for posterity.

Comment by agai on [deleted post] 2019-12-27T21:05:59.720Z

I have two default questions when attempting to choose between potential actions: I ask both "why" and "why not?".

Comment by agai on [deleted post] 2019-12-26T12:28:38.385Z

Comment removed for posterity.

Comment by agai on A Critique of Functional Decision Theory · 2019-12-25T04:55:06.526Z · LW · GW

Okay, because I'm bored and have nothing to do, and I'm not going to be doing serious work today, I'll explain my reasoning more fully on this problem. As stated:

You face two open boxes, Left and Right, and you must take one of them. In the Left box, there is a live bomb; taking this box will set off the bomb, setting you ablaze, and you certainly will burn slowly to death. The Right box is empty, but you have to pay $100 in order to be able to take it. 
A long-dead predictor predicted whether you would choose Left or Right, by running a simulation of you and seeing what that simulation did. If the predictor predicted that you would choose Right, then she put a bomb in Left. If the predictor predicted that you would choose Left, then she did not put a bomb in Left, and the box is empty. 
The predictor has a failure rate of only 1 in a trillion trillion. Helpfully, she left a note, explaining that she predicted that you would take Right, and therefore she put the bomb in Left. 
You are the only person left in the universe. You have a happy life, but you know that you will never meet another agent again, nor face another situation where any of your actions will have been predicted by another agent. What box should you choose?

Without reference to any particular decision theory, let's look at which option is actually correct, and then we can see which decision theory would output that action, in order to evaluate which one might "obtain more utility."

The situation you describe with glass windows is a completely different problem and has a possibly different conclusion, so I'm not going to analyse that one.

Given in the problem statement we have:

We have the experiential knowledge that we are faced with two open boxes. We have the logically certain knowledge that: 1. we must take one of them; 2. in the Left box there is a live bomb; 3. taking Left will set off the bomb, which will then set you on fire and burn you slowly to death; 4. Right is empty, but you have to pay $100 in order to be able to take it.

This is an impossible situation, in that no actual agent could actually be put in this situation. However, so far, the implications are as follows:

Our experiential knowledge may be incorrect. If it is, then the logically certain knowledge can be ignored because it is as if we have a false statement as the precondition for the material conditional.

If it isn't, then the logical implication goes:

We must take at least one box. The Left box, understood to mean the box experientially to the left of us when we "magically appeared" at the start of this problem (rather than any box we might put on our left, for example by walking around to the other side of whatever may be containing the boxes; henceforth just Left, and symmetrically Right), contains a live bomb, understood to mean a bomb which will by default explode when some triggering process happens, provided that process works correctly.
We are given the logically certain future knowledge that this process will work correctly, and the bomb will explode and burn us slowly to death, if and only if we take the Left box.
We also have the logically certain current knowledge that the Right box is empty, but the "triggering process" that allows us to take it as an option has a prerequisite of "giving up $100," whatever that means.

Okay so far.

A long-dead predictor predicted whether you would choose Left or Right, by running a simulation of you and seeing what that simulation did. If the predictor predicted that you would choose Right, then she put a bomb in Left. If the predictor predicted that you would choose Left, then she did not put a bomb in Left, and the box is empty. 
The predictor has a failure rate of only 1 in a trillion trillion. Helpfully, she left a note, explaining that she predicted that you would take Right, and therefore she put the bomb in Left.

We will not assume that this "predictor" was any particular type of thing; in particular, it need not be a person.

In order to make the problem as written less confusing, we will assume that "Left" and "Right" for the predictor refer to the same things I've explained above.

Since there is no possible causal source for this information, as we have been magically instantiated into an impossible situation, the above quoted knowledge must also be logically certain.

Now, in thinking through this problem, we may pause and reason that this predictor seems helpful, in that it deliberately put the bomb in the box which it, to the best of its judgement, predicted we would not take. Further, the problem statement contains the sentence "Helpfully, she left a note, explaining that she predicted that you would take Right, and therefore she put the bomb in Left."

This is strong evidence that the predictor (which, recall, we are not assuming to be any particular type of thing, though that fact doesn't exclude the possibility that it is an agent or person-like being) is behaving as if it were helpful. So by default, we would like to trust the predictor. So when our logically certain knowledge says that she has a failure rate of 1 in 1,000,000,000,000,000,000,000,000, we would prefer to trust that as being correct.

Now, previously we found the possibility that our experiential knowledge may be incorrect; that is, we may not in fact be faced with two open boxes even though it looks like we are, or the boxes may not contain a bomb / be empty, or some other thing. This depends on the "magically placed" being's confidence in its own ability to make inferences from the sensory information it receives.

What we observe, from the problem statement, is that there does appear to be a bomb in the Left box, and that the Right box does appear to be empty. However, we also observe that we would prefer this observation to be wrong such that our logical knowledge that we must take a box is incorrect. Because if we can avoid taking either box, then there is no risk of death, nor of losing $100.

By default, one would wish to continue a "happy life"; however, in the problem statement we are given that we will never see another agent again. The prediction a rational agent can make from this is that their life will eventually become unhappy, because happiness is known to be a temporary condition; other agents can be physically made from natural resources given enough time; and therefore physical resources and/or time must be limited such that there is not enough to make another agent.

Making another agent when you are the only agent in existence is probably one of the hardest possible problems, but nevertheless, if you cannot do it, then you can predict that you will eventually run out of physical resources and time no matter what happens, and therefore you are in a finite universe regardless of anything else.

Since you have definitively located yourself in a finite universe, and you also have the logically certain knowledge that the simulator/predictor is long dead and appears to be helpful, this is logically consistent as a possible world-state.

Now we have to reason about whether the experiential evidence we are seeing has a chance of more than 1 in 1,000,000,000,000,000,000,000,000 of being wrong, i.e., whether it is more or less reliable than the predictor. We know how to do this: just use probability theory, which can be reduced to a mechanical procedure.

However, since we have limited resources, before we actually do this computation, we should reason about what our decision would be in each case, since there are only three possibilities:

1. The experiential evidence is less likely to be correct, in which case the simulator probably hasn't made an error, but the first part of our logically certain knowledge can be ignored.

2. The experiential evidence is more likely to be correct, in which case it's possible that the simulator made a mistake, and although it appears trustworthy, we would be able to say that its prediction may be wrong and, separately, that perhaps its note was not helpful.

3. They are exactly equally likely, in which case we default to trusting the simulator.
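A toy way to mechanize that three-way comparison (my own sketch; only the predictor's failure rate comes from the problem statement, the sensory error rate is an assumed number):

```python
predictor_failure = 1e-24   # 1 in a trillion trillion, from the problem statement
sensory_error = 1e-6        # assumed chance that our perception of the setup is wrong

if sensory_error > predictor_failure:
    case = 1   # our evidence is less reliable: the simulator probably hasn't erred
elif sensory_error < predictor_failure:
    case = 2   # our evidence is more reliable: the simulator may have made a mistake
else:
    case = 3   # exactly equal: default to trusting the simulator
print(case)    # 1 under these assumed numbers
```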

In each case, what would be our action?

1. In this case, the logically certain knowledge that we must choose one of the boxes can be ignored, but it may still be correct. So we have to find some way to check independently whether it might be true, without making use of the logically certain knowledge. One way is to take the same action as in option 2; in addition, you can split the propositions in the problem statement into atoms, take the power set, and consider the implications of each combination (a small sketch of this follows case 3 below). The total information obtained by this process will inform your decision. However, logical reasoning is just another form of obtaining evidence for non-logically-omniscient agents, so in practice this option reduces to exactly the same set of possible actions as option 2, following:

2. In this case, all we have to go on is our current experiential knowledge, because the source of all our logically certain knowledge is the simulator; since in this branch the experiential knowledge is more likely to be correct, the simulator is more likely to have made a mistake, and we must work out for ourselves what the actual situation is.

Depending on the results of that process, you might

1. Just take Right, if you have $100 on you and you observe that you are under coercion of some form (including "too much" time pressure, i.e., if you do not have enough time to come to a decision)

2. Take neither box, because both are net negative

3. Figure out what is going on and then come back and potentially disarm the bomb/box setup in some way. Potentially in this scenario (or in 2) you may be in a universe which is not finite, and so even if you observe that you are completely alone, it may be possible to create other agents or to do whatever else interests you, and therefore to have whichever life you choose for an indefinitely long time.

4. Take Left, and it does in fact result in the bomb exploding and you painfully dying, if the results of your observations and reasoning process output that this is the best option for some reason.

5. Take Left, and nothing happens because the bomb triggering process failed, and you save yourself $100.

For the purposes of this argument, we don't need to (and realistically, can't) know precisely which situations would cause outcome 4 to occur, because it seems extremely unlikely that any rational agent would deliberately produce this outcome except if it had a preference for dying painfully.

Trying to imagine the possible worlds in which this could occur is a fruitless endeavour because the entire setup is already impossible. However you will notice that we have already decided in advance on a small number of potential actions that we might take if we did find ourselves in this impossible scenario.

That in itself substantially reduces the resources required to make a decision if the situation were somehow to happen despite being impossible: we have reduced the problem to a choice among 5 actions rather than infinitely many, and we have also made the choice easier for our counterfactual self in this impossible world.

3. Case three (exactly equal likelihood) leads to the same action as case 1 (and hence also case 2), because the bomb setup gives us only negative-utility options while the simulator setup has both positive- and negative-utility options, so we trust the simulator.
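For the "split into atoms and take the power set" step mentioned in case 1, a minimal sketch (the atomic propositions listed are hypothetical paraphrases, not a canonical decomposition):

```python
from itertools import chain, combinations

def powerset(atoms):
    s = list(atoms)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Hypothetical atoms paraphrased from the problem statement:
atoms = ["two open boxes", "bomb in Left", "Right costs $100", "note says bomb in Left"]
for subset in powerset(atoms):
    pass   # consider the implications of each combination of atoms holding
print(sum(1 for _ in powerset(atoms)))   # 2^4 = 16 combinations to consider
```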

Now, the only situation in which Right is ever taken is if the simulator is wrong and you are under coercion.

Since in the problem statement it says:

You are the only person left in the universe. You have a happy life, but you know that you will never meet another agent again, nor face another situation where any of your actions will have been predicted by another agent. What box should you choose?

then by the problem definition, this cannot be the case unless you do not have adequate time to make a decision. So if you can come to a decision before the universe you are in ends, then Right will never be chosen, because the only possible type of coercion (since there are no other agents) is inadequate time/resources. If you can't, then you might take Right.

However, you can just use FDT to make your decision near-instantly, since this has already been studied, and it outputs Left. Since this is the conclusion you have come to by your own chain of reasoning, you can pick Left.

But it may still be the case, independently of both of these things, (since we are in an impossible world), that the bomb will go off.

So for an actual agent, the actual action you would take can only be described as "make the best decision you can at the time, using everything you know."

Since we have reasoned about the possible set of actions ahead of time, we can choose from the (vaguely specified) set of 5 actions above, or we can do something else, given that we know about the reasoning we have already performed and, if actually placed in the situation, would have more evidence to inform our actions.

However, the set of 5 actions covers all the possibilities. We also know that we would only take Right if we can't come to a decision in time, or if we are under coercion. In all other cases we prefer to take Left, or take neither, or do something else entirely.

Since there are exactly two possible worlds in which we take Right, and an indefinitely large number in which we take Left, the maximum-utility option is output correctly by FDT, which is to take Left.
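A toy expected-utility sketch of that conclusion (mine, not from the article or the original post; the dollar value placed on not burning to death is an arbitrary assumption):

```python
eps = 1e-24              # predictor failure rate: 1 in a trillion trillion
value_of_life = 1e6      # assumed dollar-equivalent of avoiding a slow, painful death
cost_of_right = 100

# Under FDT-style subjunctive dependence, the bomb's placement tracks your policy:
# a "take Left" policy meets a bomb in Left only when the predictor errs (probability eps),
# while a "take Right" policy always pays $100 and survives.
eu_left = -eps * value_of_life    # about -1e-18
eu_right = -cost_of_right         # -100

print("Left" if eu_left > eu_right else "Right")   # Left, unless life is valued above 1e26 dollars
```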

Comment by agai on eigen's Shortform · 2019-12-24T19:03:14.056Z · LW · GW

Comment removed for posterity.

Comment by agai on A Critique of Functional Decision Theory · 2019-12-24T08:51:47.263Z · LW · GW

Comment removed for posterity.

Comment by agai on A Critique of Functional Decision Theory · 2019-12-24T08:03:46.745Z · LW · GW

So, this is an interesting one. I could make the argument that UDT would actually suggest taking the opposite of the one you like currently.

It depends on how far you think the future (and yourself) will extend. You can reason that if you were to like both hummus and avocado, you should take both. The problem as stated doesn't appear to exclude this.

If your prior includes the observed fact about humans that we tend to get used to what we do repeatedly, then you can predict that you will come to like whichever of avocado or hummus you don't currently like, if you repeatedly choose to consume it.

Then, since there's no particular reason why doing this would make you later prefer the other option less (and indeed, a certain amount of delayed gratification can increase later enjoyment), in order to achieve the most total utility you would take both together if you predicted you would like that more at the immediate decision point; or, if you are indifferent between taking both and taking only the unappealing one, you should take only the unappealing one, because doing that more often will allow you to obtain more utility later.

I think this would be the recommendation of UDT if the prior were to say that you would face similar choices to this one "sufficiently often".

This is why, for example, I almost always eat salads/greens, or whichever part of a meal is less appealing, before the later, more enjoyable part - you get more utility both immediately (over the course of the meal) and long term by no longer dispreferring the unappealing food option so strongly.

Comment by agai on A Critique of Functional Decision Theory · 2019-12-24T07:19:17.152Z · LW · GW

Look, I never said it wasn't a serious attempt to engage with the subject, and I respect that, and I respect the author(s).

Let me put it this way. If someone writes something unintentionally funny, are you laughing at them or at what they wrote? To me there is a clear separation between author and written text.

If you've heard of the TV show "America's Funniest Home Videos", that is an example of something I don't laugh at, because it seems to be all people getting hurt.

If someone was truly hurt by my comment then I apologise. I did not mean it that way.

I still stand by the substance of my criticism, though. The fact that I was amused has nothing to do with whether what I wrote was genuine - it was. It's sort of... who is at fault when someone misinterprets your tone online? I don't think either party can really have a strong claim, because it is extremely hard to get tone right in writing, and as a reader you don't know the person who wrote it either, so you could have totally different expectations of what the author of a piece of writing is thinking. Not to mention that online you're very likely to be from different countries and cultural backgrounds, which have different norms.

As a further apology, I am very very unlikely to write any more detail on this unless the original article author messages me to ask me for it.

Comment by agai on A Critique of Functional Decision Theory · 2019-12-23T23:28:02.612Z · LW · GW

So, some very brief comments then I'm off to do some serious writing.

This article was hilarious. The criticisms of FDT are reliably way off base, and it's clear that whoever came up with them didn't bother to look up the historical context for these decision theories.

A quick example: in the bomb hypothetical, the recommendation of FDT is obviously correct. Why? The fact that the predictor left a "helpful" note tells you absolutely nothing. I think people are assuming that "helpful" = honest, or something; in any case, the correct thing to do with the note is to ignore it completely, because you have no idea, and can't ask (since the predictor is long gone), what it meant by its "helpful" note. This is the recommendation of UDT as I understand it; it's possible that FDT is general enough to contain UDT as a subset, but even if not, as real agents we can just pick the appropriate decision theory (or lack of one) to apply in each situation.

With this view, it's extremely clear that you should pick Left, which gives you survival at odds of a trillion trillion to one in your favour and doesn't cost anything. A decision theory that is to be used by actual agents shouldn't be vulnerable to mugging of any kind, counterfactual or Pascal's-Wager-ish.

Most of the criticism is either completely wrong or misses the point in ways similar to this. I would explain point by point but I think I've taken what amusement I shall out of this.

Comment by agai on The Failures of Eld Science · 2019-12-21T22:07:49.476Z · LW · GW

Comment removed for posterity.