Dead men tell tales: falling out of love with SIA
post by Stuart_Armstrong · 2011-02-18T14:10:04.187Z · LW · GW · Legacy · 24 comments
SIA is the Self-Indication Assumption, an anthropic theory about how we should reason about the universe given that we exist. I used to love it; the argument I found most convincing for SIA was the one I presented in this post. Recently, I've been falling out of love with SIA and moving towards a UDT version of anthropics: use objective probabilities, and count the total impact of your decision being of a particular type across all copies of you, and across any other agents (even enemies) who share your decision process. So it's time to revisit my old post and find the hole.
The argument rested on the plausible-sounding assumption that creating extra copies and then killing them is no different from never having created them in the first place. More precisely, it rested on the assumption that being told "You are not one of the agents I am about to talk about. Extra copies were created to be destroyed" is exactly the same as hearing "Extra copies were created to be destroyed. And you're not one of them."
But I realised that from the UDT/TDT perspective there is a great difference between the two situations, if I have time to update my decisions partway through the sentence. Consider the following three scenarios:
- Scenario 1 (SIA):
Two agents are created, then one is destroyed with 50% probability. Each living agent is entirely selfish, with utility linear in money, and the dead agent gets nothing. Every survivor will be presented with the same bet. Then you should take the SIA 2:1 odds that you are in the world with two agents. This is the scenario I was assuming.
- Scenario 2 (SSA):
Two agents are created, then one is destroyed with 50% probability. Each living agent is entirely selfish, with utility linear in money, and the dead agent is altruistic towards his survivor. This is similar to my initial intuition in this post. Note that all agents have the same utility function: "as long as I live, I care about myself, but after I die, I'll care about the other guy", so you can't distinguish them by their utilities. As before, every survivor will be presented with the same bet.
Here, once you have been told the scenario, but before knowing whether anyone has been killed, you should pre-commit to taking 1:1 odds that you are in the world with two agents. And in UDT/TDT, precommitting is the same as making the decision.
- Scenario 3 (reverse SIA):
Same as before, except the dead agent is triply altruistic toward his survivor: he values each dollar the survivor gets at three times a dollar of his own (you can replace this altruism with various amounts of cash being donated to various charities valued by the various agents). Then you should pre-commit to taking 1:2 odds that you are in the world with two agents.
This illustrates the importance of the utility of the dead agent in determining the decision of the living ones, if there is even a short moment when you believe you might be the agent who is due to die. By scaling the altruism or hatred of the dead man, you can get any odds you like between the two worlds.
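For concreteness, here is a minimal sketch of one way to get these three sets of odds from a single ex-ante expected-utility calculation (the notation `a`, `b`, `w` is mine, not from the original post): a survivor's bet wins `a` in the two-agent world and loses `b` in the one-agent world, and `w` is the weight the would-be dead agent puts on the survivor's money.

```python
# Ex-ante (pre-commitment / UDT-style) value of the bet, evaluated before you
# know whether you are the agent who dies.  Notation is mine: the bet wins `a`
# in the two-agent world, loses `b` in the one-agent world, and `w` is the
# weight the dead agent puts on the survivor's money (0, 1 and 3 below).
from fractions import Fraction

def ex_ante_utility(a, b, w):
    # Two-agent world (prob 1/2): I survive and win a.
    # One-agent world (prob 1/2): with prob 1/2 I survive and lose b,
    # with prob 1/2 I am dead and value the survivor's loss of b at weight w.
    return 0.5 * a + 0.5 * (0.5 * (-b) + 0.5 * (-w * b))

def indifference_odds(w):
    # Setting the ex-ante utility to zero gives b/a = 2/(1 + w): the largest
    # odds (risk b to win a) you should pre-commit to taking.
    return Fraction(2, 1 + w)

for name, w in [("Scenario 1 (SIA)", 0),
                ("Scenario 2 (SSA)", 1),
                ("Scenario 3 (reverse SIA)", 3)]:
    odds = indifference_odds(w)
    assert abs(ex_ante_utility(odds.denominator, odds.numerator, w)) < 1e-12
    implied_p = odds / (1 + odds)   # betting odds re-expressed as a "probability"
    print(f"{name}: take up to {odds.numerator}:{odds.denominator} odds on the "
          f"two-agent world (implied probability {float(implied_p):.3f})")
```

Under these assumptions the formula b/a = 2/(1 + w) sweeps through every possible odds ratio as w is scaled up (more altruism) or pushed negative (hatred), which is the "any odds you like" point above.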
So I was wrong; dead men tell tales, and even thinking you might be one of them will change your behaviour.
24 comments
Comments sorted by top scores.
comment by Manfred · 2011-02-18T17:42:46.754Z · LW(p) · GW(p)
Taking a bet is not the same as determining a probability if your utility function changes in some cases (e.g. if you are altruistic in some cases but not others). Precommitting to odds that are not the same as the probability is consistent with SIA in these cases.
↑ comment by Stuart_Armstrong · 2011-02-18T18:51:20.360Z · LW(p) · GW(p)
This post doesn't destroy SIA. It just destroys the argument that I found strongest in its favour.
↑ comment by entirelyuseless · 2016-01-11T14:27:56.013Z · LW(p) · GW(p)
How exactly does it destroy that argument? It does look like this post is arguing about the question of what odds you should bet at, not about the question of what you think is likely the case. These are not exactly the same thing. I would be willing to bet any amount, at any odds, that the world will still exist 10 years from now, or 1000 years from now, but that doesn't mean I am confident that it will. It simply means I know I can't lose that bet, since if the world doesn't exist, neither will I nor the person I am betting with.
(I agree that the other post was mistaken, and I think it went from a 99% probability in A, B, and C, to a 50% probability in the remaining scenarios.)
↑ comment by Stuart_Armstrong · 2016-01-12T10:48:27.438Z · LW(p) · GW(p)
I think my old post here has the core of the argument: http://lesswrong.com/lw/18r/avoiding_doomsday_a_proof_of_the_selfindication/14vy
But I no longer consider anthropic probabilities to have any meaning at all; see for instance https://www.youtube.com/watch?v=aiGOGkBiWEo
↑ comment by entirelyuseless · 2016-01-12T13:57:01.035Z · LW(p) · GW(p)
Ok. I watched the video. I still disagree with that, and I don't think it's arbitrary to prefer SSA to SIA. I think that follows necessarily from the consideration that you could not have noticed yourself not existing.
In any case, whatever you say about probability, being surprised is something that happens in real life. And if someone did the Sleeping Beauty experiment on me in real life, but set up so that the difference was between 1/100,000 and 1/2, and then asked me whether I thought the coin was heads or tails, I would say I didn't know. And then if they told me it was heads, I would not be surprised. That shows that I agree with the halfer reasoning and disagree with the thirder reasoning.
Whether or not it makes sense to put numbers on it, either you're going to be surprised at the result or not. And I would apply that to basically every SSA-style argument, including the Doomsday argument; I would be very surprised if, 1,000,000 years from now, humanity has spread all over the universe.
↑ comment by ChristianKl · 2016-01-13T12:31:05.701Z · LW(p) · GW(p)
> In any case, whatever you say about probability, being surprised is something that happens in real life.
As someone who has actually experienced, in real life, what it feels like to wake from an artificial coma with multiple days missing from memory, I think your naive intuition about what would surprise you has no basis.
Being surprised happens at the System 1 level, and System 1 has no notion of having been in an artificial coma.
↑ comment by entirelyuseless · 2016-01-13T12:32:28.210Z · LW(p) · GW(p)
If System 1 has no notion of having been in an artificial coma, then there is no chance I would be surprised by either heads or tails, which supports my point.
↑ comment by ChristianKl · 2016-01-13T13:40:04.600Z · LW(p) · GW(p)
No, System 1's model of the world is that the time that passed was just a normal night's sleep between two days. Anything that deviates from that is highly surprising.
↑ comment by Stuart_Armstrong · 2016-01-13T11:59:53.598Z · LW(p) · GW(p)
Yes, but if Sleeping Beauty problems were all over the place and we were commonly exposed to them, what would our sense of surprise evolve into?
comment by Emile · 2011-02-18T16:23:26.794Z · LW(p) · GW(p)
(A reminder that SIA stands for Self-Indication Assumption would be nice, especially since newcomers are likely to confuse it with SIAI)
↑ comment by Stuart_Armstrong · 2011-02-18T16:34:01.161Z · LW(p) · GW(p)
Added reminder, thanks.
comment by Johnicholas · 2011-02-18T18:46:31.317Z · LW(p) · GW(p)
The presentation of this article could be improved. For one, "triply altruistic" is novel enough that it could do with some concrete expansion. Also, the article is currently presented as a delta - I would prefer a "from first principles" (delta-already-applied) format.
Here's my (admittedly idiosyncratic) take on a "from first principles" concrete introduction:
Suppose that some creatures evolve in a world where they are likely to be plucked out by an experimenter, possibly cloned, with some clones possibly killed; the survivors are then offered a bet of some sort and deposited back.
For example, in scenario 1 (or A in the previous post), the experimenter first clones the agent, then flips a coin, then if the coin came up heads, kills an agent, then elicits a "probability" of how the coin flip landed from the surviving agents using a bet (or a scoring rule?), then lets the surviving agents go free.
The advantage of this concreteness is that if we can simulate it, then we can see which strategies are evolutionarily stable. Note that though you don't have to specify the utilities or altruism parameters in this scenario, you do have to specify how money relates to what the agents "want" - survival and reproduction. Possibly rewarding the agents directly in copies is simplest.
I admit I have not done the simulation, but my intuition is that the two procedures "create extra copies and then kill them" and "never create them at all" produce identical evolutionary pressures, and so have identical stable strategies. So I'm dubious about your conclusion that there is a substantive difference between them.
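For what it's worth, a minimal sketch of how such a simulation might be set up (my own harness, not from the comment or the post; in particular, the money-to-fitness mapping is a placeholder parameter, and choosing it is exactly the contentious modelling step noted above):

```python
# A toy Monte Carlo harness for Scenario 1 with a lineage scored in "fitness".
# Assumptions (mine, not from the comment): a strategy is just the largest
# stake an agent will risk to win $1 on "I am in the two-agent world", and a
# user-supplied function converts net winnings into extra fitness.
import random

def run_trial(max_stake, stake, money_to_fitness):
    """One run of Scenario 1; returns the lineage's total fitness."""
    survivors = 1 if random.random() < 0.5 else 2   # coin flip: one copy may be killed
    two_agent_world = (survivors == 2)
    fitness = float(survivors)                      # each surviving copy counts as 1
    if stake <= max_stake:                          # every survivor takes the same bet
        for _ in range(survivors):
            fitness += money_to_fitness(1.0 if two_agent_world else -stake)
    return fitness

def expected_fitness(max_stake, stake, money_to_fitness, trials=100_000):
    return sum(run_trial(max_stake, stake, money_to_fitness)
               for _ in range(trials)) / trials

if __name__ == "__main__":
    linear = lambda dollars: dollars                # placeholder: $1 = 1 unit of fitness
    for stake in (0.5, 1.0, 1.5, 2.0, 2.5):
        accept = expected_fitness(10.0, stake, linear)
        decline = expected_fitness(0.0, stake, linear)
        print(f"stake {stake}: accept {accept:.3f} vs decline {decline:.3f}")
```

Whether "create extra copies and then kill them" and "never create them at all" come out the same would depend entirely on how that money-to-fitness conversion and the copying rules are specified, so this harness by itself settles nothing.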
↑ comment by Stuart_Armstrong · 2011-02-18T19:05:47.590Z · LW(p) · GW(p)
Don't know what a delta is, sorry :-)
Looking for an evolutionary stable strategy might be an interesting idea.
But the point is not to wonder what would be ideal if your utility were evolutionarily stable, but what to do with your current utility, in these specific situations.
↑ comment by Johnicholas · 2011-02-18T20:49:08.995Z · LW(p) · GW(p)
Sorry, by "delta" I meant change, difference, or adjustment.
The reason to investigate evolutionarily stable strategies is to look at the space of workable, self-consistent, winningish strategies. I know my utility function is pretty irrational - even insane. For example, I (try to) change my explicit values when I hear sufficiently strong arguments against my current explicit values. Explaining that is possible for a utilitarian, but it takes some gymnastics, and the upshot of the gymnastics is that utility functions become horrendously complicated and therefore mostly useless.
My bet is that there isn't actually much room for choice in the space of workable, self-consistent, winningish strategies. That will force most of the consequentialists, whether they ultimately care about particular genes or memes, paperclips or brass copper kettles, to act identically with respect to these puzzles, in order to survive and reproduce to steer the world toward their various goals.
↑ comment by Stuart_Armstrong · 2011-02-19T09:29:34.244Z · LW(p) · GW(p)
I'm unsure. For a lone agent in the world, who can get copied and uncopied, I think that following my approach here is the correct one. For multiple competing agents, this becomes a trade/competition issue, and I don't have a good grasp of that.
comment by rwallace · 2011-02-18T17:58:48.588Z · LW(p) · GW(p)
Long ago, in a book on evolutionary biology (I forget which one it was) there was the excellent quote "fitness is what appears to be maximized when what is really being maximized is gene survival" together with an analysis of the peculiar genetic system of the Hymenoptera which predisposes them to evolve eusociality.
The author first presented a classical analysis by a previous author, which used the concept of inclusive fitness and, via a series of logical steps that obviously took a great deal of intelligence to work out (and nontrivial mental effort even to follow), managed to stretch fitness to cover the case. Oh, but there was an error in the last step that nobody had spotted, so the answer came out wrong.
The newer author then presented his own analysis, discarding the concept of fitness and just talking directly about gene survival. Not only did it give the right answer, but the logic was so simple and transparent you could easily verify the answer was right.
I think there's a parallel here. You're obviously putting a lot of intelligence and hard work into trying to analyze these cases in terms of things like selfishness and altruism... but the difficulty evaporates if you discard those concepts and just talk directly about utility.
↑ comment by orthonormal · 2011-02-19T20:23:40.975Z · LW(p) · GW(p)
I want to upvote this for the excellent anecdote, but the comment seems to go off the rails at the end. "Selfishness w.r.t. copies" and "Altruism w.r.t. copies", here, are two different utility functions that an agent could have. What do you mean by "talking directly about utility"?
↑ comment by Stuart_Armstrong · 2011-02-18T19:01:53.444Z · LW(p) · GW(p)
I think that the selfishness and altruism concepts are well captured by utility here. All that is needed for, say, the second model, is that the dead guy derives utility from the survivor betting that they're in a single-person universe.
Altruism was the easiest way to do this, but there are other ways - maybe the money will be given to a charity to prevent the death of hypothetical agents in thought experiments or something (but only if there is a death). Or you could cast it in evolutionary terms (the pair share their genes, and there won't be enough food for two, and the agents are direct gene-maximisers).
The point is that I'm using a clear utility, and using selfishness or altruism as a shorthand for describing it.
comment by PhilGoetz · 2011-02-20T18:00:41.951Z · LW(p) · GW(p)
Sorry, downvoted because I still can't figure out what this post is about, or what its conclusion is. I think I'm missing some critical context about what issues you're trying to get at, and what kinds of decisions these issues are relevant for.
↑ comment by Stuart_Armstrong · 2011-02-20T18:27:36.646Z · LW(p) · GW(p)
Did you look at the previous post it was referring to? It's basically pointing out that there was a hole in the argumentation there. Since that was a major argument in favour of SIA, the fact that the argument doesn't work is something worth pointing out.
comment by casebash · 2016-01-11T10:32:01.131Z · LW(p) · GW(p)
When utility is used in game theory, it doesn't have to refer to value you gain personally; it can apply to anything you value. Call this inclusive utility, and call the value you personally gain from the situation personal utility.
Changing the altruism levels changes the inclusive utilities that each decision offers (even though the personal utility remains the same), so even with the same probability you should take different odds.
Inclusive utility can be replaced with personal utility only for selfish agents.
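A small numeric illustration of this distinction, using the three scenarios from the post and my own example numbers (a candidate bet that wins $1 in the two-agent world and loses $1 in the one-agent world, evaluated ex ante): the personal expected value is the same in every scenario, while the inclusive expected value shifts with the dead agent's altruism weight, which is why the acceptable odds shift too.

```python
# Personal vs inclusive expected value of one candidate bet (win $1 in the
# two-agent world, lose $1 in the one-agent world), evaluated ex ante, before
# you know whether you are the agent who dies.  `w` is the dead agent's
# altruism weight towards the survivor: 0, 1 and 3 in the three scenarios.

def personal_ev():
    # Only money you yourself collect counts; the dead collect nothing.
    return 0.5 * 1 + 0.25 * (-1) + 0.25 * 0

def inclusive_ev(w):
    # Everything you value counts, including the survivor's loss, weighted by
    # w, in the branch where you are the one who died.
    return 0.5 * 1 + 0.25 * (-1) + 0.25 * (-w * 1)

for w in (0, 1, 3):
    print(f"w = {w}: personal EV = {personal_ev():+.2f}, inclusive EV = {inclusive_ev(w):+.2f}")
```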