A resolution to the Doomsday Argument.
post by Fivehundred · 2015-05-24T17:58:11.857Z · LW · GW · Legacy · 86 comments
A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare. To solve the problem, they direct the AI to create billions of simulated humanities, in the hope that this will serve as a Schelling point for the other simulated humanities (who will do the same) and make their own universe almost certainly a simulation.
Plausible?
86 comments
Comments sorted by top scores.
comment by DanArmak · 2015-05-24T18:37:56.773Z · LW(p) · GW(p)
If you believe the DA, and you also believe you're being simulated (with some probability), then you should believe yourself to be among the last N% of humans in the simulation. So you don't escape the DA entirely.
However, it may be that if you believe yourself to be likely in a simulation, you shouldn't believe the DA at all. The DA assumes you know how many humans lived before you, and that you're not special among them. Both may be false in a simulation of human history: it may not have simulated all the humans and pre-humans who ever lived, and/or you may be in a special subset of humans being simulated with extra fidelity. Not to mention that only periods of your life may be simulated, possibly out of order or without causal structure.
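To make the "last N%" point concrete, here is a minimal sketch of the self-sampling arithmetic behind the DA (the 60-billion birth-rank figure and the log-uniform prior are illustrative assumptions, not anything claimed in this thread):

```python
# Doomsday Argument sketch: treat your birth rank r as a uniform draw from
# {1, ..., N}, where N is the total number of humans who will ever exist.
# The log-uniform prior over N and the grid bounds are illustrative assumptions.
import numpy as np

r = 6e10                                  # ~60 billion humans born so far (rough figure)
N = np.logspace(np.log10(r), 14, 20000)   # candidate totals, from r up to 10^14

prior = 1.0 / N                           # assumed log-uniform prior density over N
likelihood = 1.0 / N                      # P(rank = r | total = N) = 1/N for N >= r
posterior = prior * likelihood * np.gradient(N)   # density times grid spacing
posterior /= posterior.sum()

# Probability that the total exceeds 20x the humans born so far,
# i.e. that we are among the first 5% of all humans ever:
print("P(N > 20r) ~", round(float(posterior[N > 20 * r].sum()), 3))   # about 0.05
```

The point of the comment above is that inside a simulation both inputs -- your rank r and the reference class itself -- become unreliable, so this calculation no longer goes through as stated.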
Replies from: Fivehundred↑ comment by Fivehundred · 2015-05-24T18:51:39.003Z · LW(p) · GW(p)
I'm not talking about the DA only, I'm talking about the assumption that our experiences should be more-or-less ordinary. And this is designed to escape the DA; it's the only reason to think you are simulated in the first place.
Really, I got the whole idea from HPMOR: fulfilling a scary prophecy on your own terms.
Replies from: DanArmak↑ comment by DanArmak · 2015-05-24T19:04:44.316Z · LW(p) · GW(p)
the assumption that our experiences should be more-or-less ordinary
How do you know what to call "ordinary"? If you think you're being simulated, then you need to predict what kinds and amounts of simulations exist besides the one you're in, as well as how extensive and precise your own simulation is in past time and space, not just in its future.
And this is designed to escape the DA; it's the only reason to think you are simulated in the first place.
There are lots of reasons other than the DA to think we're being simulated: e.g. Bostrom's Simulation Argument (posthumans are likely to run ancestor simulations). The DA is a very weak argument for simulation: it is equally consistent with there being an extinction event in our future.
Replies from: Fivehundred↑ comment by Fivehundred · 2015-05-24T19:11:57.515Z · LW(p) · GW(p)
If you think you're being simulated, then you need to predict what kinds and amounts of simulations exist besides the one you're in, as well as how extensive and precise your own simulation is in past time and space, not just in its future.
I don't see why simulated observers would almost ever outnumber physical observers. It would need an incredibly inefficient allocation of resources.
There are lots of reasons other than the DA to think we're being simulated: e.g. Bostrom's Simulation Argument (posthumans are likely to run ancestor simulations).
Avoiding the DA gives them a much clearer motive. It's the only reason I can think of that I would want to do it. Surely it's at least worth considering?
Replies from: DanArmak↑ comment by DanArmak · 2015-05-24T19:30:23.462Z · LW(p) · GW(p)
I don't see why simulated observers would almost ever outnumber physical observers. It would need an incredibly inefficient allocation of resources.
The question isn't how many simulated observers exist in total (although that's also unknown), but how many of them are like you in some relevant sense, i.e. what to consider "typical".
Avoiding the DA gives them a much clearer motive. It's the only reason I can think of that I would want to do it. Surely it's at least worth considering?
Many people do think they would have other reasons to run ancestor simulations.
But in any case, I don't think your original idea works. Running a simulation of your ancestors causes your simulated ancestors to be wrong about the DA, but it doesn't cause yourself to be wrong about it.
Trying to steelman, what you'd need is to run simulations of people successfully launching a friendly self-modifying AI. Suppose that out of every N civs that run an AI, on average one succeeds and all the others go extinct. If each of them precommits to simulating N civs, and the simulations are arranged so that in a simulation running an AI always works, then in the end there are still N civs that successfully ran an AI.
This implies a certain measure on future outcomes: it's counting "distinct" existences while ignoring the actual measure of future probability. This is structurally similar to quantum suicide or quantum roulette.
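A toy observer count makes the structure of that steelman explicit (N and the "one success per N attempts" rate are placeholder assumptions from the paragraph above):

```python
# Observer-counting sketch for the precommitment steelman: N civilisations
# attempt an AI, on average one succeeds physically, and each success runs
# N simulations in which the launch always works.
N = 1_000_000                                   # placeholder number of attempting civs

physical_successes = 1                          # per N attempts, on average
simulated_successes = physical_successes * N    # each physical success simulates N successes
total_successes = physical_successes + simulated_successes

# A randomly chosen civilisation that observes itself succeeding is almost
# certainly simulated:
p_simulated = simulated_successes / total_successes
print("P(simulated | observed success) =", p_simulated)   # = N / (N + 1), ~1

# Note: the fraction of attempting civilisations that physically survive is
# still 1/N; the scheme only changes how the successful observers are counted,
# which is the "measure on future outcomes" point above.
```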
Replies from: Fivehundred↑ comment by Fivehundred · 2015-05-24T19:55:43.328Z · LW(p) · GW(p)
The question isn't how many simulated observers exist in total (although that's also unknown), but how many of them are like you in some relevant sense, i.e. what to consider "typical".
I also find it hard to believe that humans of any sort would hold special interest to a superintelligence. Do I really have the burden of proof there?
But in any case, I don't think your original idea works. Running a simulation of your ancestors causes your simulated ancestors to be wrong about the DA, but it doesn't cause yourself to be wrong about it.
The whole point is that the simulators want to find themselves in a simulation, and would only discover the truth after disaster has been avoided. It's a way of ensuring that superintelligence does not fulfill the DA.
Replies from: DanArmak↑ comment by DanArmak · 2015-05-25T12:53:27.749Z · LW(p) · GW(p)
I also find it hard to believe that humans of any sort would hold special interest to a superintelligence. Do I really have the burden of proof there?
It's plausible, to me, that a superintelligence built by humans and intended by them to care about humans would in fact care about humans, even if it didn't have the precise goals they intended it to have.
Replies from: Fivehundred↑ comment by Fivehundred · 2015-05-25T13:14:47.931Z · LW(p) · GW(p)
This is overly complex. Now we assume that AI goes wrong? These people want to be in a simulation; they need a Schelling point with other humanities. Why wouldn't they just give clear instructions to the AI to simulate other Earths?
comment by ChristianKl · 2015-05-24T19:00:33.264Z · LW(p) · GW(p)
Schilling point
Do you mean Schelling point? If so, I don't see what you mean.
Replies from: Fivehundred↑ comment by Fivehundred · 2015-05-24T19:01:47.419Z · LW(p) · GW(p)
Whoopsie daisy.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2015-07-17T13:40:52.441Z · LW(p) · GW(p)
You didn't get on to what you mean.
Replies from: Fivehundred↑ comment by Fivehundred · 2015-07-17T19:36:54.829Z · LW(p) · GW(p)
Whoopsie daisy generally indicates a mistake. Also consider that I edited it to 'Schelling'. It can't be that hard...
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2015-07-17T20:15:46.000Z · LW(p) · GW(p)
What did you mean by" they direct the AI to create billions of simulated humanities in the hope that this will serve as a Schelling point to them"
Replies from: Fivehundred↑ comment by Fivehundred · 2015-07-18T07:00:21.190Z · LW(p) · GW(p)
As a Schelling point to the other simulated humanities, who will do the same. The goal of the originals is to find themselves in a simulation.
comment by DanielLC · 2015-05-24T20:29:59.046Z · LW(p) · GW(p)
If I were doing it, I'd save computing power by only simulating the people who would program the AI. I don't think I'm going to do that, so it doesn't apply to me. Eliezer doesn't accept the Doomsday Argument, or at least uses a decision theory that makes it irrelevant, so it wouldn't apply to him.
Replies from: Fivehundred↑ comment by Fivehundred · 2015-05-24T20:42:07.748Z · LW(p) · GW(p)
Well, for a start, I don't think that the builders would want to be the only people in their world. And recall that this only serves to produce new humans, because simply making all existing humans immortal solves the DA as well. I think it would be more efficient to fully populate the simulation.
What is this decision theory? I haven't read the Sequences yet, sorry.
Replies from: DanielLC, ChristianKl↑ comment by DanielLC · 2015-05-24T23:05:07.124Z · LW(p) · GW(p)
because simply making all existing humans immortal solves the DA as well.
I disagree. An appreciable number of people might be the ones designing the AI, but they won't spend an appreciable portion of their lives doing it.
Replies from: Fivehundred↑ comment by Fivehundred · 2015-05-25T09:20:53.574Z · LW(p) · GW(p)
I'm not sure I understand. If existing humans became immortal, but no more humans were created, then it removes the need for a future extinction to explain the number of humans that will ever exist.
Replies from: DanielLC↑ comment by DanielLC · 2015-05-25T22:53:49.022Z · LW(p) · GW(p)
It's not about the number of humans. It's about the number of observer-moments. Imagine if you were the only human ever. If you're only twenty years old, it's unlikely that you'd live to be a billion. You're not going to just happen to be in one of the first twenty years.
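In likelihood terms (a sketch, treating your current age as a uniform draw over your total span of observer-moments):

$$P(\text{age} \le 20 \mid \text{lifespan} = 80 \text{ yr}) = \tfrac{20}{80} = 0.25, \qquad P(\text{age} \le 20 \mid \text{lifespan} = 10^9 \text{ yr}) = \tfrac{20}{10^9} = 2 \times 10^{-8},$$

a likelihood ratio of roughly $10^7$ in favour of the ordinary lifespan.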
Replies from: ThisSpaceAvailable↑ comment by ThisSpaceAvailable · 2015-05-29T05:14:45.936Z · LW(p) · GW(p)
What does that mean, "You're not going to just happen to be in one of the first twenty years"? There are people who have survived more than one billion seconds past their twenty-first birthdays. And each one, at one point, was within twenty seconds of their twenty-first birthday. What would you say to someone whose twenty-first birthday was less than twenty seconds ago who says "I'm not going to just happen to be in the first twenty seconds"?
Replies from: DanielLC↑ comment by DanielLC · 2015-05-29T06:25:52.196Z · LW(p) · GW(p)
And each one, at one point, was within twenty seconds of their twenty-first birthday.
Yes, but at many more points they were not.
What would you say to someone whose twenty first birthday was less than twenty seconds ago who says "I'm not going to just happen to be in the first twenty seconds"?
I'd tell them that they're even less likely to hallucinate evidence that suggests they are.
Every day, at some point it's noon, to the second. If you looked at your watch and it had a second hand, and it was noon to the second, you'd still find that a pretty big coincidence, wouldn't you?
↑ comment by ChristianKl · 2015-05-24T22:10:28.803Z · LW(p) · GW(p)
What is this decision theory? I haven't read the Sequences yet, sorry.
That's not the kind of question to be answered in a paragraph. But the label for Eliezer's theory is Timeless Decision Theory (TDT).
Replies from: Fivehundred↑ comment by Fivehundred · 2015-05-26T03:41:45.723Z · LW(p) · GW(p)
How exactly does it make it 'irrelevant?' I haven't been able to find a single reference to the DA.
comment by bortels · 2015-06-04T06:30:21.951Z · LW(p) · GW(p)
So - I am still having issues parsing this, and I am persisting because I want to understand the argument, at least. I may or may not agree, but understanding it seems a reasonable goal.
The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare.
The success of the self-modifying AI would make the observations of that AI's builders extremely rare... why? Because the AI's observations count, and it is presumably many orders of magnitude faster?
For a moment, I will assume I have interpreted that correctly. So? How is this risky, and how would creating billions of simulated humanities change that risk?
I think the argument is that - somehow - the overwhelming number of simulated humanities somehow makes it likely that the original builders are actually a simulation of the original builders running under an AI? How would this make any difference? How would this be expected to "percolate up" thru the stack? Presumably somewhere there is the "original" top level group of researchers still, no? How are they not at risk?
How is it that a builder's observations are ok, the AI's are bad, but the simulated humans running in the AI are suddenly good?
I think, after reading what I have, that this is the same fallacy I talked about in the other thread - the idea that if you find yourself in a rare spot, it must mean something special, and that you can work the probability of that rareness backwards to a conclusion. But I am by no means sure, or even mostly confident, that I am interpreting the proposal correctly.
Anyone want to take a crack at enlightening me?
comment by Gunnar_Zarncke · 2015-05-24T21:37:50.596Z · LW(p) · GW(p)
See LW wiki's Doomsday Argument for reference.
The problem I have with this kind of reasoning is that it causes early reasoners to come to wrong conclusions (though 'on average' the reasoning is most probably true).
Replies from: Unknowns
comment by estimator · 2015-05-24T19:12:37.461Z · LW(p) · GW(p)
Nope. I don't think ignoring causality to such extent makes sense. Simulating many instances of humanity won't make other risks magically go away, because it basically has no effect on them.
Yet another example of how one can misuse rationality and start to believe bogus statements.
Replies from: Fivehundred↑ comment by Fivehundred · 2015-05-24T19:17:57.309Z · LW(p) · GW(p)
It's counterintuitive, yes, which does not make it acceptable for you to fart out some mockery and consider the argument closed.
Replies from: ThisSpaceAvailable, estimator↑ comment by ThisSpaceAvailable · 2015-05-29T04:58:36.667Z · LW(p) · GW(p)
There was no "mockery", just criticism and disagreement. It's rather disturbing that you saying that criticism and disagreement is "not acceptable" has been positively received. And estimator didn't say that the argument is closed, only that zie has a solid opinion about it.
Replies from: Fivehundred↑ comment by Fivehundred · 2015-05-29T11:48:15.190Z · LW(p) · GW(p)
He didn't bother with a serious argument, only an appeal to "causality." I don't go around posting my opinions on random threads unless I really can improve the discussion.
Replies from: ThisSpaceAvailable↑ comment by ThisSpaceAvailable · 2015-05-29T18:17:20.687Z · LW(p) · GW(p)
Most people consider causality to be a rather serious argument. If you're going to unilaterally declare certain lines of argument illegitimate, then criticize people for failing to present a "legitimate" argument, and declare that any opinions that disagree with you don't improve the discussion, that's probably going to piss people off.
↑ comment by estimator · 2015-05-24T19:29:50.411Z · LW(p) · GW(p)
Sorry for probably being too sharp.
So, we have a choice: to deny a very counterintuitive statement or to deny causality. Do we have enough evidence to choose the latter? IMO, certainly, no. All we have are some thought experiments, which can be misinterpreted or wrong, and contain a lot of hand-waving.
Reformulating in Bayesian terms: the prior probability of your statement being correct is extremely tiny, and there is almost no evidence to update on. What to do? Reject.
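In odds form (just Bayes' theorem, with the numbers left abstract):

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}$$

If the prior odds are tiny and the likelihood ratio is close to 1 (no real evidence either way), the posterior odds stay tiny.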
Replies from: Fivehundred↑ comment by Fivehundred · 2015-05-24T20:03:07.666Z · LW(p) · GW(p)
So, we have a choice: to deny a very counterintuitive statement or to deny causality.
I'm not 'denying causality', I'm pointing out a way around it.
Replies from: estimator↑ comment by estimator · 2015-05-24T20:15:01.083Z · LW(p) · GW(p)
You say that one can change A by changing B, while there is no causal mechanism by which B can influence A. That's denying causality.
Well, if you don't like the term 'denying causality', feel free to replace it, but the point holds anyway.
In my system of prior probabilities, finding a way around causality is somewhere near finding a way around the law of energy conservation. No way, unless there are tons of evidence.
Replies from: ike, Fivehundred↑ comment by ike · 2015-05-26T14:45:50.163Z · LW(p) · GW(p)
You say that one can change A by changing B, while there is no causal mechanism by which B can influence A. That's denying causality.
Do you accept in theory that, provided MWI is true, one can win a quantum lottery by committing suicide if one does not win? If yes, is that not a similar violation of causality? If no, why not? What's your model of what would happen?
Replies from: Luke_A_Somers, Jiro, estimator↑ comment by Luke_A_Somers · 2015-07-17T21:06:04.798Z · LW(p) · GW(p)
Under MWI, you can win a lottery just by entering it; committing suicide is not necessary. Of course, almost all of you will lose.
All you're doing in quantum lotteries is deciding you really, REALLY don't care about the case where you lose, to the point that you want to not experience those branches at all, to the point that you'd kill yourself if you find yourself stuck in them.
That's the causality involved. You haven't gone out and changed the universe in any way (other than almost certainly killing yourself).
Replies from: ike↑ comment by ike · 2015-07-19T15:31:54.024Z · LW(p) · GW(p)
Under MWI, you can win a lottery just by entering it; committing suicide is not necessary. Of course, almost all of you will lose.
Replace "win a lottery" with "have a subjective probability of ~1 of winning a lottery".
All you're doing in quantum lotteries is deciding you really, REALLY don't care about the case where you lose, to the point that you want to not experience those branches at all, to the point that you'd kill yourself if you find yourself stuck in them.
That's wrong. If I found myself stuck in one, I would prefer to live; that's why I need a very strong precommitment, enforced by something I can't turn off.
You haven't gone out and changed the universe in any way (other than almost certainly killing yourself).
Here's where we differ; I identify every copy of me as "me", and deny any meaningful sense in which I can talk about which one "I" am before anything has diverged (or, in fact, before I have knowledge that excludes some of me). So there's no sense in which I "might" die, some of me certainly will, and some won't, and the end state of affairs is better given some conditions (like selfishness, no pain on death, and lots of other technicalities).
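The arithmetic behind the "subjective probability of ~1" claim, as a sketch (assuming the precommitment really does remove every surviving copy from the losing branches):

$$P(\text{win} \mid \text{some copy of me is still having experiences}) = \frac{P(\text{win and survive})}{P(\text{survive})} = \frac{p}{p} = 1,$$

where $p$ is the objective measure of the winning branch. The unconditional measure of survivors is still only $p$, which is where the "changing the universe" disagreement lies.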
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2015-07-21T20:17:44.736Z · LW(p) · GW(p)
That's wrong. If I found myself stuck in one, I would prefer to live; that's why I need a very strong precommitment, enforced by something I can't turn off.
I mean, you_now would prefer to kill you_then.
As for your last paragraph, the framing was from a global point of view, and probability in this case is the deterministic, Quantum-Measure-based sort.
Replies from: ike↑ comment by ike · 2015-07-21T21:07:05.418Z · LW(p) · GW(p)
I mean, you_now would prefer to kill you_then
Not really. I prefer to kill my future self only because I anticipate living on in other selves; this can't accurately be described as "you really, REALLY don't care about the case where you lose, to the point that you want to not experience those branches at all, to the point that you'd kill yourself if you find yourself stuck in them."
I do care; what I don't care about is my measure between two measures of the same cardinality. If there was a chance of my being stuck in one world and not living on anywhere else, I wouldn't (now) want to kill myself in that future.
As for your last paragraph, the framing was from a global point of view, and probability in this case is the deterministic, Quantum-Measure-based sort.
Ok, we sort of agree, then; but then your claim of "You haven't gone out and changed the universe in any way" seems weak. If I can change my subjective probability of experiencing X, and the state of the universe that's not me doesn't factor into my utility except insofar as it affects me, why should I care whether I'm "changing the universe"?
(To clarify the "I care" claim further; I'm basically being paid in one branch to kill myself in another branch. I value that payment more than I disvalue killing myself in the second branch; that does not necessarily mean that I don't value the second branch at all, just less than the reward in branch 1)
↑ comment by Jiro · 2015-05-26T16:27:30.218Z · LW(p) · GW(p)
Saying that "one can" do something in MWI is misleading because there are many "ones". If you don't commit suicide, there is a "one" who won and other "ones" who lost; if you do commit suicide, there is a "one" who won and the others are dead. Committing suicide doesn't cause you to win because you would have won in one of the branches in either situation.
Replies from: ike↑ comment by estimator · 2015-05-26T18:35:43.904Z · LW(p) · GW(p)
I don't have a model which I believe with certainty, and I think it is a mistake to have one, unless you know sufficiently more than modern physics knows.
Why do you think that your consciousness always moves to the branch where you live, but not at random? Quantum lotteries, quantum immortality and the like require not just MWI, but MWI with a bunch of additional assumptions. And if some QM interpretation flavor violates causality, it is more an argument against such an interpretation, than against causality.
The thing I don't like about such a way of winning quantum lotteries is that they require non-local physical laws. Imagine that a machine shoots you iff some condition is not fulfilled; you say that you will therefore find yourself in the branch where the condition is fulfilled. But the machine won't kill you instantly, so the choice of branch at time t must be done based on what happens at time t + dt.
Replies from: ike↑ comment by ike · 2015-05-26T20:47:28.899Z · LW(p) · GW(p)
I don't have a model which I believe with certainty, and I think it is a mistake to have one, unless you know sufficiently more than modern physics knows.
Note that I said provided MWI is true.
Why do you think that your consciousness always moves to the branch where you live, but not at random?
I think that, given MWI, your consciousness is in any world in which you exist, so that if you kill yourself in the other worlds, you only exist in worlds that you didn't kill yourself. I'm not sure what else could happen; obviously you can't exist in the worlds you're dead in.
The thing I don't like about such a way of winning quantum lotteries is that they require non-local physical laws.
I don't see why; MWI doesn't violate locality.
Imagine that a machine shoots you iff some condition is not fulfilled; you say that you will therefore find yourself in the branch where the condition is fulfilled. But the machine won't kill you instantly, so the choice of branch at time t must be done based on what happens at time t + dt.
You have a point; my scenario is different from that, but I guess it isn't obvious. So let me restate my quantum suicide lottery in more detail. The general case I imagine is as follows: I go to sleep at time t. My computer checks some quantum data, and compares it to n. If it doesn't equal n, it kills me. Say I die at time t+dt in that case. If I don't die, it wakes me.
So at time t, the data is already determined from the computer's perspective, but not from mine. At t+dt, the data is determined from my perspective, as I've awoken. In the time between t and t+dt, it's meaningless to ask what "branch" I'm in; there's no test I can do to determine that in theory, as I only awaken if I'm in the data=n branch. It's meaningful to other people, but not to me. I don't see anywhere that requires non-local laws in this scenario.
Replies from: estimator↑ comment by estimator · 2015-05-26T21:53:26.240Z · LW(p) · GW(p)
I don't have a model which I believe with certainty even provided MWI is true.
I think that, given MWI, your consciousness is in any world in which you exist, so that if you kill yourself in the other worlds, you only exist in worlds that you didn't kill yourself. I'm not sure what else could happen; obviously you can't exist in the worlds you're dead in.
What happens if you die in a non-MWI world? Pretty much the same for the case of MWI with random branch choice. If your random branch happens to be a bad one, you cease to exist, and maybe some of your clones in other branches are still alive.
So at time t, the data is already determined from the computer's perspective, but not from mine. At t+dt, the data is determined from my perspective, as I've awoken. In the time between t and t+dt, it's meaningless to ask what "branch" I'm in; there's no test I can do to determine that in theory, as I only awaken if I'm in the data=n branch. It's meaningful to other people, but not to me. I don't see anywhere that requires non-local laws in this scenario.
Non-locality is required if you claim that you (that copy of you which has your consciousness) will always wake up. Otherwise, it's just a twisted version of Russian roulette and has nothing to do with quantum mechanics.
At time t, the computer either shoots you, or not. At time t + dt, its bullet kills you (or not). So you say that at time t you will go to the branch where the computer doesn't kill you. But such a choice of a branch requires information at time t + dt (whether you are alive or not in that branch). So, physical laws have to perform a look-ahead in time to decide in which Everett branch they should put your consciousness.
Now, imagine that your (quantum) computer generates a random number n from the Poisson distribution. Then, it will kill you after n days. Now n = ... what? Well, thanks to thermodynamics, your (and the computer's) lifespan is limited, so hopefully it will be a finite number -- but, look, if the universe allowed unbounded lifespans, this would be a logical contradiction in the physical laws. Anyway, you see that the look-ahead in time required after the random number generation can be arbitrarily large. That's what I mean by non-locality here.
Replies from: ike↑ comment by ike · 2015-05-27T13:56:18.716Z · LW(p) · GW(p)
Non-locality is required if you claim that you (that copy of you which has your consciousness)
I deny that this is meaningful. If there are two copies of me, both "have my consciousness". I fail to see any sense in which my consciousness must move to only one copy.
So you say that at time t you will go to the branch where the computer doesn't kill you.
I do not claim that. I claim that I exist in both branches, up until one of them no longer contains my consciousness, because I'm dead, and then I only exist in one branch. (In fact, I can consider my sleeping self unconscious, in which case no branches contained my consciousness until I woke up.)
Now, imagine that your (quantum) computer generates a random number n from the Poisson distribution. Then, it will kill you after n days.
Then many copies of my consciousness will exist, some slowly dying each day.
So, physical laws have to perform a look-ahead in time to decide in which Everett branch they should put your consciousness.
I don't have any look-ahead required in my model at all.
Can you dissolve consciousness? What test can be performed to see which branch my consciousness has moved to, that doesn't require me to be awake, nor have knowledge of the random data?
Replies from: estimator↑ comment by estimator · 2015-05-27T14:05:52.942Z · LW(p) · GW(p)
OK, now imagine that the computer shows you the number n on its screen. What will you see? You say that both copies have your consciousness; will you see a superposition of numbers? I don't see how simultaneously being in different branches makes sense from the qualia viewpoint.
Also, let's remove sleeping from the thought experiment. It is an unnecessary complication; by the way, I don't think that consciousness flow is interrupted while sleeping.
And no, I'm currently unable to dissolve the hard problem of consciousness.
Replies from: ike↑ comment by ike · 2015-05-27T14:51:59.153Z · LW(p) · GW(p)
OK, now imagine that the computer shows you the number n on its screen. What will you see? You say that both copies have your consciousness; will you see a superposition of numbers?
No, one copy will see 1, another 2, etc. Something like that will fork my consciousness, which has uncertain effects, which is why I proposed being asleep throughout. Until my brain has any info about what the data is, my consciousness hasn't forked yet. The fact that the info is "out there" in this world is irrelevant; the opposite data is also out there "in this world". As long as I don't know, and both actually exist (although that requirement arguably is also irrelevant to the anthropic math), I exist in both worlds. In other words, both copies will be "continuations" of me. If one suddenly disappears, then only the other "continues" me.
Also, let's remove sleeping from the thought experiment. It is an unnecessary complication; by the way, I don't think that consciousness flow is interrupted while sleeping.
There's a reason I included it. I'm more confident that the outcome will be good with it than without. In particular, if I'm not sleeping when killed, I expect to experience death.
But the fact that you think it's not interrupted when sleeping suggests we're using different definitions. If it's because of dreaming, then specify that the person isn't dreaming. The main point is that I won't feel pain upon dying (or in fact, won't feel anything before dying), so putting me under general anesthesia and ensuring the death would be before I begin to feel anything should be enough, in that case.
And no, I'm currently unable to dissolve the hard problem of consciousness.
I meant just enough that I could understand what you mean when you claim that consciousness must only go to one path.
Replies from: estimator↑ comment by estimator · 2015-05-27T15:27:14.114Z · LW(p) · GW(p)
I think, the problem with consciousness/qualia discussions is that we don't have a good set of terms to describe such phenomena, while being unable to reduce it to other terms.
No, one copy will see 1, another 2, etc. Something like that will fork my consciousness, which has uncertain effects, which is why I proposed being asleep throughout.
I mean, one of the copies would be you (and share your qualia), while others are forks of you. That's because I think that a) your consciousness is preserved by the branching process and b) you don't experience living in different branches, at least after you observed their difference. So, if the quantum lottery works when you're awake, it requires look-ahead in time.
Now about sleeping. My best guess about consciousness is that we are sort-of conscious even while in non-REM sleep phases and under anesthesia; and halting (almost) all electric activity in the brain doesn't preserve consciousness. That's derived from the requirement of continuity of experience, which I find plausible. But that's probably irrelevant to our discussion.
As far as I understand, in your model, one's conscious experience is halted during quantum lottery (i.e. sleep is some kind of a temporary death). And then, his conscious experience continues in one of the survived copies. Is this a correct description of your model?
Replies from: ike↑ comment by ike · 2015-05-27T17:32:16.340Z · LW(p) · GW(p)
I mean, one of the copies would be you (and share your qualia), while others are forks of you.
In my model, all the copies have qualia. Put another way, clearly there's no way for an outside observer to say about any copy that it doesn't have qualia, so the only possible meaning here would be subjective. However, each copy subjectively thinks itself to have qualia. (If you deny either point, please elaborate.) Given those, I don't see any sense that anyone can say that the qualia "only" goes to a single fork, with the others being "other" people.
That's because I think that a) your consciousness is preserved by the branching process and b) you don't experience living in different branches, at least after you observed their difference.
I agree with a, but I think your consciousness is forked by the branching process. I agree with b, assuming you mean "no one person observes multiple branches after a fork". I don't think those two imply that QL requires look-ahead.
What if I rephrased this in one-world terms? I clone you while you're asleep. I put you in two separate rooms. I take two envelopes, one with a yes on it, the other with a no, and put one in each room. Someone else goes into each room, looks at the envelope, then kills you iff it says yes, and wakes you iff it says no.
Do you think you won't awaken in a room with no in the envelope?
My best guess about consciousness is that we are sort-of conscious even while in non-REM sleep phases and under anesthesia; and halting (almost) all electric activity in the brain doesn't preserve consciousness.
As long as we aren't defining consciousness, I can't really disagree that some plausible definition would make this true.
That's derived from the requirement of continuity of experience, which I find plausible.
I don't.
As far as I understand, in your model, one's conscious experience is halted during quantum lottery (i.e. sleep is some kind of a temporary death). And then, his conscious experience continues in one of the survived copies. Is this a correct description of your model?
Yes, but I also think conscious experience is halted during regular sleep. Also, should multiple copies survive, his conscious experience will continue in multiple copies. His subjective probability of finding himself as any particular copy depends on the relative weightings (i.e. self-locating uncertainty).
There is no "truth" as to which copy they'll end up in.
Replies from: estimator↑ comment by estimator · 2015-05-27T17:44:09.619Z · LW(p) · GW(p)
Do you think you won't awaken in a room with no in the envelope?
I think that I either wake up in a room with no in the envelope, or die, in which case my clone continues to live.
Yes, but I also think conscious experience is halted during regular sleep. Also, should multiple copies survive, his conscious experience will continue in multiple copies. His subjective probability of finding himself as any particular copy depends on the relative weightings (i.e. self-locating uncertainty).
I find this model implausible. Is there any evidence I can update on?
Replies from: ike↑ comment by ike · 2015-05-27T18:50:04.796Z · LW(p) · GW(p)
I think that I either wake up in a room with no in the envelope, or die, in which case my clone continues to live.
But this world I described is (or can be) completely deterministic; how can you be uncertain of what will happen? I understand how I can be subjectively uncertain due to self-locating uncertainty, but there should be no possible objective uncertainty in a deterministic world. The only out I see is if you think consciousness requires non-deterministic physical processes.
I find this model implausible. Is there any evidence I can update on?
I'm not sure I understand your reasoning here, so I'm not sure. Have you read the Ebborian posts in the quantum sequence?
What exactly do you think would happen when someone is cloned? Why would one copy be "real" and the other not? Would there be any way to detect which was real for outsiders?
Replies from: estimator↑ comment by estimator · 2015-05-27T19:18:37.726Z · LW(p) · GW(p)
OK, whether I wake up in a room with the "no" envelope or die depends (deterministically) on which envelope you have put in my room.
What exactly happens in the process of cloning certainly depends on a particular cloning technology; the real one is the one that shares a continuous line of conscious experience with me. The (obvious) way to detect which was real for an outsider is to look at where it came from -- if it was built as a clone, then, well, it is a clone.
Note that I'm not saying that it's the true model, just that I currently find it more plausible; none of the consciousness theories I've seen so far is truly satisfactory.
I've read the Ebborian posts and wasn't convinced; a thought experiment is just a thought experiment, there are many ways it can be flawed (that is true for all the thought experiments I proposed in this discussion, btw). But yes, that's a problem.
Replies from: ike↑ comment by ike · 2015-05-27T19:46:36.685Z · LW(p) · GW(p)
OK, whether I wake up in a room with the "no" envelope or die depends (deterministically) on which envelope you have put in my room.
I hope you realize that you're just moving the problem into determining which one is "your" room, considering neither room had any of you thinking in it until after one was killed.
What exactly happens in the process of cloning certainly depends on a particular cloning technology; the real one is the one that shares a continuous line of conscious experience with me. The (obvious) way to detect which was real for an outsider is to look at where it came from -- if it was built as a clone, then, well, it is a clone.
The root of our disagreement then seems to be this "continuous" insistence. In particular, you and I would disagree on whether consciousness is preserved with teleportation or stasis.
I could try to break that intuition by appealing to discrete time; does your model imply that time is continuous? It would seem unattractive for a model to postulate something like that.
What arguments/intuitions are causing you to find your model plausible?
Replies from: estimator↑ comment by estimator · 2015-05-27T19:58:50.496Z · LW(p) · GW(p)
I find a model plausible if it isn't contradicted by evidence and matches my intuitions.
My model doesn't imply discrete time; I don't think I can precisely explain why, because I basically don't know how consciousness works at that level; intuitively, just replace t + dt with t + 1. Needless to say, I'm uncertain of this, too.
Honestly, my best guess is that all these models are wrong.
Now, what arguments cause you to find your model plausible?
Replies from: ike↑ comment by ike · 2015-05-27T20:24:40.410Z · LW(p) · GW(p)
My model doesn't imply discrete time
I think your model implies the opposite; did you misunderstand me?
Now, what arguments cause you to find your model plausible?
(First of all, you didn't mention if you agree with my assessment of the root cause of our disagreement. I'll assume you do, and reply based on that.)
So, why do I think that consciousness doesn't require continuity? Well, partly because I think sleep disturbs continuity, yet I still feel like I'm mostly the same person as yesterday in important ways. I find it hard to accept that someone could act exactly like me and not be conscious, for reasons mostly similar to those in the zombie sequence. I identify consciousness with physical brain states, which makes it really hard to consider a clone somehow less, if it would have the exact same brain state as me. (For clones, that may not be practical, but for MWI-clones, it is.)
Replies from: estimator↑ comment by estimator · 2015-05-27T20:42:13.889Z · LW(p) · GW(p)
That's a typo; I meant that my model doesn't imply continuous time. By the way, does it make sense to call it "my model" if my estimate of the probability of it being true is < 50%?
So, why do I think that consciousness requires continuity?
I guess, you have meant "doesn't require"?
I'd say that continuity requirement is the main cause for the divergence in our plausibility rankings, at least.
What is your probability estimate of your model being (mostly) true?
Replies from: ike↑ comment by ike · 2015-05-27T20:49:00.281Z · LW(p) · GW(p)
I guess, you have meant "doesn't require"?
Fixed. I guess we're even now :)
By the way, does it make sense to call it "my model" if my estimate of the probability of it being true is < 50%?
You're criticising other theories based on something you put less than 50% credence in? That's how this all started.
What is your probability estimate of your model being (mostly) true?
More than 90%. If I had a consistent alternative that didn't require anything supernatural, then that would go down.
Replies from: estimator↑ comment by estimator · 2015-05-27T21:11:35.942Z · LW(p) · GW(p)
p("your model") < p("my model") < 50% -- that's how I see things :)
Here is another objection to your consciousness model. You say that you are unconscious while sleeping; so, at the beginning of sleep your consciousness flow disappears, and then appears again when you wake up. But your brain state is different before and after sleep. How does your consciousness flow "find" your brain after sleep? What if I, standing on another planet many light years away from Earth, build atom-by-atom a brain whose state is closer to your before-sleep brain state than your after-sleep brain state is?
The reason why I don't believe these theories with a significant degree of certainty isn't that I know some other brilliant consistent theory; rather, I think that all of them are more or less inconsistent.
Actually, I think that it's probably a mistake to consider consciousness a binary trait; but non-binary consciousness assumption makes it even harder to find out what is actually going on. I hope that the progress in machine learning or neuroscience will provide some insights.
Replies from: ike↑ comment by ike · 2015-05-27T21:18:21.313Z · LW(p) · GW(p)
You say that you are unconscious while sleeping; so, at the beginning of sleep your consciousness flow disappears, and then appears again when you wake up. But your brain state is different before and after sleep. How does your consciousness flow "find" your brain after sleep?
I don't think it's meaningful to talk about a "flow" here.
What if I, standing on another planet many light years away from Earth, build atom-by-atom a brain whose state is closer to your before-sleep brain state than your after-sleep brain state is?
Then that would contain my consciousness, as well as myself after awaking. You could try to quantify how similar and dissimilar those states might be, but they're still close enough to call it the same person.
What would you say to your thought experiment, if I replace "brain" with "computer", turn off my OS, then start it again? The state of RAM is not the same as it was right before shutdown, so who is to say it's the same computer? If you make hardware arguments, I'll tell you the HD was cloned after power-off, then transferred to another computer with identical hardware. If that preserves the state of "my OS", then the same should be true for "brains", assuming physicalism.
Replies from: estimator↑ comment by estimator · 2015-05-27T21:30:34.846Z · LW(p) · GW(p)
OK, suppose I come to you while you're sleeping, and add/remove a single neuron. Will you wake up in your model? Yes, because while you're naturally sleeping, many more neurons change. Now imagine that I alter your entire brain. Now, the answer seems to be no. Therefore, there must be some minimal change to your brain to ensure that a different person will wake up (i.e. with different consciousness/qualia). This seems strange.
You don't assume that the person who wakes up always has different consciousness with the person who fell asleep, do you?
It would be the same computer, but different working session. Anyway, I doubt such analogies are precise and allow for reliable reasoning.
Replies from: ike↑ comment by ike · 2015-05-27T21:50:12.533Z · LW(p) · GW(p)
Now imagine that I alter your entire brain. Now, the answer seems to be no.
Alter how? Do I still have memories of this argument? Do I share any memories with my past self? If I share all memories, then probably it's still me. If all have gone, then most likely not. (Identifying self with memories has its own problems, but let's gloss over them for now.) So I'm going to interpret your "remove a neuron" as "remove a memory", and then your question becomes "how many memories can I lose and still be me"? That's a difficult question to answer, so I'll give you the first thing I can think of. It's still me, just a lower percentage of me. I'm not that confident that it can be put to a linear scale, though.
Therefore, there must be some minimal change to your brain to ensure that a different person will wake up (i.e. with different consciousness/qualia). This seems strange.
This is a bit like the Sorites paradox. The answer is clearly to replace the binary same-consciousness dichotomy with a graded notion. That doesn't mean I can't point to an exact clone and say it's me.
You don't assume that the person who wakes up always has different consciousness with the person who fell asleep, do you?
Not sure what you mean. Some things change, so it won't be exactly the same. It's still close enough that I'd consider it "me".
It would be the same computer, but different working session. Anyway, I doubt such analogies are precise and allow for reliable reasoning.
Such analogies can help if they force you to explain the difference between computer and brain in this regard. You seem to have a model of computers identical to my model of brains; why isn't it illogical there?
Replies from: estimator↑ comment by estimator · 2015-05-27T22:15:22.731Z · LW(p) · GW(p)
That's a difficult question to answer, so I'll give you the first thing I can think of. It's still me, just a lower percentage of me. I'm not that confident that it can be put to a linear scale, though.
That is one of the reasons why I think binary-consciousness models are likely to be wrong.
There are many differences between brains and computers; they have different structure, different purpose, different properties; I'm pretty confident (>90%) that my computer isn't conscious now, and the consciousness phenomenon may have specific qualities which are absent in its image in your analogy. My objection to using such analogies is that you can miss important details. However, they are often useful to illustrate one's beliefs.
Replies from: ike↑ comment by ike · 2015-05-27T22:23:16.889Z · LW(p) · GW(p)
There are many differences between brains and computers; they have different structure, different purpose, different properties; I'm pretty confident (>90%) that my computer isn't conscious now, and the consciousness phenomenon may have specific qualities which are absent in its image in your analogy. My objection to using such analogies is that you can miss important details. However, they are often useful to illustrate one's beliefs.
Do you have any of these qualities in mind? It seems strange to reject something because "maybe" it has a quality that distinguishes it from another case. Can you point to any of these details that's relevant?
Replies from: estimator↑ comment by estimator · 2015-05-27T22:43:23.093Z · LW(p) · GW(p)
I don't think it's strange. Firstly, it does have distinguishing qualities; the question is whether they are relevant or not. So, you choose an analogy which shares the qualities you currently think are relevant; then you do some analysis of your analogy, and come to certain conclusions, but it is easy to overlook a step in the analysis which happens to depend significantly on a property that you previously thought was unimportant in the original model, and you can fail to see it, because it is absent in the analogy. So I think that double-checking results provided by analogy thinking is a necessary safety measure.
As for specific examples: something like quantum consciousness by Penrose (although I don't actually believe in it). Or any other reason why consciousness (not intelligence!) can't be reproduced in our computer devices (I don't actually believe it either).
Replies from: ike↑ comment by ike · 2015-05-28T15:10:29.434Z · LW(p) · GW(p)
Firstly, it does have distinguishing qualities; the question is whether they are relevant or not. So, you choose an analogy which shares the qualities you currently think are relevant; then you do some analysis of your analogy, and come to certain conclusions, but it is easy to overlook a step in the analysis which happens to depend significantly on a property that you previously thought was unimportant in the original model, and you can fail to see it, because it is absent in the analogy. So I think that double-checking results provided by analogy thinking is a necessary safety measure.
I'm not saying not to double check them. My problem was that you seemed to have come to a conclusion that requires there to be a relevant difference, but didn't identify any.
As for specific examples: something like quantum consciousness by Penrose (although I don't actually believe in it). Or any other reason why consciousness (not intelligence!) can't be reproduced in our computer devices (I don't actually believe it either).
Even repeating the thought experiment with a quantum computer doesn't seem to change my intuition.
↑ comment by Fivehundred · 2015-05-24T20:31:57.848Z · LW(p) · GW(p)
You say that one can change A by changing B, while there is no causal mechanism by which B can influence A. That's denying causality.
That's finding a loophole in causality, and the distinction is certainly worth making. The DA is only a product of perspective; it isn't a 'real' thing that exists.
Replies from: estimator↑ comment by estimator · 2015-05-24T20:47:42.482Z · LW(p) · GW(p)
Whether the distinction is worth making or not, it is irrelevant to my point, since both are very unlikely and therefore require much more evidence than we have now.
I assume that your idea is to prevent doomsday or make it less likely. If not, why bother with all these simulations?
Replies from: Fivehundred↑ comment by Fivehundred · 2015-05-25T09:30:50.984Z · LW(p) · GW(p)
Whether the distinction is worth making or not, it is irrelevant to my point, since both are very unlikely and therefore require much more evidence than we have now.
Look, does this seem like solid reasoning to you? Because your arguments are beginning to sound quite like it.
I am not the first LessWronger to think of a causality-evading idea, btw.
Replies from: estimator, ThisSpaceAvailable↑ comment by estimator · 2015-05-25T10:14:14.040Z · LW(p) · GW(p)
Nope: there is sufficient evidence that the Earth is not flat, but there isn't sufficient evidence that causality doesn't exist. That is the difference. There are some counterintuitive theories, like QM or relativity or, maybe, round Earth, but all of them have been supported by a lot of evidence, there were actual experiments to prove them, etc. And these theories appeared, because old theories failed to explain existing evidence.
Can you name a single real-world example where causality doesn't work?
And you're not the first LessWronger to think that if your idea sounds clever enough, you don't actually need any evidence to prove it.
Replies from: Fivehundred↑ comment by Fivehundred · 2015-05-25T10:47:52.352Z · LW(p) · GW(p)
"Species can't evolve, that violates thermodynamics! We have too much evidence for thermodynamics to just toss it out the window."
Just realized how closely your argument mirrors this.
Replies from: estimator↑ comment by estimator · 2015-05-25T11:00:20.714Z · LW(p) · GW(p)
Er.. what? Evolution doesn't violate thermodynamics.
Bad analogies don't count as solid arguments, either. The difference between the evolution/thermodynamics example and your case is that the relation between thermodynamics and evolution is complicated, and in fact there is no contradiction, while it's evident that your idea works only if you can acausally influence something. That's much closer to a perpetual motion machine (a direct contradiction) than to evolution (an indirect, questionable contradiction which turns out to be false).
Replies from: Fivehundred↑ comment by Fivehundred · 2015-05-25T11:23:53.672Z · LW(p) · GW(p)
Look, I explained the details in the OP. Create a lot of Earths and hope that yours turns out to be one of them. That already violates causality, according to your standards. I don't see much of a way to make it clearer.
Replies from: bortels↑ comment by bortels · 2015-06-01T21:34:53.410Z · LW(p) · GW(p)
Ah - that's much clearer than your OP.
FWIW - I suspect it violates causality under nearly everyone's standards.
You asked if your proposal was plausible. Unless you can postulate some means to handle that causality issue, I would have to say the answer is "no".
So - you are suggesting that if the AI generates enough simulations of the "prime" reality with enough fidelity, then the chances that a given observer is in a sim approach 1, because of the sheer quantity of them. Correct?
If so - the flaw lies in orders of infinity. For every way you can simulate a world, you can incorrectly simulate it an infinite number of other ways. So - if you are in a sim, it is likely with a chance approaching unity that you are NOT in a simulation of the higher level reality simulating you. And if it's not the same, you have no causality violation, because the first sim is not actually the same as reality; it just seems to be from the POV an an inhabitant.
The whole thing seems a bit silly anyway - not your argument, but the sim argument - from a physics POV. Unless we are actually in a SIM right now, and our understanding of physics is fundamentally broken, doing the suggested would take more time and energy than has ever or will ever exist, and is still mathematically impossible (another orders of infinity thing).
Replies from: Fivehundred↑ comment by Fivehundred · 2015-06-02T01:42:11.379Z · LW(p) · GW(p)
FWIW - I suspect it violates causality under nearly everyone's standards.
Oh god damn it, LessWrong is responsible for every single premise of my argument. I'm just the first to make it!
As for the rest of your post: I have to admit I did not consider this, but I still don't see why they wouldn't just create a less complex physical universe for the simulation.
Or maybe I'm misunderstanding you. My brain is feeling more than usually fried at the moment.
↑ comment by ThisSpaceAvailable · 2015-05-29T05:10:30.154Z · LW(p) · GW(p)
Look, does this seem like solid reasoning to you? Because your arguments are beginning to sound quite like it.
"Species can't evolve, that violates thermodynamics! We have too much evidence for thermodynamics to just toss it out the window."
Listing arguments that you find unconvincing, and simply declaring that you find your opponent's argument to be similar, is not a valid line of reasoning, isn't going to make anyone change their mind, and is kind of a dick move. This is, at its heart, simply begging the question: the similarity that you think exists is that you think all of these arguments are invalid. Saying "this argument is similar to another one because they're both invalid, and because it's so similar to an invalid argument, it's invalid" is just silly.
"My argument shares some similarities to an argument made by someone respected in this community" isn't much of an argument, either.
Replies from: Fivehundred↑ comment by Fivehundred · 2015-05-29T11:51:50.421Z · LW(p) · GW(p)
Sure, but I found the analogy useful because it is literally the exact same thing. Both draw a line between a certain mechanism and a broader principle with which it appears to clash if the mechanism were applied universally. Both then claim that the principle is very well established and that they do not need to condescend to address my theory unless I completely debunk the principle, even though the theory is very straightforward.
I was sort of hoping that he would see it for himself, and do better. This is a rationality site after all; I don't think that's a lot to ask.
Replies from: ThisSpaceAvailable↑ comment by ThisSpaceAvailable · 2015-05-29T18:13:50.020Z · LW(p) · GW(p)
You clearly expect estimator to agree that the other arguments are fallacious. And yet estimator clearly believes that zir argument is not fallacious. To assert that they are literally the same thing, that they are similar in all respects, is to assert that estimator's argument is fallacious, which is exactly the matter under dispute. This is begging the question. I have already explained this, and you have simply ignored my explanation.
All the similarities that you cite are entirely irrelevant. Simply noting similarities between an argument, and a different, fallacious argument, does nothing to show that the argument in question is fallacious as well, and the fact that you insist on pretending otherwise does not speak well to your rationality.
Estimator clearly believes that there is no way that creating simulations can affect whether we are in a simulation. You have presented absolutely no argument for why it can. Instead, you've simply declared that your "theory" is "straightforward", and that disagreeing is unacceptable arrogance. Arguing that your "theory" violates a well-established principle is addressing your "theory". So apparently, when you write "do not need to condescend to address my theory", what you really mean is "have failed to present a counterargument that I have deigned to recognize as legitimate".
comment by bortels · 2015-06-01T04:57:31.233Z · LW(p) · GW(p)
Seems backwards. If you are a society that has actually designed and implemented an AI and the infrastructure capable of "creating billions of simulated humanities", it seems you are de facto the "real" set, as you can see the simulated ones, and a recursive nesting of such things should, in theory, have artifacts of some sort (i.e., a "fork bomb", in Unix parlance).
I rather think that pragmatically, if a simulated society developed an AI capable of simulating society in sufficient fidelity, it would self-limit - either the simulations would simply lack fidelity, or the +1 society running us would go "whoops, that one is spinning up exponentially" and shut us down. If you really think you are in a simulated society, things like this would be tantamount to suicide...
I don't find the Doomsday argument compelling, simply because it assumes something is not the case ("we are in the first few percent of humans born") just because it is improbable.
Replies from: Fivehundred↑ comment by Fivehundred · 2015-06-01T13:25:15.869Z · LW(p) · GW(p)
Seems backwards. If you are a society that has actually designed and implemented an AI and the infrastructure capable of "creating billions of simulated humanities", it seems you are de facto the "real" set, as you can see the simulated ones, and a recursive nesting of such things should, in theory, have artifacts of some sort (i.e., a "fork bomb", in Unix parlance).
No, the entire point is that you don't know whether you are simulated before the Singularity; afterwards, the danger has already been averted.
I rather think that pragmatically, if a simulated society developed an AI capable of simulating society in sufficient fidelity, it would self-limit - either the simulations would simply lack fidelity, or the +1 society running us would go "whoops, that one is spinning up exponentially" and shut us down. If you really think you are in a simulated society, things like this would be tantamount to suicide...
Why? The terminal point is the creation of FAI. But they wouldn't shut down the humans in the simulation; that would defeat the whole point of the thing.
I don't find the Doomsday argument compelling, simply because it assumes something is not the case ("we are in the first few percent of humans born") just because it is improbable.
...so you are arguing that probability doesn't mean anything? Something that will happen in 99.99% of universes can be safely assumed to occur in ours.
Replies from: TheAncientGeek, bortels↑ comment by TheAncientGeek · 2015-07-17T14:23:41.325Z · LW(p) · GW(p)
...so you are arguing that probability doesn't mean anything? Something that will happen in 99.99% of universes can be safely assumed to occur in ours.
Absent other information.
Replies from: Fivehundred↑ comment by Fivehundred · 2015-07-17T19:39:20.592Z · LW(p) · GW(p)
What is this supposed to mean?
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2015-07-17T20:20:44.733Z · LW(p) · GW(p)
Evidence of some unique features.
↑ comment by bortels · 2015-06-01T21:17:41.197Z · LW(p) · GW(p)
No, the entire point is that you don't know whether you are simulated before the Singularity; afterwards, the danger has already been averted.
Then perhaps I simply do not understand the proposal.
The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare.
This is where I am confused. The "of course" is not very "of coursey" to me. Can you explain how a self-modifying AI would be risky in this regard (a citation is fine, you do not need to repeat a well known argument I am simply ignorant of).
I am also foggy on terminology - DA and FAI and so on. I don't suppose there's a glossary around. Ok - DA is "Doomsday Argument" from the thread context (which seems silly to me - the SSA seems to be wrong on the face of it, which then invalidates DA).
Replies from: Fivehundred↑ comment by Fivehundred · 2015-06-02T01:28:12.018Z · LW(p) · GW(p)
Can you explain how a self-modifying AI would be risky in this regard (a citation is fine, you do not need to repeat a well known argument I am simply ignorant of).
I'm not sure you can avoid picking it up just by being on this site. http://www.yudkowsky.net/singularity/ai-risk/
which seems silly to me - the SSA seems to be wrong on the face of it
You clearly know something I don't.
Replies from: bortels↑ comment by bortels · 2015-06-04T04:00:42.077Z · LW(p) · GW(p)
Ah - I'd seen the link, but the widget just spun. I'll go look at the PDF. The below was written before I read it - it could be amusing and humility-inducing if reading it makes me change my mind (and I will surely report back if that happens).
As for the SSA being wrong on the face of it - the DA wiki page says "The doomsday argument relies on the self-sampling assumption (SSA), which says that an observer should reason as if they were randomly selected from the set of observers that actually exist." Assuming this is accurate (I do not know enough to judge yet), if the SSA is false, then the DA is unsupported.
So - let's look at the SSA. In a nutshell, it revolves around how unlikely it is that you were born in the first small percentage of human history - and ergo, doomsday must be around the corner.
I can think of two very strong arguments for the SSA being untrue.
First - this isn't actually how probability works. Take a fair coin and decide to flip it. The probabilities of heads and tails are the same: 1/2, 50% each. Flip the coin and note the result. The probability is now unity - there is no magic way to get that 50/50 back. That coin-toss result is now and forever more heads (or tails). You cannot look at a given result, work backwards to how improbable it was, and then use that - because it is no longer improbable; it's history. Probability does not actually work backwards in time, although it is convenient in some cases to pretend it does.
Another example - what is the probability that I was born at the exact second, minute, hour, and day, and in the exact location where I was born, out of the countless other places and times in which humanity has existed and I could have been born? The answer, of course: unity. And nil at all other places and times, because it has already happened - the waveform, if you will, has collapsed; Elvis has left the building.
So - what is the probability that you were born so freakishly early in humanity's 5-million-year reign, in the first 0.000001% of all humans who will ever live? Unity. Because it's history. And the only thing making this position any different whatsoever from the others is blind chance. There is nothing one bit special about being in the first sliver, other than that it allows you to notice that. (Feel free to substitute anything for 5 million above - it's all the same.)
Second - there are also logical issues: you can spin the argument on its head, and it still works (with less force, to be sure). What are the chances of my being alive for Doomsday? Fairly small - despite urban legend, the people alive today are a fairly small percentage (6-7%) of all who have ever lived. Ergo, doomsday cannot be soon, because it was unlikely I would be born in time to see it. (Again, flawed - right now, the likelihood that I was born at this time is unity.)
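(For anyone who wants to sanity-check that 6-7% figure, here is a minimal sketch using rough published demographic estimates - roughly 117 billion humans ever born and roughly 8 billion alive today; both numbers are approximations, not exact counts.)

```python
ever_born = 117e9  # approximate cumulative number of humans ever born
alive_now = 8e9    # approximate number of humans alive today
print(f"fraction alive today: {alive_now / ever_born:.1%}")  # prints about 6.8%
```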
An argument that can be used to "prove" both T and ~T is flawed and should be discarded, quite apart from the probability issue. "Prove" here is being used very loosely, because this is nowhere close to a proof - which is good, because I like things like math working.
Time to go read a PDF.
Update: Done. That was quite enjoyable, thank you. A great deal of food for thought, and like most good, crunchy, info-filled things, there were bits I quite agreed with and bits I quite disagreed with (and that's fine).
I took some notes; I will not attempt to post them here, because I have already run into comment-length issues, and I'm a wordy SOB. I can post them to a gist or something if anyone is interested; I kept them mostly so I could comment intelligently after reading it. Scanning back through for the important bits:
Anthropomorphic reasoning would be useless, as the paper suggests - unless the AI were designed by and for humans to use. Which it would be. So it may well be useful in the beginning, because presumably we would be modeling desired traits (like "friendliness") on human traits. That could easily fail catastrophically later, of course.
The comparison between evolution and AI, in terms of their relation to humans, on page 11 was profound and very well said.
There are an awful lot of assumptions presented as givens and then used to assert other things. If any of them is wrong, the chain breaks. There were also a few suggestions that would violate physics, though the point being made was still valid ("With molecular nanotechnology, the AI could (potentially) rewrite the solar system unopposed" was my favorite; it is probably worth separating what is possible from what is impossible, given things like distance, energy, and time - not to mention "why?").
There is an underlying assumption that intelligence can increase without bound. I am by no means sure this is true - I can think of no other trait that does so; you run into limits (again) of physics and energy and so on. It is very possible that things like speed-of-light propagation delay, heat, and the inherent difficulty of certain tasks such as factoring would impose an upper limit on the intelligence of an AI before it reached the w00 w00 god-power magic stage. Not that it matters much: if its goal is to harm us, it doesn't need to be very smart to do that...
Anyone thinking an AI might want my body for its atoms is not thinking clearly. I am made primarily of carbon, hydrogen, and oxygen - all plentiful, in much easier-to-work-with forms, elsewhere. An early-stage AI bootstrapping production would almost certainly want metals, some basic elements like silicon, and hydrocarbons (which we keep handy). Oh, and likely fissionables for power. Not us. Later on, all bets are off, but there are still far better places to get atoms than people.
Finally - the flaw in assuming an AI will predate mind uploading is motivation. Death is a powerful, powerful motivator. A researcher close to being able to do it, and about to die, is damn well going to try, no matter what the government says they can or can't do - I would. And the guesses as to the fidelity required are just that - guesses. Life extension is a powerful, powerful draw. Uploading may also ultimately be easier - hand-waving away a ton of details, it's just copying and simulation; it does not require new, creative inventions, just refinements of current ideas. You don't need to fully understand how something works to scan and simulate it.
Enough. If you have read this far - more power to you, thank you much for your time.
PS. I still don't get the whole "simulated human civilizations" bit - the paper did not seem to touch on that. But I rather suspect it's the same backwards probability thing...
Replies from: gjm↑ comment by gjm · 2015-06-04T14:36:35.008Z · LW(p) · GW(p)
I think you're wrong about "backwards probability".
Probabilities describe your state of knowledge (or someone else's, or some hypothetical idealized observer's, etc.). It is perfectly true that "your" probability for some past event known to you will be 1 (or rather something very close to 1 but allowing for the various errors you might be making), but that isn't because there's something wrong with probabilities of past events.
Now, it often happens that you need to consider probabilities that ignore bits of knowledge you now have. Here's a simple example.
I have a 6-sided die. I am going to roll the die, flip a number of coins equal to the number that comes up, and tell you how many heads I get. Let's say the number of heads is 2. Now I ask you: how likely is it that I rolled each possible number on the die?
To answer that question (beyond the trivial observation that clearly I didn't roll a 1) one part of the calculation you need to do is: how likely was it, given a particular die roll but not the further information you've gained since then, that I would get 2 heads? You will get completely wrong answers if you answer all those questions with "the probability is 1 because I know it was 2 heads".
(Here's how the actual calculation goes. If the result of the die roll was k, then Pr(exactly 2 heads) was (k choose 2) / 2^k, which for k=1..6 goes 0, 1/4, 3/8, 6/16, 10/32, 15/64; since all six die rolls were equiprobable to start with, your odds after learning how many heads are proportional to these or (taking a common denominator) to 0 : 16 : 24 : 24 : 20 : 15, so e.g. Pr(roll was 6 | two heads) is 15/99 = 5/33. Assuming I didn't make any mistakes in the calculations, anyway.)
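For anyone who would rather check this numerically than trust the algebra, here is a minimal Python sketch (assuming a fair die and fair coins, exactly as in the setup above; the function and variable names are just illustrative) that computes the exact posterior and compares it against a Monte Carlo simulation:

```python
import random
from collections import Counter
from math import comb

# Exact posterior: odds proportional to C(k, 2) / 2**k for die rolls k = 1..6.
weights = {k: comb(k, 2) / 2 ** k for k in range(1, 7)}
norm = sum(weights.values())
exact = {k: w / norm for k, w in weights.items()}

# Monte Carlo check: roll a die, flip that many coins, keep runs with exactly 2 heads.
def simulate(trials=1_000_000, seed=0):
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(trials):
        k = rng.randint(1, 6)                               # the die roll
        heads = sum(rng.random() < 0.5 for _ in range(k))   # flip k fair coins
        if heads == 2:                                      # condition on the observation
            counts[k] += 1
    total = sum(counts.values())
    return {k: counts[k] / total for k in sorted(counts)}

print("exact:    ", {k: round(p, 4) for k, p in exact.items()})
print("simulated:", {k: round(p, 4) for k, p in simulate().items()})
# Pr(roll was 6 | two heads) should come out near 15/99, i.e. about 0.1515.
```

The two lines should agree to within simulation noise, matching the 0 : 16 : 24 : 24 : 20 : 15 odds above (a roll of 1 can never produce two heads, so it simply never appears among the conditioned runs).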
The SSA-based calculations work in a similar way.
- Consider the possible different numbers of humans there could ever have been (like considering all the possible die rolls).
- For each, see how probable it is that you'd have been human # 70 billion, or whatever the figure is (like considering how probable it was that you'd get two heads).
- Your posterior odds are obtained from these probabilities, together with the probabilities of different numbers of human beings a priori.
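To make the analogy concrete, here is a toy version of that SSA update written in the same style; the two hypotheses, the equal priors, and the birth-rank figure are made-up illustrative numbers, not claims about real demographics or extinction risk:

```python
# Toy SSA/doomsday-style update, structurally identical to the die+coins example.
# All numbers below are illustrative assumptions.

MY_BIRTH_RANK = 70e9  # "I am roughly human number 70 billion"

# Hypotheses about the total number of humans who will ever live, with priors.
hypotheses = {
    100e9: 0.5,   # "doom soon": 100 billion humans in total
    10e12: 0.5,   # "doom late": 10 trillion humans in total
}

# SSA likelihood: if N humans exist in total, any particular birth rank has probability 1/N.
unnormalized = {N: prior / N for N, prior in hypotheses.items() if MY_BIRTH_RANK <= N}
norm = sum(unnormalized.values())
posterior = {N: p / norm for N, p in unnormalized.items()}

for N, p in sorted(posterior.items()):
    print(f"P(total = {N:.0e} humans | my birth rank) = {p:.4f}")
# With equal priors, the "doom soon" hypothesis ends up around 99% probable -
# the characteristic doomsday-argument shift that the post is trying to escape.
```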
I am not claiming that you should agree with SSA. But the mere fact that it employs these backward-looking probabilities is not an argument against it; if you disagree, you should either explain why computations using "backward probabilities" correctly solve the die+coins problem (feel free to run a simulation to verify the odds I gave) despite the invalidity of "backward probabilities", or else explain why the b.p.'s used in the doomsday argument are fundamentally different from the ones used in the die+coins problem.