Resurrection through simulation: questions of feasibility, desirability and some implications
post by jacob_cannell · 2012-05-24T07:22:20.480Z · LW · GW · Legacy · 57 comments
Could a future superintelligence bring back the already dead? This discussion came up a while back (and see the somewhat related earlier threads); I'd like to resurrect the topic because ... it's potentially quite important.
Algorithmic resurrection is a possibility if we accept the same computational patternist view of identity that suggests cryonics and uploading will work. I see this as the only view consistent with my observations, but if you don't buy this argument/belief set then the rest may not be relevant.
The general implementation idea is to run a forward simulation over some portion of earth's history, constrained to enforce compliance with all recovered historical evidence. The historical evidence would consist mainly of all the scanned brains and the future internet.
The thesis is that to the extent you can retrace historical reality, complete with simulated historical people and their thoughts, memories, and emotions, to that same extent you actually recreate/resurrect those historical people.
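To make the idea concrete, here is a minimal sketch of what such a constrained forward simulation could look like, treated as a particle-filter-style search over candidate histories. Every name here (step_world, fit, the resampling count) is a hypothetical placeholder, not a proposed implementation.

```python
import random

def constrained_simulation(initial_states, evidence_by_year, step_world, fit, years, k=1000):
    """Toy particle-filter over candidate histories.

    initial_states: guesses at the starting world state (hypothetical)
    evidence_by_year: dict mapping year -> recovered historical evidence
    step_world(state, year): advances one candidate world by one year (hypothetical)
    fit(state, evidence): scores 0..1 how well a state matches the evidence (hypothetical)
    """
    candidates = list(initial_states)
    for year in years:
        # advance every candidate history forward in time
        candidates = [step_world(state, year) for state in candidates]
        evidence = evidence_by_year.get(year)
        if evidence is not None:
            # keep candidate paths roughly in proportion to how well they fit the evidence
            weights = [fit(state, evidence) for state in candidates]
            candidates = random.choices(candidates, weights=weights, k=k)
    return candidates  # an ensemble of histories consistent with the recovered evidence
```

The point of the sketch is only that recovered evidence acts as a filter on otherwise-divergent simulated paths, not that this is how a superintelligence would actually do it.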
So the questions are: Is it feasible? Is it desirable/ethical/utility-efficient? And finally, why might this matter?
Simulation Feasibility
A few decades ago Pong was a technical achievement; now we have Avatar. The trajectory suggests we are on track to photorealistic simulations fairly soon (decades). Offline graphics for film are arguably already photoreal, real-time rendering is close behind, and the biggest remaining problem is the uncanny valley, which really is just the AI problem by another name. Once we solve that (which we are assuming), the Matrix follows. Superintelligences could help.
There are some general results in computer graphics suggesting that simulating an observer-optimized world requires resources only in proportion to the observational power of the observers. Video game and film renderers in fact already rely heavily on this strategy.
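As a rough illustration of the observer-proportional claim (the numbers and field names below are invented purely for illustration), the cost model that renderers exploit looks something like this:

```python
def render_cost(observers, cost_per_bit=1.0):
    """Toy cost model: simulation work scales with what the observers can actually
    perceive (total observational bandwidth), not with the raw size of the world.
    This is the level-of-detail trick renderers already use, nothing more."""
    return cost_per_bit * sum(o["bandwidth_bits_per_s"] for o in observers)

# Illustrative, assumed figure for one human's sensory bandwidth; the exact number
# doesn't matter for the point, only that the cost is per-observer.
humans = [{"bandwidth_bits_per_s": 1e7} for _ in range(100)]
print(render_cost(humans))  # doubling unobserved space adds nothing; adding observers does
```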
Criticism from Chaos: We can't even simulate the weather more than a few weeks in advance.
Response: Simulating the exact future state of specific chaotic systems may be hard, but simulating chaotic systems in general is not. In this case we are not simulating the future state, but the past. We already know something of the past state of the system, to some level of detail, and we can simulate the likely (or multiple likely) paths within this configuration space, filling in detail.
Physical Reversibility Criticism: The AI would have to rewind time; it would have to know the exact state of every atom on earth and every photon that has left earth.
Response: Yes, the most straightforward brute-force way to infer the past state of earth would be to compute the reverse of all physical interactions, and that would require ridiculously impractical amounts of information and computation. But the best algorithm for a given problem is usually not brute force. The specifying data of a human mind is infinitesimal in comparison, and even a random guessing algorithm would probably require fewer resources than fully reversing history.
Constrained simulation converges much faster toward accurate recovery, and full, perfect recovery is by no means required for (partial) success. The patternist view of identity is fluid and continuous.
If resurrecting a specific historical person is better than creating a hypothetical person, then creating a somewhat-historical person is also better, and the closer the match the better.
Simulation Ethics
Humans appear to value other humans, but each human values some more than others. Typically, a human values themselves the most, then kin and family, followed by past contacts, tribal affiliations, and the vaguely similar.
We can generalize this as a valuation over person-space which peaks at one's own identity-pattern and then declines in some complex fashion as we move outward to more distant and less related people.
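A toy way to picture this valuation distribution (the exponential form, the constants, and the distances are arbitrary illustrative assumptions, not a claim about real human values):

```python
import math

def valuation(identity_distance, peak=1.0, falloff=0.5):
    """Toy valuation curve over person-space: peaks at the self (distance 0) and
    decays with distance in identity-space. The functional form and falloff
    constant are illustrative assumptions about the shape, nothing more."""
    return peak * math.exp(-falloff * identity_distance)

# e.g. self > kin > past contacts > vaguely similar strangers (distances are made up)
for label, d in [("self", 0.0), ("kin", 1.0), ("past contact", 2.5), ("stranger", 6.0)]:
    print(label, round(valuation(d), 3))
```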
If we extrapolate this to a future where humans have the power to create new humans and/or recreate past humans, we can infer that the distribution of created people may follow the self-centered valuation distribution.
Thus recreating specific ancestors or close relations is better than recreating vaguely historical people which is better than creating non-specific people in general.
Suffering Criticism: An ancestral simulation would recreate a huge amount of suffering.
Response: Humans suffer and live in a world that seems to suffer greatly, and yet very few humans prefer non-existence over their suffering. Evolution culls existential pessimists.
Recreating a past human will recreate their suffering, but it could also grant them an afterlife filled with tremendous joy. The relatively small, finite suffering may not add up to much in this consideration. The initial suffering could even enhance, by contrast, the subsequent elevation to a joyful state, but this is speculative.
The utilitarian calculus seems to be: create non-suffering generic people whom we value somewhat less, vs. recreate initially suffering specific historical people whom we value more. In some cases (such as lost loved ones), the moral calculus weighs heavily in favor of recreating specific people. Many other historical people may be brought along for the ride.
Closed Loops
The vast majority of the hundred-billion-odd humans who have ever lived share the singular misfortune of simply being born too early in earth's history to be saved by cryonics and uploading.
Recreating history up to 2012 would require one hundred billion virtual brains. Simulating history into the phase when uploading and virtual brains become common could vastly increase the simulation costs.
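For scale, here is a back-of-envelope estimate; every number below is an assumption rather than a claim, and different assumptions shift the answer by orders of magnitude.

```python
# Back-of-envelope only; every figure here is an assumption, not a claim from the post.
brain_ops_per_s = 1e16        # one commonly cited rough estimate for whole-brain emulation
seconds_per_year = 3.15e7
avg_simulated_years = 40      # assumed average simulated lifespan
num_people = 1e11             # "one hundred billion"

total_ops = brain_ops_per_s * seconds_per_year * avg_simulated_years * num_people
print(f"{total_ops:.1e} brain-simulation operations")   # ~1.3e36 under these assumptions
```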
The simulations have the property that they become more accurate as time progresses. If a person is cryonically preserved and then scanned and uploaded, this provides exact information: simulations will converge to perfect accuracy at that particular moment in time. In addition, the cryonic brain will be unconscious and inactive for a stretch.
Thus the moment of biological death, even if the person is cryonically preserved, could be an opportune time to recycle simulation resources, as there is no loss of unique information (the threads have converged).
How would such a scenario affect the Simulation Argument? It would seem to shift probabilities such that more (most?) observer moments are in pre-uploading histories, rather than in posthuman timelines. I find this disquieting for some reason, even though I don't suspect it will affect my observational experience.
57 comments
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-05-25T00:45:57.602Z · LW(p) · GW(p)
The primary problem and question is whether a pattern-identical version of you with different causal antecedents is the same person. Believing that uploading works, in virtue of continuing the same type of pattern from a single causal antecedent, does not commit you to believing that destruction, followed by a whole new causal process coincidentally producing the same pattern, is continuity of consciousness.
Many will no doubt reply saying that what I am questioning is meaningless, in which case, I suppose, it works as well as anything does; but for myself I do not dismiss such things as meaningless until I understand exactly how they are meaningless.
Replies from: jacob_cannell, TheOtherDave
↑ comment by jacob_cannell · 2012-05-25T16:23:22.434Z · LW(p) · GW(p)
An upload will become a file, a string of bits. Said file could then be copied, or even irreversibly mixed if you prefer, into many such files, which all share the same causal antecedents. But we could also create an identical file through a purely random process, and the randomly-created file and the upload file are logically/physically/functionally identical. We could even mix and scramble them if desired, but it wouldn't really matter because these are just bits, and bits have no intrinsic history tags. You have spent some time dismantling zombie arguments, and it would seem there is an analog here: if there is no objective way, in practice/principle, to differentiate two minds (mindfiles), then they are the same in practice/principle.
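As a toy illustration of the "no history tags" point (the byte strings here are random stand-ins, not actual mind-files):

```python
import os, hashlib

upload = os.urandom(32)       # stand-in for a mind-file produced by scanning a brain
lucky_guess = bytes(upload)   # stand-in for the very same bits produced some other way

# The bytes carry no record of how they came to exist: every test we can run on them
# (equality, hashing, executing them as data) treats the two strings identically.
print(upload == lucky_guess)                                                           # True
print(hashlib.sha256(upload).hexdigest() == hashlib.sha256(lucky_guess).hexdigest())   # True
```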
On the other hand, I doubt that creating a human-complexity mindfile through a random process will be computationally tractable anytime soon, and so I agree that recreating the causal history is the likely path.
But if you or I die and then one or the other goes on to create a FAI which reproduces the causal history of earth, it will not restore our patterns through mere coincidence alone.
I'm curious though, as I wouldn't have predicted that this would be your disagreement. Have you written something on your thoughts on this view of identity?
↑ comment by TheOtherDave · 2012-05-25T03:37:39.870Z · LW(p) · GW(p)
I don't assert that it's meaningless, but if two agents being "pattern-identical" entails them behaving the same ways and having the same experiences in the same situations, then I'm not sure why replacing me with a pattern-identical agent -- even if it turns out that we aren't the same person in some meaningful sense -- is something I should care about one way or another.
Replies from: cousin_it
↑ comment by cousin_it · 2012-05-25T08:39:44.424Z · LW(p) · GW(p)
I seem to have preferences about subjective anticipation. For example, if I were a selfish person placed in mjd's scenario and someone told me that I have zero chance of "continuing into" the uploaded version, then I wouldn't bother uploading.
Replies from: Vladimir_Nesov, TheOtherDave
↑ comment by Vladimir_Nesov · 2012-05-25T09:37:05.844Z · LW(p) · GW(p)
Apparent preferences about decisions run a risk of actually being rationalizations of intuitively appealing incorrect decisions (and not real preferences). Especially in situations where consequentialist analysis runs against intuition, the decision suggested by intuition can feel like preference, maybe because preferences are usually accessed in the form of intuitions.
↑ comment by TheOtherDave · 2012-05-25T13:21:07.395Z · LW(p) · GW(p)
Oh, certainly... I agree that it's possible to care about whether I'm replaced by such an agent.
Heck, it's possible to care about whether I'm replaced with my future self... e.g., I can imagine deciding there's no point to continuing at my job and saving money for retirement, because 70-year-old-Dave is a completely different person and why should he get to spend this money instead of me?
What I'm not sure about is whether I should care.
That said, though, as far as I can tell through imagining scenarios, I don't in fact seem to care.
That is, if I imagine myself in an upload scenario like that, my intuitive sense about it is that I get up from the machine and go on to get cancer and die, and also that I get up from the machine and go on to live in a digitized Utopia, and these are two distinct loci of experience neither of which is each other but both of which are me, and to the extent that I care about my future selves at all (which is considerable) I care about both of them.
Of course, people's judgments of that sort of thing are notoriously unreliable. If I imagine myself being told I have six weeks to live, for example, my intuitive sense is that I would feel relieved and kind of joyful. I find it unlikely that this would actually be my reaction.
Replies from: DanArmak
↑ comment by DanArmak · 2012-05-25T19:49:45.821Z · LW(p) · GW(p)
The view you describe is a common one, but it leaves some things underspecified.
Specifically, you anticipate getting up from the chair in real life and also in the simulation - but that's only clearly true if the copies are perfectly faithful. What if a slightly different copy of you is created - should you expect to experience being that copy slightly less, or not expect it at all?
If the latter, how perfect does a copy have to be? A quantum-identical copy is practically impossible. Do different substrates (simulations) count or is that too much difference?
If the former, there are important questions as to which differences influence your expectations and by how much. (Both cloned physical bodies and simulations leave a great many parameters open for slight changes.) The drop-off in your expectations controls your preferences and actions (e.g. how to split your wealth between the copies before their creation), but it seems impossible to obtain empirical data about this - after creating the clones, they behave in whatever way you created them to behave, and remember whatever you created them to remember, which proves nothing. On the other hand, we don't seem to partake in the experiences of slightly different people, who ought to exist in an infinite universe... I believe this is part of what Eliezer refers to as not fully understood.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-05-25T21:38:53.358Z · LW(p) · GW(p)
Sure. Specifying my position more precisely will take a fair number of words, but OK, here goes.
There are three entities under discussion here:
A = Dave at T1, sitting down in the copier.
B = Dave at T2, standing up from the copier.
C = Copy-of-Dave at T2, standing up from the copier.
...and the question at hand is which of these entities, if any, is me. (Yes? Or is that a different question than the one you are interested in?)
Well, OK. Let's start with A... why do I believe A is me?
Well, I don't, really. I mean, I have never sat down at an identity-copying machine.
But I'm positing that A is me in this thought experiment, and asking what follows from that.
Now, consider B... why do I believe B is me?
Well, in part because I expect B and A to be very similar, even if not quite identical.
But is that a fair assumption in this thought experiment?
It might be that the experience of knowing C exists would cause profound alterations in my psyche, such that B believes (based on his memories of being A) that A was a very different person, and A would agree if he were somehow granted knowledge of what it was like to be B. I'm told having a child sometimes creates these kinds of profound changes in self-image, and it would not surprise me too much if having a duplicate sometimes did the same thing.
More mundanely, it might be that the experience of being scanned for copy causes alterations in my mind, brain, or body such that B isn't me even if A is.
Heck, it's possible that I'm not really the same person I was before my stroke... there are certainly differences. It's even more possible that I'm not really the person I was at age 2... I have less in common with that entity than I do with you.
Thinking about it, it seems that there's a complex cluster of features that I treat as evidence of identity being preserved from one moment to another, none of which is either sufficient or necessary in isolation. Sharing memories is one such feature. Being in the same location is another. Having the same macroscopic physical composition (e.g. DNA) is a third. Having the same personality is a fourth. (Many of these are themselves complex clusters of neither-necessary-nor-sufficient features.)
For convenience, I will label the comparison operation that relies on that set of features to judge similarity F(x,y). That is, what F(A,B) denotes is comparing A and B, determining how closely they match along the various referenced dimensions, weighting the results based on how important that dimension is and degree of match, comparing those weighted results to various thresholds, and ultimately coming out at the other end with a "family resemblance" judgment: A and B are either hashed into the same bucket, or they aren't.
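As a toy sketch of that comparison (the feature names, weights, and threshold are invented for illustration, not a claim about the real computation):

```python
def feature_match(x, y, feature):
    """Score 0..1 for how closely x and y agree on one feature (crude stub)."""
    return 1.0 if x.get(feature) == y.get(feature) else 0.0

def F(x, y, weights=None, threshold=0.6):
    """Toy family-resemblance judgment: weight per-feature matches, sum, and compare
    against a threshold. True means 'same bucket', i.e. judged to be the same person."""
    weights = weights or {"memories": 0.4, "personality": 0.3,
                          "composition": 0.2, "location": 0.1}
    similarity = sum(w * feature_match(x, y, f) for f, w in weights.items())
    return similarity >= threshold

A = {"memories": "m1", "personality": "p1", "composition": "dna1", "location": "chair"}
B = {"memories": "m1", "personality": "p1", "composition": "dna1", "location": "copier"}
print(F(A, B))   # True: a changed location alone isn't enough to push B out of A's bucket
```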
So, OK. B gets up from the machine, and I expect that while B may be quite different from A, F(B,A) will still sort them both into the same bucket. On that basis, I conclude that B is me, and I therefore expect that I will get up from the machine.
If instead I assume that F(B,A) sorts them into different buckets, then the possibility that I don't get up from that machine starts to seem reasonable... B gets up, but B isn't me.
I just don't expect that to happen, because I have lots of experiences of sitting down and getting up from chairs.
But of course those experiences aren't probative. Sure, my memories of the person who sat down at my desk this morning match my sense of who I am right now, but that doesn't preclude the possibility that those memories are different from what they were before I sat down, and I just don't remember how I was then. Heck, I might be a Boltzmann brain.
I can't disprove any of those ideas, but neither is there any evidence supporting them; there's no reason for those hypotheses to be promoted for consideration in the first place. Ultimately, I believe that I'm the same person I was this morning because it's simplest to assume so; and I believe that if I wake up tomorrow I'll be the same person then as well for the same reason. If someone wants me to seriously consider the possibility that these assumptions are false, it's up to them to provide evidence of it.
Now let's consider C.
Up to a point, C is basically in the same case as B: C gets up from the machine, and I expect that while C may be quite different from A, F(C,A) will still sort them both into the same bucket. As with B, on that basis I expect that I will get up from the machine (a second time).
If instead I assume that F(C,A) sorts them into different buckets, the possibility that I don't get up from that machine a second time starts to seem reasonable... C gets up, but C isn't me.
So, sure. If the duplication process is poor enough that evaluating the key cluster of properties for C gives radically different results than for A, then I conclude that A and C aren't the same person. If A is me, then I sit down at the machine but I don't get up from it.
And, yes, my expectations about the reliability of the duplication process govern things like how I split my wealth, etc.
None of this strikes me as particularly confusing or controversial, though working out exactly what F() comprises is an interesting cognitive science problem.
Oh, and just to be clear, since you brought up quantum-identity: quantum-identity is irrelevant here. If it turns out that my quantum identity has not been preserved over the last 42 years of my existence, that doesn't noticeably alter my confidence that I've been me during that time.
Replies from: DanArmak
↑ comment by DanArmak · 2012-05-26T08:43:04.417Z · LW(p) · GW(p)
I'm a bit embarrassed to have made you write all that out in long form. Because it doesn't really answer my question: all the complexity is hidden in the F function, which we don't know.
You suggest F is to be empirically derived by (in the future) observing other people in the same situations. That's a good strategy for dealing with other people, but should I update towards having the same F as everyone else? As Eliezer said, I'm not perfectly convinced, and I don't feel perfectly safe, because I don't understand the problem that is purportedly being solved, even though I seem to understand the solution.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-05-26T14:37:56.454Z · LW(p) · GW(p)
Given that the cognitive mechanism for computing that two perceptions are of the same concept is a complex evolved system, I find it about as likely that your mechanism for doing so is significantly different from mine as that you digest food in a significantly different way, or that you use a different fundamental principle for extracting information about your surroundings from the light that strikes your body.
But, OK, let's suppose for the sake of the argument that it's true... I have F1(), and you have F2(), and as a consequence one of us might have two experiences E1 and E2 and compute the existence of two agents A1 and A2, while the other has analogous experiences but computes the existence of only one agent A1.
So, OK, we disagree about whether A1 has had both experiences. For example, we disagree about whether I have gotten up from the copier twice, vs. I have gotten up from the copier once and someone else who remembers being me and is similar to me in some ways but isn't actually me got up from the copier once.
So what? Why is it important that we agree?
What might underlie such a concern is the idea that there really is some fact of the matter as to whether I got up once, or twice, or not at all, over and above the specification of what entities got up and what their properties are, in which case one (or both) of us might be wrong, and we don't want to be wrong. Is that the issue here?
Replies from: DanArmak
↑ comment by DanArmak · 2012-05-26T18:32:53.768Z · LW(p) · GW(p)
I wasn't thinking of F like that, but rather like a behavior or value that we can influence by choosing. In that sense, I spoke of 'updating' my F (the way I'd update a belief or change a behavior).
Your model is that F is similar across humans because it's a mostly hardcoded, complex, shared pattern recognition mechanism. I think that description is true, but for people who don't grow up used to cloning or uploading or teleporting, who first encounter it as adults and have to adjust their F to handle the new situation, initial reactions will be more varied than that model suggests.
Some will take every clone, even to different substrates, to be the same as the original for all practical purposes. Others may refuse to acknowledge specific kinds of cloning as people (rejecting patternism), or attach special value to the original, or have doubts about cloning themselves.
What might underlie such a concern is the idea that there really is some fact of the matter
Yes. I fear that there may be, because I do not fully understand the matter of consciousness and expectations of personal experience.
The only nearly (but still not entirely) full and consistent explanation of it that I know of is the one that rejects the continuity of conscious experience over time, and says each moment is experienced separately (each by a different experiencer, or all moments in the universe by the same experiencer; it makes no difference); it's just that every experienced moment comes with memories that create the illusion of being connected to the previous moment of that mind-pattern.
This completely discards the notion of personal identity. I know some people believe in this, but I don't, and don't really want to if there's a way to escape this repugnant conclusion without going against the truth.
So as long as there's an open question, it's a very important one. I want to be very sure of what I'm doing before I let myself be cloned.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-05-26T21:00:56.112Z · LW(p) · GW(p)
Ah, OK.
Sure, if we're concerned that I have individual consciousness which arises in some way we don't understand, such that I might conclude that C is me on the basis of various observable facts when in reality C lacks that essential me-consciousness (either because C possesses someone-else-consciousness, or because C possesses no consciousness at all and is instead a p-zombie, or for some other reason), then I can understand being very concerned about the possibility that C might get treated as though it were me when it really isn't.
I am not in fact concerned about that, but I agree that if you are concerned about it, none of what I'm saying legitimately addresses that concern. (As far as I can tell, neither can anything else, but that's a different question.)
Of course, similar issues arise when trying to address the concern that five minutes from now my consciousness might mysteriously be replaced by someone-else-consciousness, or might simply expire or move elsewhere, leaving me a p-zombie. Or the concern that this happened five minutes ago and I didn't notice.
If you told me that as long as that remained an open question it was important, and you wanted to be very sure about it before you let your body (or mine!) live another five minutes, I'd be very concerned on a practical level.
As it stands, since there isn't actually a cloning machine available for you to refuse the use of, it doesn't really matter for practical purposes.
This completely discards the notion of personal identity.
This strikes me as a strange thing to say, given what you've said elsewhere about accepting that your personal identity -- the referent for "I" -- is a collection of agents that is neither coherent nor unique nor consistent. For my own part I agree with what you said there, which suggests that a notion of personal identity can be preserved even if my brain doesn't turn out to house a single unique coherent consciousness, and I disagree with what you say here, which suggests that it can't.
Replies from: DanArmak
↑ comment by DanArmak · 2012-05-27T13:15:44.158Z · LW(p) · GW(p)
neither can anything else, but that's a different question
Fully answering or dissolving the question - why is there subjective experience and qualia at all? - would I think address my concerns. It would also help if I could either construct a notion of identity through time which somehow tied into subjective experience, or else if it was conclusively proven (by logical argument, presumably) that such a notion can't exist and that the "illusion of memory" is all there is.
For my own part I agree with what you said there, which suggests that a notion of personal identity can be preserved even if my brain doesn't turn out to house a single unique coherent consciousness, and I disagree with what you say here, which suggests that it can't.
As I said, I don't personally endorse this view (which rejects personal identity). I don't endorse it mostly because it is to me a repugnant conclusion. But I don't know of a good model that predicts subjective experience meaningfully and doesn't conflict with anything else. So I mentioned that model, for completeness.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-05-27T14:58:35.363Z · LW(p) · GW(p)
FWIW, I reject the conclusion that the "illusion of memory" is all there is to our judgment of preserved identity, as it doesn't seem to fit my observations. We don't suddenly perceive Sam as no longer being Sam when he loses his memory (although equally clearly memory is a factor). As I said originally, it seems clear to me that there are a lot of factors like this, and we perform some aggregating computation across all of them to make a judgment about whether two experiences are of the same thing.
What I do say is that our judgment of preserved identity, which is a computation (what I labelled F(x,y) above) that takes a number of factors into account, is all there is... there is no mysterious essence of personal identity that must be captured over and above the factors that contribute to that computation.
As for what factors those are, that's a question for cognitive science, which is making progress in answering it. Physical similarity is clearly relevant, although we clearly accept identity being preserved across changes in appearance... indeed, we can be induced to do so in situations where very small variations would prevent that acceptance, as with color phi. Gradualness of change is clearly relevant, though again not absolute. Similarity of behavior at some level of description is relevant, although there are multiple levels available and it's possible for judgments to conflict here. Etc.
Various things can happen that cause individual judgments to differ. My mom might get Alzheimer's and no longer recognize me as the same person she gave birth to, while I continue to identify myself that way. I might get amnesia and no longer recognize myself as the same person my mom gave birth to, while she continues to identify herself that way. Someone else might have a psychotic break and begin to identify themselves as Dave, while neither I nor my mom do. Etc. When that happens, we sometimes allow the judgments of others to substitute for our own judgments (e.g., "Well, I don't remember being this Dave person and I don't really feel like I am, but you all say that I am and I'll accept that.") to varying degrees.
Replies from: DanArmak
↑ comment by DanArmak · 2012-05-27T15:50:29.368Z · LW(p) · GW(p)
I was midway through writing a response, and I had to explain the "illusion of memory" and why it matters. And then I thought about it. And I think I dissolved the confusion I had about it. I now realize it's true but adds up to normality and therefore doesn't lead to a repugnant conclusion.
I think you may have misunderstood what the "illusion" is. It's not about recognizing others. It's about recognizing oneself: specifically, self-identifying as an entity that exists over time (although it changes gradually over time). I self-identify like that, so do most other people.
The "illusion" - which was a poor name because there is no real illusion once properly understood - is: on the level of physics there is no tag that stays attached to my self (body or whatever) during its evolution through time. All that physically exists is a succession of time-instants in each of which there is an instance of myself. But why do I connect that set of instances together rather than some other set? The proximate reason is not that it is a set of similar instances, because I am not some mind that dwells outside time and can compare instances for similarity. The proximate reason is that each instant-self has memories of being all the previous selves. If it had different memories, it would identify differently. ("Memories" take time to be "read" in the brain, so I guess this includes the current brain "state" beyond memories. I am using a computer simile here; I am not aware of how the brain really works on this level.)
So memory, which exists in each instant of time, creates an "illusion" of a self that moves through time instead of an infinite sequence of logically-unconnected instances. And the repugnant conclusion (I thought) was that there really was no self beyond the instant, and therefore things that I valued which were not located strictly in the present were not in some sense "mine"; I could as well value having been happy yesterday as someone else having been happy yesterday, because all that was left of it today was memories. In particular, reality could have no value beyond that which false memories could provide, including e.g. false knowledge.
However, now I am able to see that this does in fact add up to normality. Not just that it must do so (like all things) but the way it actually does so. Just as I have extension in space, I have extension in time. Neither of these things makes me an ontologically fundamental entity, but that doesn't prevent me from thinking of myself as an entity, a self, and being happy with that. Nature is not mysterious.
Unfortunately, I still feel some mystery and lack of understanding regarding the nature of conscious experience. But given that it exists, I have now updated towards "patternism". I will take challenges like the Big Universe more seriously, and I would more readily agree to be uploaded or cloned than I would have this morning.
Thank you for having this drawn-out conversation with me so I could come to these conclusions!
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-05-27T15:54:57.677Z · LW(p) · GW(p)
You're welcome.
comment by Stuart_Armstrong · 2012-05-24T10:17:43.794Z · LW(p) · GW(p)
Anders has a post on the subject: http://www.aleph.se/andart/archives/2012/04/how_many_persons_can_there_be_brain_reconstruction_and_big_numbers.html
Replies from: jacob_cannell
↑ comment by jacob_cannell · 2012-05-24T17:01:13.158Z · LW(p) · GW(p)
Interesting, thanks. He brings up some good points which I partly agree with, but he seems to be only considering highly exact recreations, which I would agree are unfeasible. We don't need anything near exactness for success though.
This leads to a first tentative argument against reconstruction based on external data: we are acquiring potentially personality-affecting information at a fairly high rate during our waking life, yet not revealing information at the same high rate. The ratio seems to be at least 1000:1.
True, but much of the point of our large sensory cortices is to compress all the incoming sensory information down into a tiny abstract symbolic stream appropriate for efficient prediction computations.
A central point would be the inner voice: our minds are constantly generating internal output sentences, only a fraction of which are externally verbalized. The information content of the inner voice is probably some of the most crucial defining information for reconstructing thoughts, and it is very compact.
That's my short reply on short notice. I'll update on Anders' points and post a longer reply link here later.
comment by lsparrish · 2012-05-24T14:06:46.764Z · LW(p) · GW(p)
I kind of wonder if there might be better ways of retrieval than simulation. There's a lot of interstellar dust out there, and an expanding sphere of light from the earth for every moment of history is interacting (very weakly) with that dust.
Thus if we were to set up something like a Dyson shell at 50 AU from the sun designed to maximize computational energy collection and also act as an extremely powerful telescope, I have to wonder if we could collect enough data from interstellar space to produce an accurate recording of human history. New data about each moment of history would constantly be coming in from ever more distant dust.
If a solar system centered version turns out to be too weak, that could be a motive (as if one is needed) for colonizing the galaxy. Perhaps converting each star of the galaxy into a hyper-efficient computer (which would take tens if not hundreds of thousands of years to reach them all) would enable us to effectively analyze dust particles from the intergalactic void. Or perhaps by targeting large stars further out on the spiral arms as power sources for telescopes, we could get a picture with reduced interference.
Replies from: lsparrish
↑ comment by lsparrish · 2012-05-24T14:10:44.227Z · LW(p) · GW(p)
Could this all be converted into actual history (and thus recreatable minds) without basically instantiating the pain that happened? That's a bit hazy for me. The physics extrapolations would involve a complex set of heuristics for comparative analysis of diverse data streams, and trying to narrow down the causal chain based on new data as it becomes available. However it wouldn't exactly be simulation in the sense we're used to thinking of it.
Obviously when resurrecting the minds, you would want to avoid creating traumatized and anti-social individuals. There would probably be an empirically validated approach to making humans come back "whole" and capable of integrating into the galactic culture while retaining their memories and experiences to sufficient degree that they are indisputably the same person.
Replies from: jacob_cannell
↑ comment by jacob_cannell · 2012-05-24T18:24:56.181Z · LW(p) · GW(p)
Recreating accurate historical minds entails recreating accurate history, complete with traumatized and anti-social individuals. We should be able to 'repair' and integrate them into future posthuman galactic culture after they are no longer constrained to historical accuracy (i.e., after death), but not so much before. There may be some leeway, but each person's history is entwined with the global history. You can't really change too much without changing everything and thus getting largely different people.
comment by Jack · 2012-05-24T19:39:50.584Z · LW(p) · GW(p)
This is perhaps related to my favoring the unorthodox 1/2 answer to the Sleeping Beauty problem, but is anyone else pretty sure that simulating a suffering person doesn't change the amount of suffering in the world? This is not an argument that "simulations don't have feelings"-- I just think that the number of copies of you doesn't have moral significance (so long as that number is at least 1). I'm pretty happy right now-- I don't think the world would be improved significantly if there were a server somewhere running a few hundred exact copies of my brain state and sensory input. I consider my identity to include all exactly similar simulations of me, and the quantity of those simulations in no way impacts my utility function (until you put us in a decision problem where the number of copies of me actually matters). I am not concerned about token persons; I'm concerned about the types. What people care about is that there be some future instantiation of themselves and that that instantiation be happy.
Historical suffering already happened and making copies of it doesn't make it worse (why would the time at which a program is run possibly matter morally?). Moreover, it's not clear why the fact that historical people no longer exist should make a bit of difference in our wanting to help them. In a timeless sense they will always be suffering-- what we can do is instantiate an experienced end to that suffering (a peaceful afterlife).
Replies from: CarlShulman, Kaj_Sotala
↑ comment by CarlShulman · 2012-05-24T20:39:57.989Z · LW(p) · GW(p)
If you combine this with a Big World (e.g. eternal inflation) where all minds get instantiated then nothing matters. But you would still care about what happens even if you believed this is a Big World.
Replies from: Jack
↑ comment by Jack · 2012-05-24T20:56:42.899Z · LW(p) · GW(p)
Why shouldn't we be open to the possibility that a Big World renders all attempts at consequentially altruistic behavior meaningless?
Even if I'm wrong that single instantiation is all that matters, it seems plausible that what we should be concerned with is not the frequency with which happy minds are instantiated but the proportion of "futures" in which suffering has been relieved.
↑ comment by Kaj_Sotala · 2012-05-24T20:17:36.143Z · LW(p) · GW(p)
Replies from: Jack
↑ comment by Jack · 2012-05-24T21:08:24.968Z · LW(p) · GW(p)
Hmm. I don't really disagree that qualia is duplicated; it's more that I'm not sure I care about qualia instantiations rather than types of qualia (confusing this, of course, is uncertainty about what is meant by qualia). His ethical arguments I find pretty unpersuasive, but the epistemological argument requires more unpacking.
comment by JenniferRM · 2012-05-24T16:42:13.244Z · LW(p) · GW(p)
See also: Neuroimaging as alternative/supplement to cryonics?
Replies from: jacob_cannell
↑ comment by jacob_cannell · 2012-05-24T17:35:55.963Z · LW(p) · GW(p)
Ahh thanks. I agree with your train of thought.
I thought the same slippery-slope argument for identity from patternism entailed that details are unimportant, but that view is perhaps less common here than I would have thought.
Replies from: JenniferRM
↑ comment by JenniferRM · 2012-05-24T22:32:15.770Z · LW(p) · GW(p)
Is "patternism" a private word that you use to refer to some constellation of philosophic tendencies you've personally observed, or is it a coherent doctrine described by others (preferably used by proponents to self-describe) in a relatively public way? It sounds like something you're using in a roughly descriptive way based on private insights, but google suggests a method in comparative religion or Goertzel's theory of that name...
Replies from: jacob_cannell
↑ comment by jacob_cannell · 2012-05-25T15:10:22.852Z · LW(p) · GW(p)
I thought I first heard that term from Kurzweil in TSIN or his earlier work, but I've read or skimmed some of Goertzel's writing, so perhaps I picked it up from there. I'm realizing the term probably has little meaning in philosophy, but suggests computationalism and/or functionalism.
Replies from: JenniferRM
↑ comment by JenniferRM · 2012-05-25T20:06:59.893Z · LW(p) · GW(p)
For politico-philosophical stuff, I kind of like the idea of taking the name that people who half-understand a mindset apply from a distance to distinguish it from all the other mindsets that they half-understand... in which case the best term I know is "cybernetic totalism".
However, in this case the discussion isn't a matter of general mindset but actually is a falsifiable scientific/engineering question from within the mindset: how substrate independent is the mind? My sense is that biologists willing to speculate publicly think the functionality of the mind is intimately tangled up with the packing arrangements of DNA and the precise position of receptors in membranes and so on. I suspect that it's higher than that, but also I don't think enough people understand the pragmatics of substrate independence for there to be formal politico-philosophic labels for people who cherish one level of abstraction versus another.
Replies from: jacob_cannell
↑ comment by jacob_cannell · 2012-05-25T20:35:05.322Z · LW(p) · GW(p)
I remember and loved Jaron Lanier's piece where he coined that term, and I considered myself a cybernetic totalist (and still do). It just doesn't exactly roll off the tongue.
At some point in college I found Principia Cybernetica, and I realized I had found my core philosophical belief set. I'm not sure what you call that worldview though, perhaps systemic evolutionary cyberneticism?
Patternist at least conveys that the fundamental concept is information patterns.
However, in this case the discussion isn't a matter of general mindset but actually is a falsifiable scientific/engineering question from within the mindset: how substrate independent is the mind?
Yes!
My sense is that biologists willing to speculate publicly think the functionality of the mind is intimately tangled up with ...
They may, and they may or may not be correct, but in doing so they would be speculating outside of their domain of expertise.
The question of which level of abstraction is relevant is also a scientific/engineering question, and computational neuroscience already has much to say on that, in terms of what it takes to create simulations and/or functional equivalents of brain components.
comment by Kaj_Sotala · 2012-05-24T08:18:28.931Z · LW(p) · GW(p)
Suffering Criticism: An ancestral simulation would recreate a huge amount of suffering.
Response: Humans suffer and live in a world that seems to suffer greatly, and yet very few humans prefer non-existence over their suffering. Evolution culls existential pessimists.
Recreating a past human will recreate their suffering, but it could also grant them an afterlife filled with tremendous joy. The relatively small, finite suffering may not add up to much in this consideration. The initial suffering could even enhance, by contrast, the subsequent elevation to a joyful state, but this is speculative.
Even if the future joy of the recreated past human would outweigh the suffering (s)he endured while being recreated, all else being equal it would be even better to create, from scratch, entirely new kinds of people who wouldn't need to suffer at all.
Replies from: A4FB53AC, DanArmak, Ghatanathoah
↑ comment by A4FB53AC · 2012-05-24T08:48:23.402Z · LW(p) · GW(p)
I know I prefer to exist now. I'd also like to survive for a very long time, indefinitely. I'm also not even sure the person I'll be 10 or 20 years from now will still be significantly "me". I'm not sure the closest projection of my self on a system incapable of suffering at all would still be me. Sure I'd prefer not to suffer, but over that, there's a certain amount of suffering I'm ready to endure if I have to in order to stay alive.
Then on the other side of this question you could consider creating new sentiences who couldn't suffer at all. But why would these have a priority over those who exist already? Also, what if we created people who could suffer, but who'd be happy with it? Would such a life be worthwhile? Is the fact that suffering is bad something universal, or a quirk of terran animals' neurology? Pain is both sensory information and the way this information is used by our brain. Maybe we should distinguish between the information and the unpleasant sensation it brings to us. Eliminating the second may make sense, so long as you know chopping your leg off is most often not a good idea.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2012-05-24T10:08:09.690Z · LW(p) · GW(p)
Then on the other side of this question you could consider creating new sentiences who couldn't suffer at all. But why would these have a priority over those who exist already?
From the point of view of those who'll actually create the minds, it's not a choice between somebody who exists already and a new mind. It's the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.
One might also invoke Big Universe considerations to say that even the "new" kind of a mind has already existed in some corner of the universe (maybe as a Boltzmann brain), so they'll regardless be choosing between two kinds of minds that have existed once. Which just goes to show that the whole "this mind has existed once, so it should be given priority over one that hasn't" argument doesn't make a lot of sense.
Maybe we should distinguish between the information and the unpleasant sensation it brings to us. Eliminating the second may make sense, so long as you know chopping your leg off is most often not a good idea.
Yes. See also David Pearce's notion of beings who've replaced pain and pleasure with gradients of pleasure - instead of having suffering as a feedback mechanism, their feedback mechanism is a lack of pleasure.
Replies from: jacob_cannell, FeepingCreature, lsparrish, Ghatanathoah, A4FB53AC
↑ comment by jacob_cannell · 2012-05-24T15:07:07.973Z · LW(p) · GW(p)
Then on the other side of this question you could consider creating new sentiences who couldn't suffer at all. But why would these have a priority over those who exist already?
From the point of view of those who'll actually create the minds, it's not a choice between somebody who exists already and a new mind. It's the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.
I'm proposing to create these minds, if I survive. Many will want this. If we have FAI, it will help me, by its definition.
I would rather live in a future afterlife that has my grandparents in it than your 'better designs'. Better by whose evaluation? I'd also say that my sense of 'better' outweighs any other sense of 'better' - my terminal values are my own.
One might also invoke Big Universe considerations to say that even the "new" kind of a mind has already existed in some corner of the universe
I couldn't care less about some corner of the universe that is not causally connected to my corner. The big world stuff isn't very relevant: this is a decision between two versions of our local future, one with people we love in it, and one without.
↑ comment by FeepingCreature · 2012-05-24T20:36:26.536Z · LW(p) · GW(p)
Those who will actually create the minds will want to rescue people in the past, so they can reasonably anticipate being rescued themselves. Or differently put, those who create the minds will want the right answer to "should I rescue people or create new people" to be "rescue people".
↑ comment by lsparrish · 2012-05-24T14:24:43.635Z · LW(p) · GW(p)
There's a big difference between recreating an intelligence that exists/existed large numbers of lightyears away due to sheer statistical chance, and creating one that verifiably existed with high probability in your own history. I suspect the latter are enough more interesting to be created first. We might move on to creating the populations of interesting alternate histories, as well as randomly selected worlds and so forth down the line.
Beings who only experience gradients of pleasure might be interesting, but since they already likely have access to immortality wherever they exist (being transhuman / posthuman and all) it seems like there is less utility to trying to resurrect them as it would only be a duplication. Naturally evolved beings lacking the capacity for extreme suffering could be interesting, but it's hard to say how common they would be throughout the universe -- thus it would seem unfair to give them a priority in resurrection compared to naturally evolved ones.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2012-05-24T15:00:51.581Z · LW(p) · GW(p)
There's a big difference between recreating an intelligence that exists/existed large numbers of lightyears away due to sheer statistical chance, and creating one that verifiably existed with high probability in your own history.
What difference is that?
Beings who only experience gradients of pleasure might be interesting, but since they already likely have access to immortality wherever they exist (being transhuman / posthuman and all) it seems like there is less utility to trying to resurrect them as it would only be a duplication.
I don't understand what you mean by "only a duplication".
Naturally evolved beings lacking the capacity for extreme suffering could be interesting, but it's hard to say how common they would be throughout the universe -- thus it would seem unfair to give them a priority in resurrection compared to naturally evolved ones.
This doesn't make any sense to me.
Suppose that you were to have a biological child in the traditional way, but could select whether to give them genes predisposing them to extreme depression, hyperthymia, or anything in between. Would you say that you should make your choice based on how common each temperament was in the universe, and not based on the impact to the child's well-being?
Replies from: lsparrish
↑ comment by lsparrish · 2012-05-24T19:14:56.522Z · LW(p) · GW(p)
What difference is that?
There's a causal connection in one case that is absent in the other, and a correspondingly higher distribution in the pasts of similar worlds.
I don't understand what you mean by "only a duplication".
Duplication of effort as well as effect with respect to other parts of the universe. Meaning you are increasing the numbers of immortals and not granting continued life to those who would otherwise be deprived of it.
Suppose that you were to have a biological child in the traditional way, but could select whether to give them genes predisposing them to extreme depression, hyperthymia, or anything in between. Would you say that you should make your choice based on how common each temperament was in the universe, and not based on the impact to the child's well-being?
We aren't talking about the creation of random new lives as a matter of reproduction, we're talking about the resurrection of people who have lived substantial lives already as part of the universe's natural existence. If you want to resurrect the most people (out of those who have actually existed and died) in order to grant them some redress against death, you are going to have to recreate people who, for physically plausible reasons, would have actually died.
↑ comment by Ghatanathoah · 2013-11-29T02:31:42.082Z · LW(p) · GW(p)
It's the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.
If the modeled mind is the same person as the mind that existed once, it is clearly the better choice. And by same person I of course mean that it is related to a preexisting mind in certain ways.
One might also invoke Big Universe considerations to say that even the "new" kind of a mind has already existed in some corner of the universe (maybe as a Boltzmann brain), so they'll regardless be choosing between two kinds of minds that have existed once. Which just goes to show that the whole "this mind has existed once, so it should be given priority over one that hasn't" argument doesn't make a lot of sense.
We seem to have a moral intuition that things that occur in far distant parts of the universe that have no causal connection to us aren't morally relevant. You seem to think that this intuition is a side-effect of the population ethics principle you seem to believe in (the Impersonal Total Principle). However, I would argue that it is a direct, terminal value.
Evidence for my view is the fact that we tend to also discount the desires of causally unconnected people in distant parts of the universe in nonpopulation ethics situations. For instance, when discussing whether to pave over a forest, we think the desires of those who live near the forest should be considered. However, we do not think the desires of the vast amount of Forest Maximizing AIs who doubtless exist out there should be considered, even though there is likely some part of the Big World they exist in.
Minds that existed once, and were causally connected to our world in certain ways, should be given priority over minds that have only existed in distant, causally unconnected parts of the Big World.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2013-11-29T20:50:21.731Z · LW(p) · GW(p)
If the modeled mind is the same person as the mind that existed once, it is clearly the better choice. And by same person I of course mean that it is related to a preexisting mind in certain ways.
"Clearly the better choice" is stating your conclusion rather than making an argument for it.
We seem to have a moral intuition that things that occur in far distant parts of the universe that have no causal connection to us aren't morally relevant. You seem to think that this intuition is a side-effect of the population ethics principle you seem to believe in (the Impersonal Total Principle). However, I would argue that it is a direct, terminal value.
Evidence for my view is the fact that we tend to also discount the desires of causally unconnected people in distant parts of the universe in nonpopulation ethics situations. For instance, when discussing whether to pave over a forest, we think the desires of those who live near the forest should be considered. However, we do not think the desires of the vast amount of Forest Maximizing AIs who doubtless exist out there should be considered, even though there is likely some part of the Big World they exist in.
There's an obvious reason for discounting the preferences of causally unconnected entities: if they really are causally unconnected, that means that they can't find out about our decisions and that the extent to which their preferences are satisfied isn't therefore affected by anything that we do.
One could of course make arguments relating to acausal trade, or suggest that we should try to satisfy even the preferences of beings who never found out about it. But to do that, we would have to know something about the distribution of preferences in the universe. And there our uncertainty is so immense that it's better to just focus on the preferences of the humans here on Earth.
But in any case, these kinds of considerations don't seem relevant for the "if we create new minds, should they be similar to minds that have already once existed" question. It's not like the mind that we're seeking to recreate already exists within our part of the universe and has a preference for being (re-)created, while a novel mind that also has a preference for being (re-)created exists in some other part of the universe. Rather, our part of the universe contains information that can be used for creating a mind that resembles an earlier mind, and it also contains information that can be used for creating a more novel mind. When the decision is made, both minds are still non-existent in our part of the universe, and existent in some other.
Replies from: Ghatanathoah
↑ comment by Ghatanathoah · 2013-11-30T02:26:43.328Z · LW(p) · GW(p)
"Clearly the better choice" is stating your conclusion rather than making an argument for it.
I assumed that the rest of what I wrote made it clear why I thought it was clearly the better choice.
There's an obvious reason for discounting the preferences of causally unconnected entities: if they really are causally unconnected, that means that they can't find out about our decisions
If that were the reason, then people would feel the same about causally connected entities who can't find out about our decisions. But they don't. People generally consider it bad to spread rumors about people, even if those people never find out. We also consider it immoral to ruin the reputation of dead people, even though they can't find out.
I think a better explanation for this intuition is simply that we have a bedrock moral principle to discount dissatisfied preferences unless they are about a person's own life. Parfit argues similarly here.
This principle also explains other intuitive reactions people have. For instance, in this problem given by Stephen Landsburg, people tend to think the rape victim has been harmed, but that McCrankypants and McMustardseed haven't been. This can be explained if we consider that the preference the victim had was about her life, whereas the preference of the other two wasn't.
Just as we discount preference violations on a personal level that aren't about someone's own life, so we can discount the existence of distant populations that do not impact the one we are a part of.
and that the extent to which their preferences are satisfied isn't therefore affected by anything that we do.
The fact that someone never discovers that their preference is unsatisfied doesn't make it any less unsatisfied. Preferences are about desiring one world-state over another, not about perception. If someone makes the world different than the way you want it to be, then your preference is unsatisfied, even if you never find out.
Of course, as I said before, if said preference is not about one's own life in some way we can probably discount it.
It's not like the mind that we're seeking to recreate already exists within our part of the universe and has a preference for being (re-)created, while a novel mind that also has a preference for being (re-)created exists in some other part of the universe.
Yes it does, if you think four-dimensionally. The mind we're seeking to recreate exists in our universe's past, whereas the novel mind does not.
People sometimes take actions because a dead friend or relative would have wanted them to. We also take action to satisfy the preferences of people who are certain to exist in the future. This indicates that we do indeed continue to value preferences that aren't in existence at this very moment.
↑ comment by A4FB53AC · 2012-05-24T12:07:41.343Z · LW(p) · GW(p)
It's the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.
Still, I wonder: what could I do to raise my probability of being resurrected, if worst comes to worst and I can't manage to stay alive to protect and ensure the posterity of my own current self? Especially if I am not one of those better minds (better according to which values, though?).
Replies from: Kaj_Sotala, jacob_cannell↑ comment by Kaj_Sotala · 2012-05-24T12:51:52.028Z · LW(p) · GW(p)
I realize that this probably won't be very useful advice for you, but I'd recommend working on letting go of the sense of having a lasting self in the first place. Not that I fully alieve that yet either, but the closer I've gotten to always alieving it, the less I've felt like I have reason to worry about (not) living forever. Me possibly dying in forty years is no big deal if I don't even think I'm the same person tomorrow, or five minutes from now.
Replies from: Ghatanathoah↑ comment by Ghatanathoah · 2013-11-29T02:39:12.830Z · LW(p) · GW(p)
Me possibly dying in forty years is no big deal if I don't even think I'm the same person tomorrow, or five minutes from now.
You're confusing two meanings of the word "the same." When we refer to a person as "the same," that doesn't mean they haven't changed; it means that they've changed in some ways, but not in others.
If you define "same" as "totally unchanging" then I don't want to be the same person five minutes from now. Being frozen in time forever so I'd never change would be tantamount to death. There are some ways I want to change, like acquiring new skills and memories.
But there are other ways I don't want to change. I want my values to stay the same, and I want to remember my life. If I change in those ways, that is bad. It doesn't matter whether this happens in an abrupt way, like dying, or a slow way, like an FAI gradually turning me into a different person.
If people change in undesirable ways, then it is a good thing to restore them through resurrection. I want to be resurrected if I need to be. And I want you to be resurrected too. Because the parts of you that shouldn't change are valuable, even if you've convinced yourself they're not.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2013-11-29T20:22:37.552Z · LW(p) · GW(p)
You're confusing two meanings of the word "the same." When we refer to a person as "the same," that doesn't mean they haven't changed; it means that they've changed in some ways, but not in others.
Sure, I'm aware of that. But the bit that you quoted didn't make claims about what "the same" means in any objective sense - it only said that if you choose your definition of "the same" appropriately, then you can stop worrying about your long-term survival and thus feel better. (At least that's how it worked for me: I used to worry about my long-term survival a lot more when I still found personal identity to be a meaningful concept.)
↑ comment by jacob_cannell · 2012-05-24T15:13:40.901Z · LW(p) · GW(p)
I've pondered this some, and it seems that the best strategy in distant historical eras was simply to be famous, and more specifically to write an autobiography. Having successful descendants also seems to grow in importance as we get into the modern era. For us today we have cryonics of course, and being successful/famous/wealthy is obviously viable, but blogging is probably to be recommended as well.
↑ comment by DanArmak · 2012-05-25T20:12:59.247Z · LW(p) · GW(p)
The first people to become immortal and able to simulate others will want to simulate ("revive") their own loved ones who died just before immortality was developed.
These people, once resurrected and integrated into society, will themselves want to resurrect their own loved ones who died a little earlier than that.
And so on until most, if not all, of humanity is simulated.
Replies from: jacob_cannell↑ comment by jacob_cannell · 2012-05-25T22:19:29.192Z · LW(p) · GW(p)
Yes this.
An interesting consequence of this is historical drift: my recreation of my father would differ somewhat from reality, my recreation of my grandfather more so, and so on. This wouldn't be a huge concern for any of us, though, as we wouldn't be able to tell the difference. As long as the reconstructions pass interpersonal Turing tests, all is good.
↑ comment by Ghatanathoah · 2013-11-29T02:06:36.204Z · LW(p) · GW(p)
I am disappointed that this has not spawned more principled objections. Morally speaking, creating people from scratch is far, far worse than resurrecting people who have already existed, even if those people experience some suffering in the course of the resurrection.
Your entire argument seems to be based on the "Impersonal Total Principle" (ITP), an ethical principle which states that all that matters is the total amount of positive and negative experiences in the world; other factors, like the identity of the people having those experiences, are not ethically important. I consider this principle to be both wrong and gravely immoral, and will explain why in detail below.
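To be concrete about what I am rejecting, the principle can be written roughly as follows (an illustrative formalization of the usual Parfitian reading; the notation is my own gloss, nothing more):

$$V(\text{outcome}) \;=\; \sum_{i \,\in\, \text{all lives ever lived}} u_i$$

where $u_i$ is the net balance of positive over negative experience in life $i$. The sum is indifferent to which particular people the $u_i$ belong to, and that indifference is what makes the principle "impersonal."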
When developing moral principles, what we typically do is take certain moral intuitions we have, assume that they are being generated by some sort of overarching moral principle, and then try to figure out what that principle is. If the principle is correct (or at least a step in the right direction), then our other moral intuitions will probably also generate it; if it isn't, they probably won't.
The ITP was developed by Derek Parfit as a proposed solution to the Nonidentity Problem. It happens to give the intuitively correct answer to that problem, but it generates so many wrong answers in so many other scenarios that I believe it is obviously wrong.
For instance, the standard Nonidentity Problem involves a choice where one child's life will be worse than the other's because of reduced capabilities. I came up with a version of the problem where the children have the same capabilities, but one has a worse life than the other because they have more ambitious preferences that are harder to satisfy. In that instance it doesn't seem obvious at all to me that we should choose the one with the better life. Plus, imagine an iteration of the NIP where the choice is between unhealthy triplets and a healthy child. I think most people would agree that a woman who picks the unhealthy triplets is doing something even worse than the woman who picks one unhealthy child in the original NIP. But according to the ITP she has done something better.
Then there are issues like the fact that the ITP suggests there's nothing wrong with someone dying if a new person is created to replace them who will have as good a life as they did. And of course, there is the Repugnant Conclusion.
But I think the nail in the coffin for the ITP is that people seem to accept the Sadistic Conclusion. People regularly harm themselves and others in order to avoid having more children, and they seem to regard this as a moral duty rather than a selfish choice.
So the ITP is wrong. What do I propose to replace it with? Not average utilitarianism; that's just as crazy. Rather, I'd replace it with a principle that a small population with higher utility per person is generally better than a large population with lower utility per person, even if the larger population's total utility is greater.
Now, I understand you're a personal identity skeptic. That's okay. I'm perfectly willing to translate this principle into phrasing that makes no mention of "persons" or of people being "the same." Here goes: It is better to create sets of experiences that are linked in certain ways (i.e., memory, personality, etc.). It is better to create experiences that are linked in this way, even if the total amount of positive experiences is lower because of this. It may even be better to create some amount of negative experiences if doing so allows you to make sure more of the experience sets are linked in certain ways.
So there you have it. I completely and totally reject the moral principle you base your argument on. It is a terrible principle that does not derive from human moral intuitions at all. Everyone should reject it.
I also want to respond to the other points you've made in this thread but this is getting long, so I'll reply to them separately.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2013-11-29T20:14:29.884Z · LW(p) · GW(p)
Your entire argument seems to be based on the "Impersonal Total Principle" (ITP), an ethical principle which states that all that matters is the total amount of positive and negative experiences in the world; other factors, like the identity of the people having those experiences, are not ethically important.
Your wording suggests that I would assume the ITP, which would then imply rejecting the value of identity. But actually my reasoning goes in the other direction: since I don't find personal identity to correspond to anything fundamental, my rejection of it causes me to arrive at something ITP-like. But note that I would not say that my rejection of personal identity necessarily implies ITP: "the total amount of positive and negative experience is all that matters" is a much stronger claim than a mere "personal identity doesn't matter". I have only made the latter claim, not the former.
That said, I'm not necessarily rejecting the ITP either. It does seem like a relatively reasonable claim, but that's more because I'm skeptical about the alternatives to the ITP than because the ITP itself feels all that convincing.
I came up with a version of the problem where the children have the same capabilities, but one has a worse life than the other because they have more ambitious preferences that are harder to satisfy. In that instance it doesn't seem obvious at all to me that we should choose the one with the better life.
To me, ambitious preferences sound like a possible good thing because they might lead to the world becoming better off on net. "The reasonable man adapts himself to his environment. The unreasonable man adapts his environment to himself. All progress is therefore dependent upon the unreasonable man." That does provide a possible reason to prefer the child with the more ambitious preferences, if the net outcome for the world as a whole could be expected to be positive. But if it can't, then it seems obvious to me that we should prefer creating the non-ambitious child.
Then there are issues like the fact that the ITP suggests there's nothing wrong with someone dying if a new person is created to replace them who will have as good a life as they did.
Even if we accepted the ITP, we would still have good reasons to prefer not killing existing people: namely that society works much better and with much lower levels of stress and fear if everyone has strong guarantees that society puts a high value on preserving their lives. Knowing that you might be killed at any moment doesn't do wonders for your mental health.
And of course, there is the Repugnant Conclusion.
I stopped considering the Repugnant Conclusion a problem after reading John Maxwell's, Michael Sullivan's and Eliezer's comments on your "Mere Cable Channel Addition Paradox" post. And even if I hadn't been convinced by those, I also lean strongly towards negative utilitarianism, which also avoids the Repugnant Conclusion.
Here goes: It is better to create sets of experiences that are linked in certain ways (i.e., memory, personality, etc.). It is better to create experiences that are linked in this way, even if the total amount of positive experiences is lower because of this. It may even be better to create some amount of negative experiences if doing so allows you to make sure more of the experience sets are linked in certain ways.
While this phrasing indeed doesn't make any mention of "persons", it still seems to me primarily motivated by a desire to create a moral theory based on persons. If not, demanding the "link" criterion seems like an arbitrary decision.
Replies from: Ghatanathoah↑ comment by Ghatanathoah · 2013-11-30T02:04:55.242Z · LW(p) · GW(p)
Your wording suggests that I would assume the ITP, which would then imply rejecting the value of identity. But actually my reasoning goes in the other direction: since I don't find personal identity to correspond to anything fundamental, my rejection of it causes me to arrive at something ITP-like. But note that I would not say that my rejection of personal identity necessarily implies ITP: "the total amount of positive and negative experience is all that matters" is a much stronger claim than a mere "personal identity doesn't matter". I have only made the latter claim, not the former.
I have the same reductionist views of personal identity as you. I completely agree that it isn't ontologically fundamental or anything like that. The difference between us is that when you concluded it wasn't ontologically fundamental you stopped caring about it. I, by contrast, just replaced the symbol with what it stood for. I figured out what it was that we meant by "personal identity" and concluded that that was what I had really cared about all along.
That does provide a possible reason to prefer the child with the more ambitious preferences, if the net outcome for the world as a whole could be expected to be positive. But if it can't, then it seems obvious to me that we should prefer creating the non-ambitious child.
I can't agree with this. If I had the choice between a wireheaded child who lived a life of perfect passive bliss and a child who spent their life scientifically studying nature (but lived a hermit-like existence so their discoveries wouldn't benefit others), I would pick the second child, even if they endured many hardships the wirehead would not. I would also prefer not to be wireheaded, even if the wireheaded me would have an easier life.
When considering creating people who have different life goals, my first objective is, of course, making sure that both of those people would live lives worth living. But if the answer is yes for both of them, then my decision would be based primarily on whose life goals were more in line with my ideals about what humanity should try to be, rather than on whose life would be easier.
I suppose I am advocating something like G.E. Moore's Ideal Utilitarianism, except instead of trying to maximize ideals directly I am advocating creating people who care about those ideals and then maximizing their utility.
Even if we accepted the ITP, we would still have good reasons to prefer not killing existing people: namely that society works much better and with much lower levels of stress and fear if everyone has strong guarantees that society puts a high value on preserving their lives.
I agree, but I also think killing and replacing is wrong in principle.
I stopped considering the Repugnant Conclusion a problem after reading John Maxwell's, Michael Sullivan's and Eliezer's comments on your "Mere Cable Channel Addition Paradox" post.
I did too, but then I realized I was making a mistake. I realized that the problem with the RC was in its premises, not its practicality. I ultimately realized that the Mere Addition Principle was false, and that this is what is wrong with the RC.
While this phrasing indeed doesn't make any mention of "persons", it still seems to me primarily motivated by a desire to create a moral theory based on persons.
No, it is motivated by a desire to create a moral theory that accurately maps what I morally value, and I consider the types of relationships we commonly refer to as "personal identity" to be more morally valuable than pretty much anything. Would you rather I devise a moral theory based on stuff I didn't consider morally valuable?
If not, demanding the "link" criterion seems like an arbitrary decision.
You can make absolutely anything sound arbitrary if you use the right rhetoric. All you have to do is take the thing that I care about, find a category it shares with things I don't care about nearly as much, and then ask me why I am arbitrarily caring for one thing over the other even though they are in the same category.
For instance, I could say "Pain and pleasure are both brain states. It's ridiculously arbitrary to care about one brain state over another, when they are all just states that occur in your brain. You should be more inclusive and less arbitrary. Now please climb into that iron maiden."
I believe personal identity is one of the cornerstones of morality, whether you call it by that name, or replace the name with the things it stands for. I don't consider it arbitrary at all.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2013-11-30T11:17:05.756Z · LW(p) · GW(p)
No, it is motivated by a desire to create a moral theory that accurately maps what I morally value, and I consider the types of relationships we commonly refer to as "personal identity" to be more morally valuable than pretty much anything. Would you rather I devise a moral theory based on stuff I didn't consider morally valuable?
Of course you should devise a moral theory based on what you consider morally valuable; it just fails to be persuasive to me, since it appeals to moral intuitions that I do not share (and which thus strike me as arbitrary).
Continued debate in this thread doesn't seem very productive to me, since all of our disagreement seems to come down to differing sets of moral intuitions / terminal values. So there's not very much to be said beyond "I think that X is valuable" and "I disagree".
Replies from: Ghatanathoah↑ comment by Ghatanathoah · 2013-12-06T12:46:00.945Z · LW(p) · GW(p)
Continued debate in this thread doesn't seem very productive to me, since all of our disagreement seems to come down to differing sets of moral intuitions / terminal values.
You're probably right.
EDIT: However, I do think you should consider whether your moral intuitions really are different, or whether you've somehow shut some important intuitions off by using the "make anything arbitrary" rhetorical strategy I described earlier.
Also, I should clarify that while I disapprove of the normative conclusions you've drawn from personal identity skepticism, I don't see any inherent problem with using it to improve your mental health in the way you described (when you said that it decreased your anxiety about death). If your emotional systems are out of control and torturing you with excessive anxiety I don't see any reason why you shouldn't try a mental trick like that to treat it.
comment by Pfft · 2012-05-24T15:30:11.093Z · LW(p) · GW(p)
I also wonder whether this is computationally feasible. If you literally had to search through all possible brains until you found the right one, then you would never get anywhere (for even a single person, let alone history). But it's not clear whether any more efficient algorithm exists: inferring the neural net weights from a recorded input-output trace seems like it could be a hard problem.
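To make the worry concrete, here is a toy sketch of my own (the tanh architecture, sizes, and training details are arbitrary illustrations, not claims about brains): generate an input-output trace from a hidden "ground truth" network, then try to recover a consistent network by gradient descent rather than brute-force search, and compare both the behaviour and the weights of the result.

```python
# Toy sketch: can we recover a network consistent with a recorded
# input-output trace without brute-force search over all weight settings?
# Everything here (architecture, sizes, learning rate) is an arbitrary
# illustration, not a claim about actual brains.
import numpy as np

rng = np.random.default_rng(0)

def forward(params, x):
    """One-hidden-layer tanh network."""
    W1, b1, W2, b2 = params
    return np.tanh(x @ W1 + b1) @ W2 + b2

def init(n_in=4, n_hidden=16, n_out=2):
    return [rng.normal(scale=0.5, size=(n_in, n_hidden)), np.zeros(n_hidden),
            rng.normal(scale=0.5, size=(n_hidden, n_out)), np.zeros(n_out)]

# The "historical evidence": an input-output trace of an unknown network.
true_params = init()
X = rng.normal(size=(512, 4))
Y = forward(true_params, X)

# Fit a fresh network to the trace by plain gradient descent.
params = init()
lr = 0.1
for _ in range(10000):
    W1, b1, W2, b2 = params
    H = np.tanh(X @ W1 + b1)
    err = H @ W2 + b2 - Y                  # prediction error on the trace
    gW2 = H.T @ err / len(X)               # backprop by hand for this
    gb2 = err.mean(axis=0)                 # tiny architecture
    dH = (err @ W2.T) * (1.0 - H**2)
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    params = [W1 - lr * gW1, b1 - lr * gb1, W2 - lr * gW2, b2 - lr * gb2]

# Compare behaviour on unseen inputs, and compare the weights themselves.
X_test = rng.normal(size=(256, 4))
behaviour_gap = np.abs(forward(params, X_test) - forward(true_params, X_test)).max()
weight_gap = np.abs(params[0] - true_params[0]).max()
print(f"max output difference on unseen inputs: {behaviour_gap:.4f}")
print(f"max difference in first-layer weights:  {weight_gap:.4f}")
```

In small cases like this the fit often succeeds while the recovered weights still differ from the originals, which cuts both ways: behavioural evidence can sometimes be matched without brute force, but it underdetermines the parameters that produced it. Whether anything like this scales to brain-sized systems is exactly the open question.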
Replies from: jacob_cannell↑ comment by jacob_cannell · 2012-05-24T15:37:38.757Z · LW(p) · GW(p)
"Right one" is a relative concept. I would like to recreate my grandfather, but my knowledge of him would probably only fill a page of text or so. Thus it is much much easier to recreate a being that passes my personal grandfather turing test than it would be to create a being that satisfies my internal model of my father (of whom I know much more).
The recreations only have to be accurate to the point of complete consistency with surviving evidence.
On the neural net issue, the exact weights certainly don't matter so much. There's massive redundancy at multiple levels. Everyone's V1 has its own specific, unique wiring, but functionally they all do almost exactly the same thing.
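As a minimal sketch of that kind of weight-level redundancy (my own toy example, nothing more): permuting the hidden units of a small network scrambles its weight matrices while leaving its input-output behaviour exactly unchanged, so no behavioural evidence could ever distinguish the two parameterizations.

```python
# Permutation symmetry as a toy example of weight redundancy: relabeling the
# hidden units changes the weights but not the function they compute.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 4, 16, 2

W1 = rng.normal(size=(n_in, n_hidden))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(n_hidden, n_out))
b2 = rng.normal(size=n_out)

def net(x, W1, b1, W2, b2):
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Shuffle the hidden units: permute the columns of W1/b1 and the rows of W2
# with the same permutation.
perm = rng.permutation(n_hidden)
W1p, b1p, W2p = W1[:, perm], b1[perm], W2[perm, :]

x = rng.normal(size=(100, n_in))
print(np.allclose(net(x, W1, b1, W2, b2), net(x, W1p, b1p, W2p, b2)))  # True
print(np.allclose(W1, W1p))  # almost surely False
```

And that is only the exact symmetry; in practice many further, non-equivalent weight settings would also fall within the resolution of any surviving evidence.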