How Many of Me Are There?
post by Eneasz · 2011-04-15T19:00:48.008Z · LW · GW · Legacy · 43 comments
This post is not about many worlds. It is somewhat disjointed, but builds to a single point.
If an AI were asked today how many human individuals populate this planet, it may not return a number in the several-billions range. In fact I’d be willing to bet it’d return a number in the tens of thousands, with the caveat that the individuals vary wildly in measure.
I agree with Robin Hanson that if two instances of me exist, and one is terminated, I didn’t die, I simply got smaller.
In 1995 Robert Sapolsky wrote in Ego Boundaries “My students usually come with ego boundaries like exoskeletons. […] They want their rituals newly minted and shared horizontally within their age group, not vertically over time,” whereas in older societies “needs transcend individual rights to a bounded ego, and people in traditional communities are named and raised as successive incarnations. In such societies, Abraham always lives 900 years--he simply finds a new body to inhabit now and then. ”
Ego boundaries may be more rigid now, but that doesn’t make people more unique. If anything, people have become more like each other. Memes are powerful shapers of mental agents, and as technology allows memes to breed and compete more freely the most viral ones spread through the species.
Acausal trade allows for amazing efficiencies, not merely on a personal level but also via nationalism and religion. People executing strong acausal trading routines will out-compete those who don’t.
Timeless Decision Theory prescribes making decisions as if you were choosing the outcome for all actors sufficiently like yourself across all worlds. As competition narrows the field of memeplexes to a handful of powerful and virulent ubermemes, and those memeplexes influence the structure and strength of individuals’ mental agents in similar ways, people become more like each other. In so doing they choose *as if* they were a single entity, more and more effectively. To an outside observer, there may be very little to differentiate two such humans from each other.
Therefore it may be wrong to think of oneself as a singular person. I am not just me – I am also effectively everyone who is sufficiently like me. It’s been argued that there are only seven stories, and every story can be thought of as an elaboration of one of these. It seems likely there are only a few thousand differentiable people, and everyone is simply one of these with some flair.
If we think of people in these terms, certain behaviors make more sense. Home-schooling is looked down on because institutional schools are about making other people into us. Suicide is considered more sinful than killing outsiders because suicide *always* reduces the size of the Meta-Person that the suicidee belonged to. Argument and rhetoric aren’t just a complete waste of your free time; they’re also an attempt to make Meta-Me larger and Meta-SomeoneElse smaller. Art finally makes sense.
Added Bonus: You no longer have to have many children to exist. You can instead work on enlarging your Meta-Self’s measure.
43 comments
comment by Manfred · 2011-04-16T04:07:42.173Z · LW(p) · GW(p)
Even if there were only 50 different memes you could have or not have, that's 2^50 possible combinations, which is over 100,000 times the population of the earth. And in fact there are bajillions of memes out there, and absolutely no reason to think we've reduced the variation between people to under 50 memes - in fact this applies more strongly to modern people than medieval serfs, because people's experiences are so much more varied now. Thus everyone is almost certainly a separate person, even if we just look at their memes and not the strengths of their responses, or their memories, or their favorite foods, or their et cetera et cetera.
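A quick sanity check of that arithmetic, sketched in Python; the roughly 7 billion world population figure is an assumption (the approximate figure at the time):

```python
# Rough check of the combinatorics above; the ~7 billion population figure is an assumption.
combinations = 2 ** 50            # each of 50 memes either held or not held
world_population = 7_000_000_000  # approximate world population circa 2011

print(f"{combinations:.3e} possible meme combinations")           # ~1.126e+15
print(f"{combinations / world_population:,.0f}x the population")  # ~160,843x
```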
Replies from: XiXiDu
↑ comment by XiXiDu · 2011-04-16T09:35:04.753Z · LW(p) · GW(p)
Even if there were only 50 different memes you could have or not have, that's 2^50 possible combinations, which is 100000 of times the population of the earth.
This is true but that doesn't have much to do with the argument in the original post. Some people think that humans are just the hollow-ware for memes and that the complexity of values and goals is a fact about our culture and environment more than a fact about our neurological settings.
Our evolutionary mind-template might only feature a narrow set of values and goals. Small fluctuations in this template seeded human societies with lumps that eventually grew into complex cultural and academic preferences.
If that is true then humans appear to be unique only due to the rich cultural environment they are embedded in. The apparent complexity of value exhibited by human beings is a result of their upbringing, education, bodily environment and experiences.
If an AI was going to look at how many unique evolutionary templates there are it might come up with a much smaller number than there are people.
If the complexity of value is induced and not an intrinsic property of our genetic make-up then any AI that would attempt to extrapolate our volition by making us more knowledgeable would mainly be extrapolating the inherent consistency of that knowledge as such artificially induced ideas might override any evolutionary values.
I have written more about that here.
comment by Mitchell_Porter · 2011-04-16T07:08:42.942Z · LW(p) · GW(p)
I would never have guessed that one of the side effects of the computational paradigm of mind would be a new form of proxy immortalism - by this I mean philosophies according to which you live on in your children, your race, your species, in the future consequences of your acts, or in your duplicates elsewhere in the multiverse. I suppose it's just another step beyond the idea that your copy is you, but it's still ironic to see this argument being made, given that it emerges from the same techno-conceptual zeitgeist which elsewhere is employed to urge a person not to be satisfied with living on in their children, their race, et cetera, but rather to seek personal survival beyond the traditional limits, because now it is finally conceivable that something more is possible.
Needless to say, I disagree profoundly with the idea that I "am" also anyone with whom I have some vague similarity. There's also a special appendix to my disagreement, in which I disagree specifically with the idea that Timeless Decision Theory has anything new to add here, except an extra layer of confusion. You don't need Timeless Decision Theory in order to identify with a group; identification with a group is not secretly a case of a person employing Timeless Decision Theory; and "acausal trade", as a phenomenon, is as real as "praying to God". Someone might engage in a thought process for which that is an accurate designation; and somewhere else in some other reality there might even be an entity which views itself as being on the other end of that relationship; but even if there is, it's a folie a deux, a combined madness. If someone in one insane asylum believes that he is the Emperor Napoleon, separated from his beloved Josephine, and if someone in another insane asylum believes that she is Josephine, separated from Napoleon, that doesn't validate the beliefs of either person, it doesn't make those beliefs rational, even if, considered together, the beliefs of those two people are mutually consistent. Acausal trade is irrational because it is acausal. It requires that you imagine, as your counterpart on the other side of the trade, only one, highly specific possible agent that happens to be thinking of you, and happens to respond to your choices in the way that you imagine, rather than any of the zillion other possible agents that would think of you but react according to completely different criteria.
If you do try to take into account the full range of agents that might be imagining you or simulating you, then you're no longer trading, you're just acting under uncertainty. There is a logic to the idea of acting as if you are - or, better, as if you might be - any of the possible entities which are your subjective duplicates (whose experience is indistinguishable from yours), but again, that's just acting under uncertainty: you don't know which possible world you're in, so you construct a probability distribution across the possibilities, and choose accordingly.
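A minimal sketch of that last point in Python, choosing under uncertainty about which possible world you are in; the worlds, probabilities, and utilities below are invented purely for illustration:

```python
# Choosing under uncertainty: weight each action's utility by the probability of
# each candidate world, then pick the action with the highest expected utility.
# The worlds, probabilities, and utilities below are illustrative assumptions.

def expected_utility(action, worlds):
    """Probability-weighted utility of an action across candidate worlds."""
    return sum(prob * utilities[action] for prob, utilities in worlds)

# Each entry: (probability you are in this world, utility of each action there)
worlds = [
    (0.7, {"act": 1.0, "abstain": 0.0}),
    (0.3, {"act": -10.0, "abstain": 0.0}),
]

best = max(["act", "abstain"], key=lambda a: expected_utility(a, worlds))
print(best)  # "abstain": acting has expected utility 0.7*1.0 + 0.3*(-10.0) = -2.3
```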
As for identifying with entities which are not your subjective duplicates but which share some cognition with you, where do you draw the line? My personal model of the hydrogen atom has something in common with Richard Feynman's; does that mean I should regard my decisions as Feynman's decisions?! I won't absolutely decry the practice of feeling kinship with others, even in remote times and places, far from it; but the moment we start to make this more than a metaphorical identification, a feeling of similarity, and start to speak as if I am you, and you are me, and all of us are one, we are sliding away from awareness of reality, and the fact, welcome or unwelcome as it may be, that I am not you, that we are distinct beings.
I would even say that, existentially, it is important for a person to realize that they are a separate being. You, and only you, will experience your life from inside. You have an interest in how that life unfolds that is not shared by anyone else, because they don't and can't live it for you. It is possible to contemplate one's existential isolation, in all its various dimensions, and then to team up with others, even to decide that something other than yourself is more important than you are; but if you do it that way, at least you'll be doing so in awareness of the actual relationship between you and the greater-than-you with which you identify; and that should make you more, not less, effective in supporting it, unless its use for you really is based on blinding you to your true self.
Replies from: Eneasz
↑ comment by Eneasz · 2011-04-16T18:47:26.067Z · LW(p) · GW(p)
I would never have guessed that one of the side effects of the computational paradigm of mind would be a new form of proxy immortalism
Odd, this was one of the first things that occurred to me when I learned of it.
it's still ironic to see this argument being made, given that it emerges from the same techno-conceptual zeitgeist which elsewhere is employed to urge a person not to be satisfied with living on in their children, their race, et cetera, but rather to seek personal survival beyond the traditional limits
I am actually already signed up with CI, so I'm not solely satisfied with my greater-self continuing on. But I also realize the two are related - if/when I am revived, the measure of meta-me is increased. Also, once reanimation becomes possible, I would work toward getting everyone revived regardless of who they are or how much it costs. As such, increasing the number of people sufficiently like me (ie: the measure of meta-me) increases my chances of being revived, and also is healthy for meta-me.
I'm curious if a meta-being having awareness of its own existence is a competitive advantage. I'd wager it's not, but it'll be interesting to see.
It may seem like a confused form of thinking, but I have come to accept that I have a very mystically-oriented thought process. I prefer to think of Azathoth and Alethea as beings, even though I know they are not. Struggling against evolution is hopeless, but holding back Azathoth for the good of man is noble. I find that I am both happier in life and more able to do useful things in the real world when I think in terms of metaphor. So while meta-beings may be no more than a useful mental construct, the same can be said of many things such as "life" and "particles". If it makes winning more achievable it's worth embracing, or at least exploring.
Replies from: randallsquared
↑ comment by randallsquared · 2011-04-19T12:58:26.178Z · LW(p) · GW(p)
I would never have guessed that one of the side effects of the computational paradigm of mind would be a new form of proxy immortalism
Odd, this was one of the first things that occurred to me when I learned of it.
I actually had the opposite reaction, wondering if the me of next year was close enough to be the same person. I have a tendency toward a high future discount in any case, and this didn't help. :)
comment by [deleted] · 2011-04-16T11:51:57.584Z · LW(p) · GW(p)
I don't wanna hijack this, but this post presents a fairly logical conclusion of a basic position I see on LW all the time.
Many people seem to identify themselves with their beliefs, memes and/or preferences. I find that very strange, but maybe my values are unusual in this regard.
For example, I don't want to see my preferences be fulfilled for their own sake, but so that I can experience them being fulfilled. If someone made a clone of me who wanted exactly the same things and succeeded at getting them, I wouldn't feel better about this. The very idea that my preferences have any meaning after my death seems absurd to me. Trying to affect the world in a way I won't personally experience is just as silly.
TDT might be a neat trick, but fundamentally I don't see how it could possibly apply to me. There's exactly one entity like me, i.e. one that has this phenomenal experience. In other words, I care little about copying the data and a lot about this one running instance.
Is this really a (fairly extreme) value dissonance or am I misunderstanding something?
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2011-04-16T13:47:23.242Z · LW(p) · GW(p)
If your values are, in fact, as you've described them, then it's a fairly extreme value dissonance.
Just to make this maximally concrete: if you were given a magic button that, if pressed, caused the world to end five minutes after your death, would you press the button?
If you're serious about thinking that effects on the world you won't personally experience are irrelevant to you, then presumably your answer is to shrug indifferently... there's no reason to prefer pressing the button to not pressing it, or vice-versa, since neither activity has any effects that matter.
OTOH, if I offered you twenty bucks to press the button, presumably you would. Why not? After all, nothing important happens as a consequence, and you get twenty bucks.
Most people would claim they wouldn't press the button. Of course, that might be a pure signaling effect.
Replies from: None, Raemon, AlephNeil, randallsquared
↑ comment by [deleted] · 2011-04-16T16:14:23.216Z · LW(p) · GW(p)
I have thought about similar scenarios and with the caveat that I'm speaking from introspection and intuitions and not the real thought pattern I would go through given the real choice, yes, I would be mostly indifferent about the button in the first example and would press it in the second.
Replies from: TheOtherDave, AlephNeil, Eneasz
↑ comment by TheOtherDave · 2011-04-16T16:32:06.830Z · LW(p) · GW(p)
Fair enough. It follows that, while for all I know you may be a remarkably pleasant fellow and a great friend, I really hope you don't ever get any significant amount of power to affect the future state of the world.
Replies from: None
↑ comment by [deleted] · 2011-04-16T19:55:10.167Z · LW(p) · GW(p)
In his defense he says:
not the real thought pattern I would go through given the real choice
But since he seems aware of this, he ought to align his "introspection and intuitions" with this.
Replies from: None
↑ comment by [deleted] · 2011-04-16T22:42:10.071Z · LW(p) · GW(p)
Well, how exactly am I supposed to do this? I can't convincingly pretend to blow up the world, so there's always this caveat. Like in the case of the trolley problem, I suspect that I would simply freeze or be unable to make any reasonable decision simply due to the extreme stress and signalling problem, regardless of what my current introspection says.
↑ comment by Eneasz · 2011-04-16T18:02:56.853Z · LW(p) · GW(p)
Voted up for honesty, not agreement.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-04-20T20:55:16.941Z · LW(p) · GW(p)
Which reminds me of this:
"Would it be a kind of victory if people who now say that they care about truth, but who really don't, started admitting that they really don't?"
↑ comment by Raemon · 2011-04-16T22:30:35.692Z · LW(p) · GW(p)
I wouldn't press the button so that I could experience being the sort of person who wouldn't press the button.
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2011-04-18T14:31:35.377Z · LW(p) · GW(p)
Let's say that regardless of your choice you will be selectively memory-wiped so that you will have no knowledge of having been offered the choice or of making one.
Would you press the button then? You will be twenty dollars richer if you press it, and the world will get destroyed 5 minutes after your death.
Replies from: Raemon
↑ comment by Raemon · 2011-04-18T15:00:57.597Z · LW(p) · GW(p)
If I still know that the world will be destroyed while I was pressing the button, then no. The fact that I lose memories isn't any more significant than the fact that I die. I still experience violating my values while pressing the button.
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2011-04-18T15:31:26.266Z · LW(p) · GW(p)
Cool, then one last scenario:
If you press the button, you'll be memory-modified into thinking you chose not to press it.
If you don't press the button, you'll be memory-modified into thinking you pressed it.
Do you press the button now? With this scenario you'll have a longer experience of remembering yourself violating your values if you don't violate them. If you want to not remember violating your values, you'll need to violate them.
Replies from: Raemon
↑ comment by Raemon · 2011-04-18T16:57:54.772Z · LW(p) · GW(p)
I confess that I'm still on the fence about the underlying philosophical question here.
The answer is that I still don't press the button, because I just won't. I'm not sure if that's a decision that's consistent with my other values or not.
Essentially the process is: As I make the decision, I have the knowledge that pressing the button will destroy the world, which makes me sad. I also have the knowledge that I'll spend the rest of my life thinking I'll press the button, which also makes me sad. But knowing (in the immediate future) that I destroyed the world makes me more sad than knowing that I ruined my life, so I still don't press it.
The underlying issue is "do I count as the same person after I've been memory modified?" I don't think I do. So my utility evaluation is "I'm killing myself right now, then creating a world with a new happy person but a world that will be destroyed." I don't get to reap the benefits of any of it, so it's just a question of greater overall utility.
But I realize that I actually modify my own memory in small ways all the time, and I'm not sure how I feel about that. I guess I prefer to live in a world where people don't mindhack themselves to think they do things that harm me without feeling guilty. To help create that world, I try not to mindhack myself to not feel guilty about harming other people.
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2011-04-19T09:00:36.825Z · LW(p) · GW(p)
I think you're striving too much to justify your position on the basis of sheer self-interest (that you want to experience being such a person, that you want to live in such a world) -- that you're missing the more obvious solution that your utility function isn't completely selfish, that you care about the rest of the real world, not just your own subjective experiences.
If you didn't care about other people for themselves, you wouldn't care about experiencing being the sort of person who cares about other people. If you didn't care about the future of humanity for itself, you wouldn't care about whether you're the sort of person who presses or doesn't press the button.
Replies from: Raemon, AlephNeil
↑ comment by Raemon · 2011-04-19T13:53:56.248Z · LW(p) · GW(p)
Oh I totally agree. But satisfying my utility function is still based on my own subjective experiences.
The original comment, which I agreed with, wasn't framing things in terms of "do I care more about myself or about saving the world." It was about "do I care about PERSONALLY having experiences or about other people who happen to be similar/identical to me having those experiences?"
If there are multiple copies of me, and one of them dies, I didn't get smaller. One of them died. If I get uploaded to a server and then continue on my life, periodically hearing about how another copy of me is having transhuman sex with every Hollywood celebrity at the same time, I didn't get to have that experience. And if a clone of me saves the world, I didn't get to actually save the world.
I would rather save the world than have a clone do it. (But that preference is not so strong that I'd rather have the world saved less than optimally if it meant I got to do it instead of a clone)
↑ comment by AlephNeil · 2011-04-18T14:56:03.537Z · LW(p) · GW(p)
Of course, that might be a pure signaling effect.
You mean their verbal endorsement of 'not pressing' is a pure signaling effect? Or that they have an actual policy of 'not pressing' but one which has been adopted for signaling reasons? (Or that the difference is moot anyway since the 'button' is very far from existing?)
Well I think most people care about (in ascending order of distance) relatives, friends, causes to which they have devoted themselves, the progress of science or whichever cultural traditions they identify with, objects and places of great beauty - whether natural or manmade. Putting all of this down to 'signaling', even if true, is about as relevant and informative as putting it all down to 'neurons'.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2011-04-18T15:22:12.806Z · LW(p) · GW(p)
I meant that their claim (verbal endorsement) might be a pure signaling effect and not actually a reliable predictor of whether they would press the button.
I also agree that to say that something is a signaling effect is not saying very much.
↑ comment by randallsquared · 2011-04-16T16:48:15.612Z · LW(p) · GW(p)
Most people would claim they wouldn't press the button. Of course, that might be a pure signaling effect.
Most people believe in an afterlife, which alters that scenario a bit. Of the remainder, though, I think it's clear that it's very important to signal that you wouldn't push that button, since it's an obvious stand-in for all the other potential situations where you could benefit yourself at the expense of others.
comment by benjayk · 2011-04-16T14:25:37.515Z · LW(p) · GW(p)
I think the most practical / accurate way of conceiving of individuality is through the connection of your perceptions through memory. You are the same person as 3 years ago, because you remember being that person (not only rationally, but on a deeper level of knowing and feeling that you were that person). Of course different persons will not share the memory of being the same person. So if we conceive of individuality in the way we actually experience individuality (which I think is most reasonable), there is not much sense in saying that many persons living right now are the same person, no matter how much they share certain memes. Even for an outside observer this is true, since people express enough of their memory to the outside world to understand that their memories form distinct life stories. It may be true to say that many persons share a cultural identity or share a meme space, but this does not make them the same person, since they do not share their personal identity. So unless your AI is dumb and does not understand what individuality consists of, it won't say that there are only thousands of people.
It might be true though that at some point in the future some people that have different memories right now will merge into one entity and thus share the same memory (if the singularity happens I think it is not that unlikely). Then we could say that different persons living right now might not be different persons ultimately, but they still are different persons right now.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2011-04-16T15:21:18.537Z · LW(p) · GW(p)
You are the same person as 3 years ago, because you remember being that person (not only rationally, but on a deeper level of knowing and feeling that you were that person)
Two things:
1) Can you clarify what you mean by "rationally remembering" here?
2) If you're actually talking about "knowing and feeling that I am that person," then you aren't talking about memory at all. There are many, many events that I do not remember, but which I "know and feel" I was involved in -- my birth, for example.
If you define my "personal identity" as including those things, that's OK with me... I do, as well... but it's not clear to me that there's a sharp line between saying that, and saying that my "personal identity" can include events at which my body was not present but which I "know and feel" I was involved in.
(Just to be clear: I'm not suggesting anything mystical here. I'm talking about the psychological constituents of identity.)
I'm not sure there's anything to say about that other than, if I do identify with those things, then those things are part of my identity.
To put that more affirmatively: if personal identity is a matter of what I "know and feel," then it is a psychological construct very much like cultural identity and family identity, and those constructs flow into one another with no sharp dividing lines; therefore a discussion of where "personal identity" ends and "cultural identity" begins is entirely a discussion of how we choose to define those terms, not actually a discussion about their referents.
comment by Armok_GoB · 2011-04-15T21:03:59.105Z · LW(p) · GW(p)
Taboo "humans" and "individual". This post seems to be about how many decision-theoretically unique agents there are. You can also ask how many physical brains there are, how many "moral agents" there are, how much weight some group or brain should have in CEV, if disrupting a specific configuration of particles would count as "murder" by X utility function, etc.
comment by fubarobfusco · 2012-07-18T22:14:38.004Z · LW(p) · GW(p)
Rather than seeing the population as being made up of more-or-less copies of individuals, a different approach would be to see each individual, including yourself, as a sum of various patterns, some of which are longer-lived than others; and many of which change over time.
By "yourself" here I mean "the you that is thinking this thought right now", not "the four-dimensional extent of your lifetime" or some such.
For instance, one of the patterns that is an ingredient in "me" is the human genome, and specifically my genome, including any hereditary quirks I've inherited from my specific ancestors. This persists through lifetimes and reproduction; and this explains certain facts about me — such as that I have memories of bodily senses such as hunger and sexual arousal, which contribute to my genome's survival; but also such as that I have ten fingers and ten toes, as is typical for a human.
Another of the patterns that is an ingredient in "me" is the English language. This persists through different mechanisms than my genome: the imitative tendencies of human children; the teaching tendencies of human adults; and the economics of the English-speaking nations being favorable to immigrants (such as my German-speaking ancestors) learning English. The fact that I am having these thoughts in English is not a consequence of my genome; it is a consequence of my body being symbiotic with the English-language memeplex.
Yet another set of the patterns in "me" is the set of habits I have learned in my lifetime. I consider that skills are a form of habit. Many of these may persist only so long as I live; except insofar as I happen to teach them to others. Some of the things that I habitually do, such as writing computer code to solve problems, I would also find rewarding to teach to others; and so I am likely to intentionally contribute to those patterns' persistence — for instance by teaching people how to code. Other habits, such as certain fears and anxieties, I would rather prefer not to convey to others.
There are economic patterns, which are closely modeled by certain mathematics; I have a job, buy things, donate money in a way which participates in and reinforces a "capitalism" pattern. There are moral patterns, many of which are modeled by game theory. Some of these are carried on the genome (the capacity for empathy); some of them are carried on cultural replicators (the Golden Rule). I participate in and reinforce economic and moral patterns; and this in turn makes them more prevalent and persistent in the world. (A species of sociopaths could still study the mathematics of game theory, but would not implement Hofstadter's superrationality as much as humans do.)
There are patterns in common between humans and many other evolved organisms, such as sexual dimorphism; there are social patterns that depend on those, such as sex and gender roles. Certain facts about "me" are created by these patterns.
Saying that there is another copy of "me" somewhere in the world, in this vocabulary, means saying that there are two individuals at the same point in the configuration-space of the above sorts of patterns. This seems unlikely, since there are really quite a lot of patterns that I participate in. However, we may consider how close two points in configuration-space may be to consider them "almost the same" ...
comment by ahartell · 2011-04-15T19:59:41.974Z · LW(p) · GW(p)
Wait, since when was suicide considered more sinful than murder? Do you mean killing an in-group member vs. killing an out-group member?
Interesting idea though; I've never thought about myself as anything other than an individual despite knowing that I'm not ultimately unique in the grand scheme of things.
"I agree with Robin Hanson that if two instances of me exist, and one is terminated, I didn’t die, I simply got smaller." This reminds my a lot of Eliezer's crossover fanfic. You should check it out.
Replies from: Eneasz
↑ comment by Eneasz · 2011-04-15T21:19:23.473Z · LW(p) · GW(p)
Wait, since when was suicide considered more sinful than murder?
In my experience anyway. In standard catholicism and many strains of christianity you can commit murder and still get into heaven. Forgiveness, redemption, all that. Suicide is a one-way ticket to hell with no exceptions.
This reminds me a lot of Eliezer's crossover fanfic. You should check it out.
I'm a huge fan of Eliezer's fic, I wish he could write more often. :( But IMHO the best example of a single identity repeated in many people is in Hal Duncan's Vellum
Replies from: MinibearRex
↑ comment by MinibearRex · 2011-04-15T22:36:00.061Z · LW(p) · GW(p)
In my experience anyway. In standard catholicism and many strains of christianity you can commit murder and still get into heaven. Forgiveness, redemption, all that. Suicide is a one-way ticket to hell with no exceptions.
I think this is the conclusion reached by theologians, not people's intuitions. If I am a Catholic, and I murder someone, I can go to the Priest, confess, do penance, and be forgiven. If the person I murder is myself, I obviously can't do that, so I die with my sins unconfessed and therefore go to hell. This is not how people normally think. If somebody commits suicide, we feel pity for them, because they must have been miserable. Murderers, however, are hunted down and imprisoned or executed.
Replies from: None
↑ comment by [deleted] · 2011-04-15T23:30:01.363Z · LW(p) · GW(p)
It's also not true. Catholicism holds that only those who commit suicide while the balance of their mind is not disturbed are incapable of going to heaven - and in practice they assume anyone who commits suicide has a disturbed mind. In effect they're saying "If you kill yourself deliberately just so as to annoy God, then you're a sinner. Otherwise you're not." I don't know how recently that became their standard belief, but it's been so for at least the last few decades...
Replies from: orthonormal, MinibearRex
↑ comment by orthonormal · 2011-04-19T01:19:42.658Z · LW(p) · GW(p)
Yes and no. The Catholic Church has indeed backed off from the days when they refused to have funeral services for suicides on the presumption that they are hellbound; now they generally give the benefit of the doubt to people who commit suicide from depression, on the grounds that they may not have been sane enough to be morally responsible for their act.
However, the dogma still considers it a mortal sin if chosen deliberately by a sane person, and in particular suicide among the terminally ill is a matter of moral concern for them (or rather, keeping euthanasia illegal is a matter of political concern for them). They're supposed to freak out at the suggestion, like a priest did in a certain recent movie where euthanasia becomes a part of the plot.
↑ comment by MinibearRex · 2011-04-16T08:16:44.644Z · LW(p) · GW(p)
I hadn't heard that before. My understanding is based on a conversation with a devout Catholic, but not a church official. The notion of a disturbed mind is something I hadn't heard before. Thanks for the info.
comment by MikkW (mikkel-wilson) · 2021-10-27T08:00:47.668Z · LW(p) · GW(p)
Suicide is considered more sinful than killing outsiders because suicide always reduces the size of the Meta-Person that the suicidee belonged to.
Huh.
comment by Thomas · 2011-04-24T16:56:18.094Z · LW(p) · GW(p)
For nearly the last 30 years, I have seen everybody else as a copy of myself. Another instance. Coincarnation, if you will.
Replies from: Alicorn
↑ comment by Alicorn · 2011-04-24T17:55:40.357Z · LW(p) · GW(p)
Replies from: Thomas, Desrtopa
↑ comment by Desrtopa · 2011-06-10T23:32:53.220Z · LW(p) · GW(p)
I was wondering where I recognized the name of the author. He did a comic where the main characters die all the time and come back for no apparent reason.
Replies from: Alicorn
comment by banana · 2011-04-18T12:26:32.681Z · LW(p) · GW(p)
I agree with Robin Hanson that if two instances of me exist, and one is terminated, I didn’t die, I simply got smaller.
I am not too sure about this idea. To the best of my knowledge I am a biological being existing in a universe obeying some sort of physics related to quantum mechanics and general relativity. If there are multiple instances of me then it is probably due to reality being some kind of multiverse. Let's keep things simple and assume that it is a Tegmark Level I Multiverse. This implies that there are an infinite number of copies of me.
Next, what are these copies likely to be like? One argument is that they could be Boltzmann brains. This requires there to be enough of a statistical fluctuation in the randomness of the universe to arrange all of the atoms into a brain that remembers reading this article and experiences being in the process of typing a reply. Highly unlikely. Much more likely is that a set of atoms or stuff in general in the far distant past were in such a configuration that it evolved into a configuration including me. I expect this to be more likely because the level of organisation required at that time was much less than the level required now. Now if all of these parts of the universe that contain me evolved from parts of the universe that had similar initial conditions then all of them require, for example, my mother and father to be in them, both to get my DNA right, and also to create the memories that I have of them. Similarly all the other people that I have met need to be in all of those locations so that I would have memories of them. Thus all of those infinite copies of me would live in a world just like my own.
Now, given all this, for me to be terminated means to die. Whatever it is that kills this instance of me probably exists in the environment of each of those infinite copies of me. Thus I would expect the fate of all of those instances of me to be highly correlated. If this correlation is high enough (say unity) then being terminated means that my measure is reduced all the way to zero, which is a bit more than just getting smaller.