Are tulpas moral patients?
post by ChristianKl · 2022-12-27T11:30:29.923Z · LW · GW · 8 comments
This is a question post.
We talk a lot about whether or not animals are moral patients and to what extent they are conscious, but I have seen little discussion about whether tulpas should be considered conscious and counted as moral patients.
Is there any serious philosophy done on the topic?
Answers
Multiple identities in one brain/body can arguably be considered separate moral patients, whether they are naturally occurring through a brain quirk, a childhood trauma, iatrogenically induced by a hapless therapist or a malevolent cult leader, or intentionally created by the "original".
Tulpas are not special that way.
There is a spectrum of identity consciousness and self-awareness, ranging from a vague fragment to a fully separate and conscious mind. Presumably one should give more moral weight to the identities that are more developed, but the issue is rather complicated.
My belief is that yes, tulpas are people of their own (and therefore moral patients). My reasoning is as follows.
If I am a person and have a tulpa and they are not a person of their own, then either (a) there must exist some statement which is a requirement for personhood and which is true about me but not about the tulpa, or (b) the tulpa and I must be the same person.
In the case of (a), tulpas have analogues to emotions, desires, beliefs, personality, sense of identity, and they behave intelligently. They seem to have everything that I care about in a person. Your mileage may vary, but I've thought about this subject a lot and have not been able to find anything that tulpas are missing which seems like it might be an actual requirement for personhood. Note that a useful thought experiment when investigating possible requirements for personhood that tulpas don't meet is to imagine a non-tulpa with an analogous disability, and see if you would still consider the non-tulpa with that disability to be a person.
Now, if we grant that the tulpa is a person, we must still show that (b) is wrong, and that they are not the same person as their headmate. My argument here is also very simple. I simply observe that tulpas have different emotions, desires, beliefs, personality, and sense of identity than their headmate. Since these are basically all the things I actually care about in a person, it doesn't make sense to say that someone who differs in all those ways is the same. In addition, I don't think that sharing a brain is a good reason to say that they are the same person, for a similar reason to why I wouldn't consider myself to be the same person as an AI that was simulating me inside its own processors.
Obviously, as with all arguments about consciousness and morality, these arguments are not airtight, but I think they show that the personhood of tulpas should not be easily dismissed.
Edit: I've provided my personal definition of the word "tulpa" in my second reply to Slider below. I do not have a precise definition of the word "person", but I challenge readers to try to identify what difference between tulpas and non-tulpas they think would disqualify a tulpa from being a person.
↑ comment by Slider · 2022-12-27T21:05:34.618Z · LW(p) · GW(p)
I don't know the terminology that well, but it seems that this analysis is bundling together a lot of stuff that might come apart in this context.
People who do not have (additional) tulpas have one information-processing system that houses one personality. Call the discrete information-processing system a "collective", and call the thing that has psychological traits, states, and beliefs a "personality". The usual configuration, a collective of one personality, is apparently called a singlet.
One could argue that humans get their social standing based on their collective rather than their personality. If there is a cookie jar with a sign saying "one cookie per person", under this theory a collective would be designated only one cookie and would get the calories only once (though if sweetness experiences are what is meant, two might be appropriate, especially if the personalities can't participate in the same cookie-munching). For some things it could make sense that humans get their standing from having a unique psychological viewpoint. If a group of people needs to vote on what to do, then under this take each personality gets a vote, a two-personality collective gets to use two votes, and this is arguably fair towards the singlets (or, if voting standing is based on the additional cohesion imposed by acting as a group, the collective gets a single vote, since the cohesion between the personalities is pre-established and counting it as a factor would be double counting).
Then there is the possibility of a collective of 0 personalities. It seems that, at the least, such a collective can't take overtly egoic action.
Replies from: Nox ML
↑ comment by Nox ML · 2022-12-27T21:43:36.612Z · LW(p) · GW(p)
I don't think I'm bundling anything, but I can see how it would seem that way. My post is only about whether tulpas are people / moral patients.
I think that the question of personhood is independent of the question of how to aggregate utility or how to organize society, so I think that arguments about the latter have no bearing on the former.
I don't have an answer for how to properly aggregate utility, or how to properly count votes in an ideal world. However, I would agree that in the current world, votes and other legal things should be done based on physical bodies, because there is no way to check for tulpas at this time.
Replies from: Slider
↑ comment by Slider · 2022-12-27T21:58:47.023Z · LW(p) · GW(p)
I had zero idea what a tulpa was before reading this, and did an independent, unguided light search to get even some idea. I do not think this was unexpected. A definition would have been really nice, or a situation rather than raw concepts. I seriously entertained the possibility that this is a sci-fi fiction question, such as how ethics apply to Lain of Serial Experiments Lain. I was wondering whether Vax'ildan [LW(p) · GW(p)] is a tulpa (that is at least factual). There is also a meme that "you are your masks"; does that deal with tulpas?
Replies from: ChristianKl, Nox ML
↑ comment by ChristianKl · 2022-12-27T23:25:39.078Z · LW(p) · GW(p)
If I wanted to talk about trees, I could give you a definition of a tree or a situation that involves trees, but neither of those would really make you understand on a deep level what trees are about.
Fictional examples are different in the sense that you can gather all the knowledge about the fictional entity by reading the fictional work. With fictional examples, you don't have to worry about the difference between the ground reality and the description of it.
↑ comment by Nox ML · 2022-12-27T23:00:15.307Z · LW(p) · GW(p)
That's fair. I've been trying to keep my statements brief and to the point, and did not consider the audience of people who don't know what tulpas are. Thank you for telling me this.
The word "tulpa" is not precisely defined and there is not necessarily complete agreement about it. However, I have a relatively simple definition which is more precise and more liberal than most definitions (that is, my definition includes everything usually called a tulpa and more, and is not too mysterious), so I'll just use my definition.
It's easiest to first explain my own experience with creating tulpas, then relate my definition to that. Basically, to create tulpas, I think about a personality, beliefs, desires, knowledge, emotions, identity, and a situation. I refer to keeping these things in my mind as forming a "mental model" of a person. Then I let my subconscious figure out what someone like this mental model would do in this situation. Then I update the mental model according to the answer, and repeat the process with the new mental model, in a loop.
In this way I can have conversations with the tulpa, and put them in almost any situation I can imagine.
So I would define a tulpa this way: A tulpa is the combination of information in the brain encoding a mental model of a person, plus the human intelligence computing how the mental model evolves in a human-like way.
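To make the shape of that loop concrete, here is a minimal sketch in Python. This is purely illustrative: every name in it is made up, and the interesting step (letting the subconscious answer) is an unimplementable placeholder rather than a claim about how minds actually work.

```python
from dataclasses import dataclass, field

@dataclass
class MentalModel:
    """Hypothetical stand-in for the information held in mind:
    personality, beliefs, desires, knowledge, emotions, identity."""
    traits: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

def imagined_response(model: MentalModel, situation: str) -> str:
    """Placeholder for 'let the subconscious figure out what someone like
    this mental model would do' -- the step no program can supply."""
    return f"(whatever someone with traits {model.traits} would do in: {situation})"

def converse(model: MentalModel, situations: list) -> MentalModel:
    # The loop from the description: compute a response, fold it back
    # into the mental model, and repeat with the updated model.
    for situation in situations:
        response = imagined_response(model, situation)
        model.history.append((situation, response))  # update the model
    return model

model = converse(MentalModel(traits={"curious": True}), ["a greeting"])
print(model.history)
```

The point of the sketch is only that the definition refers to a process (model, response, update, repeat) rather than to any particular substrate.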
My definition is more liberal than most definitions, because most people who agree that tulpas are people seem to make a strong distinction between characters and tulpas, but I don't make a strong distinction and this definition also includes many characters.
And to not really answer your direct questions: I don't know Serial Experiments Lain, and you're the person who's in the best position to figure out if Vax'ildan is a tulpa by my definition. As for "you are your masks", I'm not sure. I know that some people report naturally having multiple personalities and might like the mask metaphor, but I don't personally experience that so I don't have much to say about it, except that it doesn't really fit my experiences.
(I do not create new tulpas anymore for ethical reasons.)
Replies from: Slider
↑ comment by Slider · 2022-12-27T23:43:38.112Z · LW(p) · GW(p)
Referring to the process is excellent, and even better than leaning on a definition.
With that take, in the fictional world Lain is a tulpa. Vax'ildan running on Slider (rather, the human behind the pseudonym) is not, but running on O'Brien he probably is. I feel like the dividing line for "you are your masks" is that those are created accidentally or as a byproduct, and are disqualified for lack of a decision to opt in. (The other candidate criterion would be that they are not individuated enough.)
It is not clear to me why creating tulpas would be immoral. If it is inherently so, you should head off to cancel Critical Role and G. R. R. Martin. Or is the involvement of a magic circle, where the arena of the tulpa is limited and well-defined, relevant, so that such a case is not a proper tulpa?
Some guesses which I don't think are good enough to convince me:
Ontological inertia option: 1) Terminating a tulpa is bad for the reasons that homicide is bad. 2) Having a tulpa around increases the need to terminate it. 3) Creating a tulpa means 2, which leads to 1.
Scapegoat option: If you ever talk with your tulpa about anything important, it affects what you do. You might not be able to identify which bits are because of the tulpa. You might wrongly blame your tulpa. Thus it can be an avenue to dodge responsibility for your life. (Percy influences how Jaffe plays his other characters; it is doing cognitive work.)
Designer human option: Manifesting a Mary Sue is playing god in a bad way. It is a way to have a big influence on your life that is drastic, hard to predict, and locked in ("Jesus take the wheel", where the driver is not a particularly good person or driver).
It is a bit murky what kind of delineation those who do make a division between characters and tulpas are after. Does everyone who thinks about being Superman vividly enough share the character but have a distinct tulpa of him? Or is it that characters are less defined and tulpas are more fleshed out and complete in their characterization?
Replies from: Nox ML, None
↑ comment by Nox ML · 2022-12-28T00:05:34.760Z · LW(p) · GW(p)
Terminating a tulpa is bad for the reasons that homicide is bad.
That is exactly my stance. I don't think creating tulpas is immoral, but I do think killing them, harming them, and lying to them is immoral for the same reasons it's immoral to do so to any other person. Creating a tulpa is a big responsibility and not one to take lightly.
you should head off to cancel Critical Role and G. R. R. Martin.
I have not consumed the works of the people you are talking about, but yes, depending on how exactly they model their characters in their minds, I think it's possible that they are creating, hurting, and then ending lives. There's nothing I can do about it, though.
It is a bit murky what kind of delineation those who do make a division between characters and tulpas are after.
I don't really know. I'm basing my assertion that I make less of a distinction between characters and tulpas than other people on the fact that I see a lot of people with tulpas who continue to write stories, even though I don't personally see how I could write a story with good characterization without creating tulpas.
Replies from: Slider
↑ comment by Slider · 2022-12-28T00:22:37.113Z · LW(p) · GW(p)
Hmm, the series and character Mr. Robot, and Architect.
One of the terminological differences in my quick look was that ceasing to have a tulpa was also referred to as "integration". That would seem to be a distinction of similar relevance to a firm going bankrupt versus merging.
I think there is some ground here where I should not agree to disagree. But currently I am thinking that singlet personalities have less relevance than I thought, and that harm/suffering is bad in a way that is not connected to having an experiencer experience it.
Replies from: Nox ML
↑ comment by Nox ML · 2022-12-28T14:50:40.840Z · LW(p) · GW(p)
I think integration and termination are two different things. It's possible for two headmates to merge and produce one person who is a combination of both. This is different from dying, and if both consent, then I suppose I can't complain. But it's also possible to just terminate one without changing the other, and that is death.
But currently I am thinking that singlet personalities have less relevance than I thought, and that harm/suffering is bad in a way that is not connected to having an experiencer experience it.
I don't understand what you mean by this. I do think that tulpas experience things.
Replies from: Slider
↑ comment by [deleted] · 2022-12-28T06:06:58.229Z · LW(p) · GW(p)
It is a bit murky what kind of delineation those who do make a division between characters and tulpas are after. Does everyone who thinks about being Superman vividly enough share the character but have a distinct tulpa of him? Or is it that characters are less defined and tulpas are more fleshed out and complete in their characterization?
I would say that it ceases to be a character and becomes a tulpa when it can spontaneously talk to me: when I can't will it away, when it resists me, when it's self-sustaining. Alters usually feel other in some sense, whereas a sim feels internal and dependent on you; if you ceased to exist, the sim would vanish, but the tulpa would survive.
So if you think about Superman enough that he starts commenting on your choice of dinner, or if he independently criticizes your choice of phrasing in an online forum, that's definitely plural territory. (Or if he briefly fronts to tell you not to say something at all, that's a big sign.)
But if you briefly imagine him having a convo with another superhero and then dismiss both from your mind and don't think about them for days on end, you're probably not in that territory.
Being fleshed out vs incomplete is another dimension, I usually think of this as strength or presence.
As for creating a tulpa... well... moral stuff aside you're adding a process to your mind that you might not be able to get rid of. It won't be your life anymore -- it'll be theirs too. You won't necessarily be able to control how they grow either, since tulpas often develop beyond their initial starting traits.
Replies from: Nox ML
↑ comment by Nox ML · 2022-12-28T14:52:32.202Z · LW(p) · GW(p)
I would say that it ceases to be a character and becomes a tulpa when it can spontaneously talk to me: when I can't will it away, when it resists me, when it's self-sustaining.
I disagree with this. Why should it matter if someone is dependent on someone else to live? If I'm in the hospital and will die if the doctors stop treating me, am I no longer a person because I am no longer self sustaining? If an AI runs a simulation of me, but has to manually trigger every step of the computation and can stop anytime, am I no longer a person?
Replies from: None
↑ comment by [deleted] · 2022-12-28T15:48:09.404Z · LW(p) · GW(p)
You're confusing heuristics designed to apply to human plurality with absolute rules. Neither of your edge cases is possible in human plurality (alters share a computational substrate, and I can't inject breakpoints into them). Heuristics always have weird edge cases; that doesn't mean they aren't useful, just that you have to be careful not to apply them to out-of-distribution data.
The self sustainability heuristic is useful because anything that's self sustainable has enough agency that if you abuse it, it'll go badly. Self sustainability is the point at which a fun experiment stops being harmless and you've got another person living in your head. Self sustainability is the point at which all bets are off and whatever you made is going to grow on its own terms.
And in addition, if it's self sustaining, it's probably also got a good chunk of wants, personality depth, etc.
I don't think there are any sharp dividing lines here.
Replies from: Nox ML
↑ comment by Nox ML · 2022-12-28T17:20:39.436Z · LW(p) · GW(p)
Your heuristic is only useful if it's actually true that being self-sustaining is strongly correlated with being a person. If this is not true, then you are excluding things that are actually people based on a bad heuristic. I think it's very important to get the right heuristics: I've been wrong about what qualified as a person before, and I have blood on my hands because of it.
I don't think it's true that being self-sustaining is strongly correlated with being a person, because being self-sustaining has nothing to do with personhood, and because in my own experience I've been able to create mental constructs which I believe were people and which I was able to start and stop at will.
Edit: You provided evidence that being self-sustaining implies personhood with high probability, and I agree with that. However, you did not provide evidence of the converse, nor for your assertion that it's not possible to "insert breakpoints" in human plurality. This second part is what I disagree with.
I think there are some forms of plurality where it's not possible to insert breakpoints, such as your alters, and some forms where it is possible, such as mine, and I think the latter is not too uncommon, because I did it unknowingly in the past.
Arguably there has been a lot of work done on this topic; it's just smeared out across different labels, and the trick is to notice when different labels are being used to point at the same things. Tulpas, characters, identities, stories, memes, narratives: they're all the same. Are they important for grounding yourself in your substrate and providing you with a map to navigate the world by? Yes. Do they have moral patiency? Well, now we're getting into dangerous territory, because "moral patiency" is itself a narrative construct. One could argue that in a sense the character is more "real" than the thinking meat is, or that the character matters more and is more important than the thinking meat, but of course the character would think that from the inside.
It's actually even worse than that, because "realness" is also a narrative construct, and where you place the pointer for it is going to have all sorts of implications for how you interpret the world and what you consider meaningful. Is it more important to preserve someone's physical body, or their memetic legacy? Would you live forever if it meant you changed utterly and became someone else to do it, or would you rather die but have your memetic core remain embedded in the world for eternity? What's more important, the soul or the stardust? Sure the stardust is what does all the feeling and experiencing, but the soul is the part that actually gets to talk. Reality doesn't have a rock to stand on in the noosphere, everything you'd use as a pointer towards it could also point towards another component of the narrative you're embedded within. At least natural selection only acts along one axis, here, you are torn apart.
Moral patiency itself is a part of the memetic landscape which you are navigating, along with every other meme you could be using to discover, decide, and determine the truth (which in this case is itself a bunch of memes). This means that the question you're asking is less along the lines of "which type of fuel will give me the best road performance" and more like "am I trying to build a car or a submarine?"
Sometimes it's worth considering tulpas as moral patients, especially because they can sometimes manifest out of repressed desires and unmet needs that someone has, meaning they might be a better pointer to that person's needs than what they were telling you before the tulpa showed up. However if you're going to do the utilitarian sand grain counter game? Tulpas are a huge leak, they basically let someone turn themselves into a utility monster simply by bifurcating their internal mental landscape, and it would be very unwise to not consider the moral weight of a given tulpa as equal to X/n where n is the number of members within their system. If you're a deontologist, you might be best served by splitting the difference and considering the tulpas as moral patients but the system as a whole as a moral agent, to prevent the laundering of responsibility between headmates.
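As a minimal arithmetic sketch of that weighting (assuming X stands for the moral weight assigned to one body, which the comment leaves implicit; all names here are illustrative only):

```python
def member_weight(body_weight: float, n_members: int) -> float:
    """The proposed normalization: each headmate carries X/n of the body's
    weight, so splitting into more headmates never inflates the total."""
    return body_weight / n_members

X = 1.0  # weight of one body (illustrative)
# A singlet and a five-member system carry the same total weight,
# closing the "utility monster via bifurcation" leak:
assert member_weight(X, 1) * 1 == member_weight(X, 5) * 5 == X
```

The design choice is simply that total weight is conserved per body, so internal bifurcation cannot mint extra moral weight.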
Overall, if you just want a short easy answer to the question asked in the title: No.
↑ comment by Nox ML · 2022-12-27T21:16:06.357Z · LW(p) · GW(p)
Tulpas are a huge leak, they basically let someone turn themselves into a utility monster simply by bifurcating their internal mental landscape, and it would be very unwise to not consider the moral weight of a given tulpa as equal to X/n where n is the number of members within their system
This is a problem that arises in any hypothetical where someone is capable of extremely fast reproduction, and is not specific to tulpas. So I don't think that invoking utility monsters is a good argument for why tulpas should only be counted as a fraction of a person.
Regarding your other points, I think that you take the view of narratives too far. What I see, hear, feel, and think, in other words my experiences, are real. (Yes, they are reducible to physics, but so is everything else on Earth, so I think it's fair to use the word "real" here.) I don't see in what way experiences are similar to a meme, and unlike what the word narrative implies, I don't think they are post-hoc rationalizations.
I know there are studies that show that people will often come up with post-hoc rationalizations for why they did something. However, there have been many instances in my life where I consciously thought about something and came to a conclusion which surprised me and changed my behavior, and where I remembered all the steps of my conscious reasoning, such that it seems very unlikely that the conscious chain of reasoning was invented post-hoc.
In addition, being aware of the studies, I've found that if I pay attention I can often notice when I don't actually remember why I did something and am just coming up with a plausible-seeming explanation, versus when I actually remember the thought process that led to a decision. For this reason I think that post-hoc rationalizations are a learned behavior and not fundamental to experience, personhood, or moral patiency.
We've all heard the idea that there exist two selves: the self that exists in your own mind, and the self that exists inside the perceptions of others.
Intentionally created 'tulpas' must be similar to the emulations of the many people I've closely interacted with: the ones lurking in my subconscious mind, instantiated via my intuitions of how they'd respond to a question or when I wonder what gifts they would appreciate.
How about dream characters? Is it wrong to murder dream characters, and should we strive to lengthen dream time to give them all a longer, more fulfilled life?
Even the morality of sci-fi brain emulation is murky to me, let alone the type of emulation we all do unconsciously ourselves. I'd have to hear a very convincing argument to separate tulpas that say "hi, I'm here and alive!" from dream characters that do the same thing, or from other illusions like ChatGPT.
↑ comment by ChristianKl · 2022-12-27T23:29:46.945Z · LW(p) · GW(p)
Intentionally created 'tulpas' must be similar to the emulations of the many people I've closely interacted with: the ones lurking in my subconscious mind, instantiated via my intuitions of how they'd respond to a question or when I wonder what gifts they would appreciate.
One difference is that the kind of emulation you have of other people doesn't tend to worry about its own existence. Tulpas tend to worry, unprompted, about their own existence.
8 comments
Comments sorted by top scores.
comment by Valentine · 2022-12-27T20:18:27.291Z · LW(p) · GW(p)
I don't really have an answer per se. Just a related story:
In a lucid dream many years ago, I was having trouble sort of clicking into my dream powers (flight, making objects levitate, etc.). It occurred to me that I wasn't conscious of creating the young woman who was standing next to me, which meant she had access to parts of my mind that I didn't.
So I turned to her and asked
"I'm having trouble getting my dream powers to work. Could you help me?"
She gave me some instructions (which I no longer remember) and walked into the next room while I tried to follow them.
After a minute or so I felt my omnipotence click in. I floated into the room where the woman had wandered off to and told her
"Thank you, that worked. I'm kind of a god here now, so is there anything I can do for you in return?"
She paused for a few moments thoughtfully and then replied
"If you could make it so I don't cease to exist when you wake up, I'd really appreciate that."
Replies from: Raemon
↑ comment by Raemon · 2022-12-28T08:26:23.644Z · LW(p) · GW(p)
"If you could make it so I don't cease to exist when you wake up, I'd really appreciate that."
Well, did you?
Do you remember any additional facts about the woman?
Replies from: Valentine
↑ comment by Valentine · 2022-12-31T04:33:09.039Z · LW(p) · GW(p)
Well, I found her request surprising. I was kind of stunned. After a moment I kind of fumbled out words like "Uh, I'm not sure how to do that. I'll… try?" But that was well outside the purview of dream powers I was used to.
I've done my best by remembering this story. One day I hope to get deep enough into lucid dreaming skill again that I can resurrect her.
And yeah, I remember roughly what she looked like and how she felt. I don't think she was high on details. But if I went back to that apartment with intent to encounter her, I'm sure the dreaming would recreate someone quite close to her.
Whether that would "really be" her gets into annoying philosophy of identity stuff that I don't think anyone really understands.
comment by Slider · 2022-12-27T21:13:01.717Z · LW(p) · GW(p)
A brief search seems to indicate that Buddhist literature on the topic exists.
I was also a bit confused about whether this is a purely "psychological percept" phenomenon. Claims of interpersonal detectability go up another level of weird.
The game Beyond: Two Souls can be understood as having a protagonist collective and personality, with a tulpa that has paranormal powers. With pop-culture memes about "possessing spirits" forming a natural cross-section of tulpas with telekinesis etc., I would find it very surprising if there were serious, mainstream-reputable discussion that deals with it.
comment by the gears to ascension (lahwran) · 2022-12-27T18:37:06.667Z · LW(p) · GW(p)
same way as any other part of the body, yes
Replies from: shminux
↑ comment by Shmi (shminux) · 2022-12-28T00:33:44.927Z · LW(p) · GW(p)
That's a bit glib. Most body parts are not self-aware, as far as we know.
Replies from: lahwran
↑ comment by the gears to ascension (lahwran) · 2022-12-28T01:12:56.436Z · LW(p) · GW(p)
hmm. I do in fact, without humor, think most body parts are independently moral patients, though; and I also think self-awareness is entirely optional in order for a system to be a moral patient. Instead, it need only have other-awareness and at least near-counterfactual ability to take coherent friendly action, which seems like a valid and useful description of internal co-protective agency across much of the body, and certainly throughout the brain.
(sidenote: I currently think tulpas are just one kind of plurality, and the neural patterns vary between types of multiplicity, with shared structure about how the multiple subnets interact but with different splits into subnetworks for different kinds. I don't want to bucket-error tulpa vs other kinds of neurological agentic multiplicity, I just think the various kinds of internal biological multiplicity share important structures, such as that all parts have significant moral patienthood.)
Perhaps the question is whether they should be granted separate decision-making rights? My view is that that's a question of whether the neurons that, in consensus, make up the smaller/"guest"/constructed tulpa plural component should have a separate right to the body they steer. In general, I'd say I only grant one brain's worth of body rights to a single brain, but a brain can host multiple agentic, coherent, and distinct personalities. When those agencies conflict, it's an internal fight, in principle like a conflict between one brain module and another, so I don't think the moral-patienthood evaluation is fundamentally different just because of a deeper split in agency and aesthetics between the parts.
(Another sidenote: afaict, personalities are normally stored in superposition across many modules, and the reason most people aren't multiple is that moods are far, far more connected to each other's neurocircuitry than personalities are to each other. I'm not a real neuroscientist, though, just a moderately well-read ML nerd, so I could have gotten this pretty badly wrong. In particular, DID plurality seems to be really intense disconnection, and afaict disordered plurality is basically defined by the internal incoherence between parts, whereas healthy plurality can be quite similar to DID in level of distinctness but with greater connection between parts as a result of internal friendship. I'm more or less a coherent single agent with lots of internal disagreements between modular parts, like most people appear to be, so I'm pretty sure any plural systems passing by would have Lots Of Critiques Of My Takes and maybe not want to spend the time to comment if they've already corrected too many people today. But here's my braindump, and hopefully it's close enough to on-point that at least my original comment's point is useful now.)
Replies from: shminux
↑ comment by Shmi (shminux) · 2022-12-28T02:24:07.155Z · LW(p) · GW(p)
Hmm, that would be an interesting take, "self-awareness is entirely optional in order for a system to be a moral patient. Instead, it need only have other-awareness and at least near-counterfactual ability to take coherent friendly action" might be worth a post. This does not seem like a common view.
I posted a separate answer discussing multiple identities in one body (having known rather closely several people with DID), seems like your take here is not very different. To the best of my understanding, it's more like several programs running at once on the same wetware, but, unlike with hardware, there is no clear separation between entities in terms of hardware used. The only competition is for shared resources, such as being in the foreground and interacting with the outside world directly, rather than through passive influence or being suspended or running headless. This is my observation though, I don't have first-hand experience, only second-hand.
Still, this is different from saying that, say, a thumb is a moral patient, or that a kidney is.