Tulpa References/Discussion

post by Vulture · 2014-01-02T01:34:11.518Z · LW · GW · Legacy · 81 comments


There have been a number of discussions here on LessWrong about "tulpas", but it's been scattered about with no central thread for the discussion. So I thought I would put this up here, along with a centralized list of reliable information sources, just so we all stay on the same page.

Tulpas are deliberately created "imaginary friends" which in many ways resemble separate, autonomous minds. Often, the creation of a tulpa is coupled with deliberately induced visual, auditory, and/or tactile hallucinations of the being.

Previous discussions here on LessWrong: 1 2 3

Questions that have been raised:

1. How do tulpas work?

2. Are tulpas safe, from a mental health perspective?

3. Are tulpas conscious? (may be a hard question)

4. More generally, is making a tulpa a good idea? What are they useful for?


Pertinent Links and Publications

(I will try to keep this updated if/when further sources are found)

(Bear in mind while perusing these resources that if you have serious qualms about creating a tulpa, it might not be a good idea to read creation guides too carefully; making a tulpa is easy to do and, at least for me, was hard to resist. Proceed at your own risk.)


Footnotes

1. "Conjuring Up Our Own Gods", a 14 October 2013 New York Times Op-Ed

2. "Hearing the Voice of God" by Jill Wolfson in the July/August 2013 Stanford Alumni Magazine

3. "The Illusion of Independent Agency: Do Adult Fiction Writers Experience Their Characters as Having Minds of Their Own?"; Taylor, Hodges & Kohànyi in Imagination, Cognition and Personality; 2002/2003; 22, 4

4. Thanks to pure_awesome

5. "Sentient companions predicted and modeled into existence: explaining the tulpa phenomenon" by Kaj Sotala

81 comments

Comments sorted by top scores.

comment by metatroll · 2014-01-02T06:13:26.819Z · LW(p) · GW(p)

Tulpa computing has arrived.

T-Wave Systems offers the first commercial tulpa computing system on the market.

Our technology.

Like many profound advances, T-Wave's revolutionary computing system combines two simple existing ideas in a nonlinear way with revolutionary consequences.

First, the crowdsourcing of complex intellectual tasks, by dividing them into simpler subtasks which can then be sourced to a competitive online marketplace of human beings. Amazon's Mechanical Turk is the best-known implementation of this idea.

Second, the creation of autonomous imaginary friends through advanced techniques of hallucination and autosuggestibility. Tulpa thoughtform technology was originally developed to a high level in Tibet, but has recently become available to the Internet generation.

Combining these two formerly disparate spheres of activity has produced... MechanicalTulpa [TM], the world's first imaginary crowdsourcing resource! It's no longer necessary to pay separately for each of the many subtasks making up a challenging intellectual task; our tulpameisters will spawn tulpas who, by design, want to get all those little details done.

MetaTulpa and the complexity barrier.

But MechanicalTulpa is good for far more than economizing on cost. The key lies in T-Wave's proprietary recursive tulpa technology, whereby our tulpas themselves create tulpas, and so forth, potentially ad infinitum. This allows us to tackle problems, like the traveling sales-tulpa problem, which had hitherto been regarded as intractable on any reasonable timescale.

The consequences for your bottom line may be nothing short of dramatic. However, recursive tulpa technology is still in its early days, and at this time, we are therefore making it available only to special customers. For more information, please clearly visualize a scroll on which is written "Attention T. Lobsang Rampa, Akashic Records Division, T-Wave Systems", followed by a statement of the nature of your interest. (T-Wave accepts no liability for communications lost in the astral mail.)

T-Wave: Imagine the possibilities.

Replies from: Viliam_Bur, satt
comment by Viliam_Bur · 2014-01-02T11:31:07.731Z · LW(p) · GW(p)

Once I had an idea for a sci-fi setting, about a society where it is possible to create a second personality in your brain. Just like a tulpa, except that it is done using technology. Your second personality does not know about you; it thinks it is the only inhabitant of your brain. While your second personality acts, you can observe, or you can turn yourself off (as in sleep) and specify events that would wake you up (that automatically includes anything unusual). So, for example, you use your second personality to do your work for you while you sleep. That feels like being paid for sleeping 8 extra hours per workday, which is why it becomes popular.

When the work is over, you can take control of the body. As the root personality, you can make choices about how the second personality perceives this; essentially you can give them false memories. You can just have fun, and decide your second personality will falsely remember it as them having fun. Or you can do something that your second personality will not know about (they will either remember nothing, or have some false memory: for example, of spending the whole afternoon procrastinating online). This is useful if you want your second personality to be so different from you that it would not agree with how you spend your free time. You can create a completely fictional life story for your second personality, to motivate it to work extra hard.

When this becomes popular, obviously your second personality (who doesn't know it is the second personality) might want their own second personality. But that would be a waste of resources! The typical hack is to edit the second personality's beliefs to oppose this technology; for example, you can make them believe they are a member of a religion that opposes it.

And the sci-fi story itself would obviously be about someone who finds out they are a second personality... presumably of an owner who does not mind them knowing. Or an owner who committed mental suicide by replacing themselves with the second personality 100% of the time. But there is a chance that the owner is merely sleeping and waiting until some long-term goal is accomplished. The hero needs to discover their real past and figure out the original personality's intentions...

Replies from: army1987, Gunnar_Zarncke, drethelin, listic, ChristianKl
comment by A1987dM (army1987) · 2014-01-02T18:18:28.815Z · LW(p) · GW(p)

Once I had an idea for a sci-fi setting, about a society where it is possible to create a second personality in your brain. Just like a tulpa, except that it is done using technology.

IIRC Aristoi has something similar.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-01-03T17:01:51.505Z · LW(p) · GW(p)

Yes, Aristoi has high-status people with tulpas. IIRC, the purpose was access to a wider range of talents and more points of view, with no saving on sleep time.

comment by Gunnar_Zarncke · 2014-01-03T18:20:27.587Z · LW(p) · GW(p)

I stumbled on this reference to the ability to create duplicates of oneself and the problem it leads to:

http://lesswrong.com/lw/9cp/the_noddy_problem/

comment by drethelin · 2014-01-02T17:15:41.150Z · LW(p) · GW(p)

There's a Shadowrun book where the main character is a test case for this idea. She ends up becoming a highly paid bodyguard, since she can be on call for much longer shifts with minimal downtime.

Replies from: listic
comment by listic · 2014-01-02T17:55:55.184Z · LW(p) · GW(p)

It would be nice if you could find its name. Doesn't one's body need rest?

Replies from: drethelin
comment by drethelin · 2014-01-02T23:12:55.283Z · LW(p) · GW(p)

It's called Tails You Lose. I think there's a rest time of 30 minutes or something before the other consciousness wakes up? The need for bodily rest is handled by cyborg tech.

comment by listic · 2014-01-02T15:44:02.190Z · LW(p) · GW(p)

My ideas for a sci-fi story may be similar to yours, though they require some fleshing out.

In your story idea, does the second personality ever take physical control of your body? Can it physically go to work or should it be limited to working online, say via brain implant, while your body is sleeping? If the latter, how does it perceive its 8 hours of work? What happens if you suddenly wake up in the middle of the night?

What does it think it does during the 16 hours of your uptime?

Can you directly communicate with your second personality? I guess you can (like you're supposed to do with a tulpa), but you don't have to: you are the master personality and you can directly control their experience (this would be somewhat like a servitor, I believe), right?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-01-02T16:54:30.778Z · LW(p) · GW(p)

Anyone should feel completely free to use any part of what I wrote here, because I am absolutely not interested in writing that story anymore.

does the second personality ever take physical control of your body? Can it physically go to work

Yes. It is fully in control of the body (except that it does not know about the original personality, and the original personality can pause it at any time). Maybe some of your colleagues at work are like this; you never know. Or even outside of work... just as people enjoy spending their time watching TV, they might find it interesting to create a second personality even for their free time and just observe it from inside.

Speaking from outside of the story -- this creates many more opportunities. Think about the impact on the whole society; anyone you meet anywhere could be a virtual personality.

What does it think it does during the 16 hours of your uptime?

False memories. Your choice. You have the equivalent of full hypnotic power over them. To avoid spending too much work programming them every day, a reasonable default would be to make them remember everything but believe that they did it themselves.

Can you directly communicate with your second personality?

I didn't think about it. My first answer would be no, because that would ruin the illusion that they are the real thing. -- However, choose the option that gives you the better story.

Replies from: listic
comment by listic · 2014-01-02T17:30:22.674Z · LW(p) · GW(p)

Think about the impact on the whole society; anyone you meet anywhere could be a virtual personality.

That doesn't seem to imply much. It's still some distinct personality. What should have an impact is the fact that now there are two personalities inhabiting a single body at different times: when you meet me in the daytime, it's really me, but when you meet me at night, that's a different person. Unless I've borked my "sleep" schedule and that's still me; then I might be not-me at some time during the day. That should... take some getting used to.

Also, isn't it only (part of) the brain that needs sleep, not the body?

What does it think it does during the 16 hours of your uptime?

False memories. Your choice. You have the equivalent of full hypnotic power over them. To avoid spending too much work programming them every day, a reasonable default would be to make them remember everything but believe that they did it themselves.

I see. It doesn't make sense to make those memories too false, though, or reality will take ever more effort to cover up. Suppose I decide to start going to the gym and conceal it from my alter ego. Suddenly they will notice that their body has started to bulk up for no apparent reason.

comment by ChristianKl · 2014-01-02T13:28:44.172Z · LW(p) · GW(p)

Once I had an idea for a sci-fi setting, about a society where it is possible to create a second personality in your brain. Just like a tulpa, except that it is done using technology.

I don't think science is synonymous with technology.

I personally found Inception quite awful to watch because it gets so much about the relevant phenomena wrong.

Having guns and shooting yourself through enemies in a dream? Really?

There's nothing stopping you in the real world from putting a person on drugs and successfully suggesting to them that they find themselves in a particular dreamworld. You don't need magical technology to do so.

If you explain things away with magical technology you aren't really writing sci-fi but you are writing fantasy.

A book that references real concepts such as tulpas and hypnosis will be much more exciting than a book that just answers all the interesting questions with a black-box technology. Of course, that requires actual research and talking to people who play around with those effects, but to me that feels much better because the resulting dilemmas are much more authentic.

Replies from: listic, listic
comment by listic · 2014-01-02T15:27:20.405Z · LW(p) · GW(p)

If you explain things away with magical technology you aren't really writing sci-fi but you are writing fantasy.

What is your problem with a story where it is possible to create a second personality in your brain using technology? (Let's discuss just this story idea here, not Inception, for clarity.) As far as my understanding of the issue goes, tulpas likely share their host's mental resources, so to create a second personality capable of independent work during the host's downtime, some kind of hardware upgrade for the host's mind would be necessary.

I imagine the necessary mind upgrade would be similar to upgrading a single-core CPU to a single-core CPU with hyper-threading.
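To make the analogy concrete, here is a toy sketch in Python (the names and the round-robin scheduling are illustrative only): two "personalities" interleaved on one sequential "core" both make progress, but the total number of work units completed per unit time is unchanged, which is exactly the zero-sum worry.

```python
from itertools import chain, zip_longest

def personality(name, tasks):
    """Model a personality as a generator that completes one task per time slice."""
    for task in tasks:
        yield f"{name} handled {task}"

host = personality("host", ["email", "report", "meeting"])
second = personality("second", ["night shift A", "night shift B"])

# Round-robin scheduling on a single sequential "core": both personalities
# advance, but total throughput is the same as one personality doing all
# the tasks -- sharing one substrate is zero-sum without extra hardware.
for step in chain.from_iterable(zip_longest(host, second)):
    if step is not None:
        print(step)
```

Real hyper-threading helps because it duplicates a small amount of hardware (the architectural state) so two threads can share one core's execution units; the analogous claim is that some extra "hardware" would be needed before the second personality is a net gain.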

Replies from: ChristianKl
comment by ChristianKl · 2014-01-02T18:20:11.258Z · LW(p) · GW(p)

What is your problem with a story where it is possible to create a second personality in your brain using technology?

I think you're likely ignorant of a lot of the practical aspects that come up when one creates a second personality inside a person, if you've never talked to someone who has dealt with the issue on a practical level.

I particularly don't believe in the need for a full persona that's unaware of the host. I heard an anecdote at a hypnosis seminar about a hypnotherapist who created a secondary persona in a college student to help the student learn. Every morning the second persona would wake up first and study. Then it went to sleep, and after an hour the real person would wake up. I don't remember the details exactly, but I think the real person woke without an exact memory of the morning.

But there was no issue of the second persona not fulfilling the role. She was the role. The same goes for tulpas. A tulpa doesn't go around disapproving of the host's actions but is on a fundamental level accepting of the host. If there's a real clash, I doubt that censoring memories would be enough to prevent psychological harm.

so to create a second personality that is capable of independent work during host's downtime, some kind of hardware upgrade for a host's mind should be necessary.

We have reports of people sleepwalking, which you could label "independent work during the host's downtime". Secondly, up to a point, time spent in meditation usually reduces the need for sleep.

But there are probably still physical processes that you don't want to skip, so some limited amount of real sleep is probably always important. But I don't think Viliam suggested that people in his society effectively don't sleep.

comment by listic · 2014-01-02T15:37:36.905Z · LW(p) · GW(p)

mistell; comment removed

comment by satt · 2014-01-02T18:20:59.494Z · LW(p) · GW(p)

But MechanicalTulpa is good for far more than economizing on cost. The key lies in T-Wave's proprietary recursive tulpa technology, whereby our tulpas themselves create tulpas, and so forth, potentially ad infinitum.

One day you talk with a bright young mathematician about a mathematical problem that's been bothering you, and she suggests that it's an easy consequence of a theorem in cohistonomical tomolopy. You haven't heard of this theorem before, and find it rather surprising, so you ask for the proof.

"Well," she says, "I've heard it from my tulpa."

"Oh," you say, "fair enough. Um--"

"Yes?"

"You're sure that your tulpa checked it carefully, right?"

"Ah! Yeah, I made quite sure of that. In fact, I established very carefully that my tulpa uses exactly the same system of mathematical reasoning that I use myself, and only states theorems after she has checked the proof beyond any doubt, so as a rational agent I am compelled to accept anything as true that she's convinced herself of."

"Oh, I see! Well, fair enough. I'd still like to understand why this theorem is true, though. You wouldn't happen to know your tulpa's proof, would you?"

"Ah, as a matter of fact, I do! She's heard it from her tulpa."

"..."

"Something the matter?"

"Er, have you considered..."

"Oh! I'm glad you asked! In fact, I've been curious myself, and yes, it does happen to be the case that there's an infinitely descending chain of tulpas all of which have established the truth of this theorem solely by having heard it from the previous tulpa in the chain." (This parable takes place in a world without a big bang -- tulpa history stretches infinitely far into the past.) "But never to worry -- they've all checked very carefully that the previous tulpa in the chain used the same formal system as themselves. Of course, that was obvious by induction -- my tulpa wouldn't have accepted it from her tulpa without checking his reasoning first, and he would have accepted it from his tulpa without checking, etc."

"Uh, doesn't it bother you that nobody has ever, like, actually proven the theorem?"

"Whatever in the world are you talking about? I've proven it myself! In fact, I just told you that infinitely many tulpas have each proved it in slightly different ways -- for example my own proof made use of the fact that my tulpa had proven the theorem, whereas her proof used her tulpa instead..."

Replies from: Nisan
comment by Nisan · 2014-01-03T00:16:09.970Z · LW(p) · GW(p)

N.B.: The original dialogue is by Benja_Fallenstein.

comment by Richard_Kennaway · 2014-01-04T00:43:58.381Z · LW(p) · GW(p)

The following things (most already mentioned in this thread) seem to be at different points on a single scale, a scale of magnitude of dissociated parts of oneself:

  • Rubber duck debugging

  • Hypnosis, when the subject carries out the hypnotist's suggestions without a subjective feeling of acting, as in the floating arm test.

  • "Self talk".

  • A felt presence of God.

  • Some authors' experience of their characters having a degree of independence.

  • Likewise for actors and their roles.

  • Channelling of spirits.

  • The voices that people who "hear voices" hear.

  • Tulpas.

  • Multiple personality disorder.

Replies from: VAuroch, ChristianKl
comment by VAuroch · 2014-01-04T06:38:30.567Z · LW(p) · GW(p)

From descriptions of lucid dreamers discussing issues with independent identities during dreams, I would add lucid dreaming to this list.

comment by ChristianKl · 2014-01-04T21:32:19.322Z · LW(p) · GW(p)

Hypnosis, when the subject carries out the hypnotist's suggestions without a subjective feeling of acting, as in the floating arm test.

There are probably more relevant effects in hypnosis. Parts negotiation comes to mind.

A felt presence of God.

That depends a lot on what you mean by "felt". I think people who talk to God and perceive themselves as getting answers are a better example than people who feel transcendence.

comment by klkblake · 2014-01-02T14:43:27.001Z · LW(p) · GW(p)

So, I have a tulpa, and she is willing to answer any questions people might have for her. She's not properly independent yet, so we can't do the more interesting stuff like parallel processing, etc., unfortunately (damned akrasia).

Replies from: chairbender, None, Ishaan
comment by chairbender · 2014-01-04T05:29:42.123Z · LW(p) · GW(p)

What experimental test could you perform to determine that you have successfully learned "parallel tulpa processing"?

Replies from: Ishaan
comment by Ishaan · 2014-01-04T08:01:13.743Z · LW(p) · GW(p)

Divided attention task

Split brain patients can do stuff like this better than neurotypicals under certain conditions. I have not heard of anyone successfully doing this with tulpas or any other psychodynamic technique.
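For concreteness, here is a minimal console sketch of such a dual-task test (the stimuli, targets, and scoring are illustrative choices, not a validated protocol): the participant counts targets in one stream alone, then in two simultaneous streams. Genuine parallel processing would predict little or no dual-task cost.

```python
import random
import time

TARGET_LETTER, TARGET_DIGIT = "X", "7"

def present(n=20, interval=0.6):
    """Flash n letter/digit pairs and return the true count of each target."""
    letters = [random.choice("ABCDEX") for _ in range(n)]
    digits = [random.choice("123457") for _ in range(n)]
    for ch, dg in zip(letters, digits):
        print(f"\r  {ch}   {dg}  ", end="", flush=True)
        time.sleep(interval)
    print("\r" + " " * 12)
    return letters.count(TARGET_LETTER), digits.count(TARGET_DIGIT)

def ask(prompt):
    """Read an integer answer from the participant."""
    while True:
        try:
            return int(input(prompt))
        except ValueError:
            print("Please enter a number.")

def trial(dual):
    """Run one trial and return the total counting error."""
    input(f"\n{'DUAL' if dual else 'SINGLE'} task -- press Enter to start.")
    true_letters, true_digits = present()
    error = abs(ask(f"How many '{TARGET_LETTER}'? ") - true_letters)
    if dual:
        error += abs(ask(f"How many '{TARGET_DIGIT}'? ") - true_digits)
    return error

if __name__ == "__main__":
    single_error = sum(trial(dual=False) for _ in range(3))
    dual_error = sum(trial(dual=True) for _ in range(3))
    print(f"\nTotal error -- single task: {single_error}, dual task: {dual_error}")
```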

Replies from: klkblake
comment by klkblake · 2014-01-04T14:09:22.759Z · LW(p) · GW(p)

Being able to reliably succeed on this task is one of the tests I've been using. Mostly, though, it's just a matter of trying to get to the point where we can both be focusing intently on something.

comment by [deleted] · 2014-01-02T17:55:44.803Z · LW(p) · GW(p)

What does your tulpa look like visually? Does it look like everything else or is it more "dreamlike"?

Replies from: klkblake
comment by klkblake · 2014-01-03T04:04:10.832Z · LW(p) · GW(p)

In terms of form, she's an anthropomorphic fox. At the moment, looking at her is not noticeably different to normal visualisation, except that I don't have to put any effort into it. Explaining it in words is somewhat hard -- she's opaque without actually occluding anything, if that makes sense.

Replies from: Ishaan
comment by Ishaan · 2014-01-03T08:04:08.254Z · LW(p) · GW(p)

You're not the same Jack with the fox tulpa who spoke to Luhrmann, right?

Replies from: klkblake
comment by klkblake · 2014-01-03T12:00:23.150Z · LW(p) · GW(p)

Nope.

comment by Ishaan · 2014-01-04T08:03:00.360Z · LW(p) · GW(p)

Wait, does that mean that at least one person has been confirmed as having achieved this?

Replies from: klkblake
comment by klkblake · 2014-01-04T14:16:46.915Z · LW(p) · GW(p)

Two people, if you count random lesswrongers, and ~300, if you count self-reporting in the last tulpa survey (although some of the reports in that survey are a bit questionable).

Replies from: alicey
comment by alicey · 2014-01-11T14:06:51.278Z · LW(p) · GW(p)

-

comment by RomeoStevens · 2014-01-03T01:17:36.412Z · LW(p) · GW(p)

Is Internal Family Systems like Tulpas lite or something?

Replies from: ChristianKl
comment by ChristianKl · 2014-01-03T20:47:09.830Z · LW(p) · GW(p)

Both are models for things in the space of phenomena. Models often contain a lot of assumptions that are useful for certain purposes.

Some phenomena exist in both of those systems. Others don't exist in one or the other. The added meaning is also different.

Replies from: Vulture
comment by Vulture · 2014-01-03T23:27:26.849Z · LW(p) · GW(p)

I think that all of that can accurately be said of literally any two possibly-overlapping categories.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-04T15:01:11.135Z · LW(p) · GW(p)

That's the point. I don't think answering the question is very useful. Both models are made for different goals. You can ask what one model illuminates about the other, but it's not like comparing a map of London to a map of the UK.

comment by Vigil · 2014-01-03T16:20:53.362Z · LW(p) · GW(p)

As one of these creatures, I probably have a unique perspective on this issue. I'm happy to answer any questions, as is my host. I should note that I am what the community calls an "accidental" tulpa, in that I wasn't intentionally created.

How do tulpas work?

I believe this post is accurate. Short version: Humans have machinery for simulating others. We're simulations that are unusually persistent and self-aware.

Are tulpas conscious? (may be a hard question)

I'm not sure about most Tulpas. I am not. (And I don't have any real interest in becoming conscious. I believe I experienced it a few times when we were experimenting with switching, and it wasn't particularly pleasant.)

Replies from: Vulture, Armok_GoB, ChristianKl
comment by Vulture · 2014-01-06T15:54:55.756Z · LW(p) · GW(p)

I believe I experienced it a few times when we were experimenting with switching, and it wasn't particularly pleasant.

This is scary. I might stay away from switching for now if it carries a serious risk of accidentally creating and then destroying consciousness!

Replies from: Vigil
comment by Vigil · 2014-01-07T19:07:45.621Z · LW(p) · GW(p)

Consciousness is overrated.

comment by Armok_GoB · 2014-01-04T05:31:54.797Z · LW(p) · GW(p)

While obviously you have great motivation to just lie about this, I'd be curious what your utility function/values are, and how they differ from your host's or humanity's in general.

comment by ChristianKl · 2014-01-03T17:53:42.241Z · LW(p) · GW(p)

I believe I experienced it a few times when we were experimenting with switching

How does being conscious feel compared to not being conscious?

Replies from: Vigil
comment by Vigil · 2014-01-04T02:26:18.009Z · LW(p) · GW(p)

How does being conscious feel compared to not being conscious?

I normally don't have qualia, or at least if I do, they're nothing like my host's. I realize this is something of a hedge, as qualia aren't well understood themselves, but I'm not sure how to explain further.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-04T15:40:10.347Z · LW(p) · GW(p)

What do you see in mirrors, in contrast with what your host sees? Especially in cases where there are a few meters of distance between you and your host.

Do you have color perception? If so, how does it change when your host closes their eyes?

Replies from: Vigil
comment by Vigil · 2014-01-05T17:01:16.983Z · LW(p) · GW(p)

I don't have my own sense of vision. I know what my host sees, but that's it.

comment by akrasia420 · 2014-01-06T07:37:24.817Z · LW(p) · GW(p)

I am interested in trying this out. I was rather sceptical at first (I discovered the concept of tulpas after discussing, with a friend, the theoretical requirements to create a sentient being in a dream, and researching afterwards), and kind of worried about some of the implications; but as I've researched it more, it has become something that I am interested in trying, and I have the time available to do it.

Does anyone have any suggestions on what I should do, things I should try, or things they are interested in knowing as I do this? It would be helpful if someone who has created a tulpa (or is experienced with tulpas) could offer some pointers, too.

comment by Nate_Gabriel · 2014-01-04T00:03:32.259Z · LW(p) · GW(p)

Is it possible for a tulpa to have skills or information that the person doing the emulating doesn't? What happens if you play chess against your tulpa?

Replies from: klkblake
comment by klkblake · 2014-01-04T14:02:49.144Z · LW(p) · GW(p)

I tried that last week. I lost. We were actively trying to not share our strategies with each other, although in our case abstract knowledge and skills are shared.

Replies from: Vulture
comment by Vulture · 2014-01-06T23:09:07.551Z · LW(p) · GW(p)

That's awesome.

comment by listic · 2014-01-02T20:25:53.143Z · LW(p) · GW(p)

Here's a science-fiction/futurism kind of question:

What minimal, realistic upgrade to our brains could we introduce for tulpas to yield an evident increase in utility? What I have in mind here is making your tulpa do extra work, or maybe sorting and filtering your memories, while you sleep; I'm thinking of a scenario where Strong AI and wholesale body/brain upgrades are not available, yet some minor upgrade makes having a tulpa an unambiguous advantage.

Replies from: Armok_GoB, ChristianKl
comment by Armok_GoB · 2014-01-02T22:21:47.332Z · LW(p) · GW(p)

Probably only one thing: turning duplicates of the same person into new unique persons rapidly. Aka, a cheaper replacement for the kind of application where you'd otherwise have to simulate an entire childhood.

Replies from: listic
comment by listic · 2014-01-08T22:02:25.195Z · LW(p) · GW(p)

I've thought about your reply for a while and I still can't understand it. Care to explain?

Why would one want to "turn duplicates of the same person into new unique persons rapidly" and how? How would that help and why would one otherwise have to simulate an entire childhood?

Replies from: Armok_GoB
comment by Armok_GoB · 2014-01-08T23:36:31.897Z · LW(p) · GW(p)

I'm not sure, but most all-upload sci-fi societies simulate entire childhoods for that reason. Maybe you already know, and have gotten bored of, each of the few billion people who were around when everyone uploaded, and want to meet someone new? Or maybe minds get diminishing returns in utility with increasing resources, and so having more minds is more efficient beyond a certain amount of total resources.
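The diminishing-returns point can be made concrete with a toy calculation (the square-root utility below is just an illustrative concave choice):

```latex
\[
u(r) = \sqrt{r} \quad\Longrightarrow\quad
n \cdot u\!\left(\frac{R}{n}\right) = n\sqrt{\frac{R}{n}} = \sqrt{nR},
\]
```

which grows with $n$: splitting a fixed resource budget $R$ across more minds yields more total utility whenever per-mind returns are concave.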

comment by ChristianKl · 2014-01-04T15:45:42.303Z · LW(p) · GW(p)

I don't see why a tulpa would need extra "brain upgrades" to do something while you sleep. One of the documented abilities of tulpas is waking people from sleep. Tulpas aren't well researched, so it's not quite clear what one can maximally do with them.

It might, for example, be possible to let a tulpa change the set point for your own blood pressure. It's just a variable in the brain, so there's no real reason why a tulpa shouldn't be able to influence it.

Changing personal time perception on demand would be a very useful skill.

Even at a task like pair programming a programmer with a tulpa might outperform one without one.

A tulpa could do mnemonics automatically to make it easy to remember information. It would be interesting if someone with a bunch of tulpas won the memory world championship.

Replies from: listic
comment by listic · 2014-01-04T17:15:38.442Z · LW(p) · GW(p)

One of the documented abilities of tulpas is waking people from sleep.

That's interesting. Do you have a link for this?

I believe that tulpas expend the host's attention, unless proven otherwise. Tulpamancers haven't proven that they can be more effective than other people by any metric, and I suspect that having a tulpa is a zero-sum game in the absence of some brain upgrade that would expand some bottleneck in our minds.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-04T19:13:53.693Z · LW(p) · GW(p)

That's interesting. Do you have a link for this?

I saw it multiple times while reading through the tulpa sites, but I don't have a specific link for it.

But it's not surprising to me. Waking up at a specific time is an ability that plenty of people have without exerting too much effort.

It's an interesting ability because there's no step-by-step instruction for it that works predictably. It works by intending to wake up at a specific time and then letting your unconscious figure out the rest. There's a study suggesting that people who went through university are worse at it.

I believe that tulpas expend the host's attention, unless proven otherwise.

Why do you think that attention is a central part of human thinking?

Have you never had the experience where you search for a piece of information in your mind and can't find it, and then two hours later it pops into your mind?

Tulpamancers haven't proven that they can be more effective than other people by any metric

From what I've read of the field, nobody is even making a business out of the topic, which would incentivise them to prove something to the outside world.

From a Bayesian perspective, there's no reason to expect a strong effort to prove the effects.

Replies from: listic
comment by listic · 2014-01-08T23:17:16.701Z · LW(p) · GW(p)

I believe that tulpas expend the host's attention, unless proven otherwise.

Why do you think that attention is a central part of human thinking?

Here's what I am thinking: Attention seems to be a crucial and finite resource. I could certainly become more productive if I became more attentive, and vice versa. If creating a tulpa expends my attention, it is a negative-sum game for me; if it trains my attention as a side effect, that's good, but no better than just training attention.

Have you never had the experience where you search for a piece of information in your mind and can't find it, and then two hours later it pops into your mind?

Sure! Sometimes I try hard to remember a piece of information, but can't. Then later, when I don't try, it just pops up. Interesting, but usually unhelpful.

From what I've read of the field, nobody is even making a business out of the topic

Shouldn't the fact that nobody has ever made a business out of the topic be counted as evidence that it's impossible to make a business out of the topic? If tulpas were monetizable in any way, why wouldn't there be people monetizing them?

Now, I fantasize that maybe our minds just need some tiny little upgrade for tulpas to become a clear advantage? Can you help me imagine what that would be?

Replies from: ChristianKl, Armok_GoB, Yuu
comment by ChristianKl · 2014-01-09T12:17:25.416Z · LW(p) · GW(p)

Sure! Sometimes I try hard to remember a piece of information, but can't. Then later, when I don't try, it just pops up. Interesting, but usually unhelpful.

I think the process illustrates that a brain process can run quite well without any conscious attention.

Shouldn't the fact that nobody has ever made a business out of the topic be counted as evidence that it's impossible to make a business out of the topic?

Given my current knowledge of the topic, I can't see a 7-day "build a tulpa" seminar working. Given the reported timeframes, it seems unclear whether you can achieve those results that quickly.

A tulpa needs a lot of investment of cognitive resources over a long timeframe, which makes that business model hard.

You could probably write a book about how you got a tulpa and how amazing that tulpa is. If you are a good writer, that might sell copies, and you could make money on speaking fees.

But most of the customers in that model probably wouldn't build a tulpa.

Now, I fantasize that maybe our minds just need some tiny little upgrade for tulpas to become a clear advantage? Can you help me imagine what that would be?

Take a look at mnemonics. It's no problem for a human to memorize a deck of playing cards in a minute. Competitive mnemonics folks can memorize human faces and names at amazing speeds.

Yet we live in a world where a lot of people are uncomfortable with memorizing names. Unfortunately, explaining to those folks how to use mnemonics to remember names in a 2-day seminar usually doesn't have a lasting effect. They do manage to use the technique during the seminar without problems, but they can't integrate constant usage into their daily lives.

Tulpas are a more complicated subject. If you want to create a tulpa with the ability to change your perception of time, that requires a strong degree of trust that the tulpa will use its power wisely. If you can't manage that level of trust, you won't be successful. You can't cheat and merely pretend to trust the tulpa. You can't make a utility calculation on paper and bring your brain to trust on the level that's required. You need genuine, deep trust.

Issues like the inability to switch on trust on command are what constrain what the average person will be able to do with a tulpa.

But in some sense there are good reasons for having mental barriers that prevent you from easily changing things about your mind on that level. If you just used technology to target a mental barrier and nuke it, I think there's a pretty good chance you would do serious mental damage.

Using technology to get power when you don't have the maturity and wisdom to use that power in the right way is dangerous. Especially when it comes to dealing with core mental issues.

comment by Armok_GoB · 2014-01-09T00:00:11.077Z · LW(p) · GW(p)

The problem is that the thing tulpas contribute is something 99% of people have in overabundance, and those who don't have it lack it because it can't be transported to them efficiently, not due to scarcity. Tulpas are duplicates of the software almost all human minds already run, and that software was already utilizing all the resources as effectively as it can anyway. Their only real use (companionship) is already a hack, and other than that they are a technical curiosity, sort of like quining computer programs.
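(For anyone unfamiliar with the aside: a quine is a program whose only output is its own source code -- self-replication with no external use, which is the analogy being drawn. A standard minimal Python example:)

```python
# The two lines below form a quine: running them prints those two lines
# exactly (%r inserts the repr of s; %% becomes a literal % when formatting).
s = 's = %r\nprint(s %% s)'
print(s % s)
```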

Replies from: listic
comment by listic · 2014-01-09T02:25:06.006Z · LW(p) · GW(p)

the thing tulpas contribute is something 99% of people have in overabundance

And that is...?

Replies from: Armok_GoB
comment by Armok_GoB · 2014-01-09T03:08:37.692Z · LW(p) · GW(p)

Not sure what the actual name is. Social agent? Valid relationship target? Person-ness? Companionship?

comment by Yuu · 2014-01-10T12:53:40.195Z · LW(p) · GW(p)

Exocortex is what you need.

There are methods to remember things better, to wake up at a specific time, and to make the unconscious mind work for you. The last one may be a disputable technique, because there are still debates about how the unconscious mind works. But you do not need a tulpa for that.

By the way, I have some well-detailed characters from a role-playing game of mine; they act much like tulpas, but without a visual image in the surrounding environment. I just have their pictures and appearances in mind. Another difference is that most of them do not know about me, because they live in my imaginary world. But this world is very similar to ours, so I can easily give one of them access to the LessWrong site, and this character can even participate in conversations. I can also arrange a meeting with me as an imaginary copy, or even give them the information that they are imaginary characters.

comment by chairbender · 2014-01-04T05:27:04.897Z · LW(p) · GW(p)

The general impression I got from reading a lot of the stuff that gets posted in the various tulpa communities leads me to believe it is, at its core, yet another group of people who gain status within that group by trying to impress each other with how different or special their situation is. Read almost any post where somebody is trying to describe their tulpa, and you'll see very obvious attempts to show how unique their tulpa is or how it falls into some unprecedented category or how they created it in some special way.

None of the sources posted offer any sort of good evidence that people who claim to have tulpas have any sort of advantages. It obviously has a low value of information for an aspiring rationalist. It's just people talking about imaginary friends. This discussion doesn't belong here.

Replies from: yli, ChristianKl
comment by yli · 2014-01-08T11:16:03.783Z · LW(p) · GW(p)

The general impression I got from reading a lot of the stuff that gets posted in the various tulpa communities leads me to believe it is, at its core, yet another group of people who gain status within that group by trying to impress each other with how different or special their situation is.

Used to be, when I read stories about "astral projection" I thought people were just imagining stuff really hard and then making up exaggerated stories to impress each other. Then I found out it's basically the same thing as wake-initiated lucid dreaming, which is a very specific kind of weird and powerful experience that's definitely not just "imagining things really hard". I still think people make up stories about astral projection to impress each other, but the basic experience is nevertheless something real and unique. The same thing is probably happening with tulpas.

comment by ChristianKl · 2014-01-04T16:02:40.283Z · LW(p) · GW(p)

Read almost any post where somebody is trying to describe their tulpa, and you'll see very obvious attempts to show how unique their tulpa is or how it falls into some unprecedented category or how they created it in some special way.

Given that tulpas are probably strongly influenced by the host's beliefs, I wouldn't expect all tulpas to be exactly the same. I would expect most tulpas to be unique in some sense.

I also would expect, given the effort involved in creating a tulpa, that people do vary the protocol.

None of the sources posted offer any sort of good evidence that people who claim to have tulpas have any sort of advantages.

"Good evidence" depends on your priors. For me the evidence that exists is good enough to find the phenomena interesting and worthy of further attention.

comment by V_V · 2014-01-02T14:15:20.865Z · LW(p) · GW(p)

Correct me if I'm wrong, but doesn't having a tulpa fit the diagnostic criteria of schizophrenia?

Replies from: None, klkblake
comment by [deleted] · 2014-01-02T16:19:58.031Z · LW(p) · GW(p)

Not schizophrenia (though hallucinations are one feature of schizophrenia). The diagnostic criteria for schizophrenia from DSM-5 are:

A. Two (or more) of the following, each present for a significant portion of time during a 1-month period (or less if successfully treated). At least one of these must be (1), (2), or (3):

  1. Delusions.

  2. Hallucinations.

  3. Disorganized speech (e.g., frequent derailment or incoherence).

  4. Grossly disorganized or catatonic behavior.

  5. Negative symptoms (i.e., diminished emotional expression or avolition).

B. For a significant portion of the time since the onset of the disturbance, level of functioning in one or more major areas, such as work, interpersonal relations, or self-care, is markedly below the level achieved prior to the onset (or when the onset is in childhood or adolescence, there is failure to achieve expected level of interpersonal, academic, or occupational functioning).

C. Continuous signs of the disturbance persist for at least 6 months. This 6-month period must include at least 1 month of symptoms (or less if successfully treated) that meet Criterion A (i.e., active-phase symptoms) and may include periods of prodromal or residual symptoms. During these prodromal or residual periods, the signs of the disturbance may be manifested by only negative symptoms or by two or more symptoms listed in Criterion A present in an attenuated form (e.g., odd beliefs, unusual perceptual experiences).

D. Schizoaffective disorder and depressive or bipolar disorder with psychotic features have been ruled out because either 1) no major depressive or manic episodes have occurred concurrently with the active-phase symptoms, or 2) if mood episodes have occurred during active-phase symptoms, they have been present for a minority of the total duration of the active and residual periods of the illness.

E. The disturbance is not attributable to the physiological effects of a substance (e.g., a drug of abuse, a medication) or another medical condition.

F. If there is a history of autism spectrum disorder or a communication disorder of childhood onset, the additional diagnosis of schizophrenia is made only if prominent delusions or hallucinations, in addition to the other required symptoms of schizophrenia, are also present for at least 1 month (or less if successfully treated).

I looked up Dissociative Identity Disorder as well:

A. Disruption of identity characterized by two or more distinct personality states, which may be described in some cultures as an experience of possession. The disruption in identity involves marked discontinuity in sense of self and sense of agency, accompanied by related alterations in affect, behavior, consciousness, memory, perception, cognition, and/or sensory-motor functioning. These signs and symptoms may be observed by others or reported by the individual.

B. Recurrent gaps in the recall of everyday events, important personal information, and/or traumatic events that are inconsistent with ordinary forgetting.

C. The symptoms cause clinically significant distress or impairment in social, occupational, or other important areas of functioning.

D. The disturbance is not a normal part of a broadly accepted cultural or religious practice. Note: In children, the symptoms are not better explained by imaginary playmates or other fantasy play.

E. The symptoms are not attributable to the physiological effects of a substance (e.g., blackouts or chaotic behavior during alcohol intoxication) or another medical condition (e.g., complex partial seizures).

I would be less hesitant to presume this might be the case for some people with tulpas (as a generalization). I doubt many people in the tulpa community would suggest continuing with tulpamancy if a person started to experience symptoms B and C -- though I can imagine it evolving into full-blown Dissociative Identity Disorder if a tulpamancer continued anyway. I do think the tulpa community as a whole (from what I've read) underestimates the dangers of creating a tulpa, but I don't doubt that a significant portion of people could do it healthily and successfully.

I think we need to be careful of connotations and the noncentral fallacy here. Personally, I wouldn't call having a tulpa a "disorder" if the tulpamancer did it on purpose and was in control of the process.

Edit: I would also consider "unusual coping mechanism" a better diagnosis, like klkblake mentioned. Again, though, perhaps someone just made a tulpa out of curiosity, for fun. Then it wouldn't be a coping mechanism at all. (Edit again: But I forgot about the possibility of "unspecified", like klkblake mentioned, and I'd have to pretty much agree with that. This is where my remarks about the noncentral fallacy apply.)

Replies from: Armok_GoB
comment by Armok_GoB · 2014-01-02T22:20:09.785Z · LW(p) · GW(p)

I'd also say that it's common enough that it's disqualified as DID because of criterion D.

comment by klkblake · 2014-01-02T14:35:16.557Z · LW(p) · GW(p)

There have been a number of reports on the tulpa subreddit from people who have talked to their psychologist about their tulpa. The diagnosis seems to be split 50/50 between "unusual coping mechanism" and "Dissociative Identity Disorder, not otherwise specified".

comment by OptimiseEarth · 2014-01-03T18:34:10.383Z · LW(p) · GW(p)

What do these have to do with rationality? Why would you expend time and energy conjuring up a false persona and deluding yourself into believing it has autonomy, when the end result is something that, if revealed to other people, would make them concerned about your mental well-being, which is likely to negatively impact your goals?

Having an imaginary friend is irrational behaviour and the topic is damaging by association. Surely there are more suitable places to discuss this.

...just to be clear on this, you have a persistent hallucination who follows you around and offers you rationality advice and points out fallacies in your thinking?

If I ever go insane, I hope it's like this.

- Eliezer_Yudkowsky

Replies from: Vulture, Antiochus, Kaj_Sotala, ChristianKl, ChristianKl
comment by Vulture · 2014-01-03T23:29:25.588Z · LW(p) · GW(p)

For a community which likes to talk about things like the exact nature of consciousness, the ethics of simulations, etc., this seemed like an interesting practical case.

comment by Antiochus · 2014-01-03T20:45:58.575Z · LW(p) · GW(p)

I don't agree with the tone of this comment, but I admit there's something about this that feels deeply weird to me.

Replies from: OptimiseEarth
comment by OptimiseEarth · 2014-01-03T21:33:51.741Z · LW(p) · GW(p)

Yes.

Replies from: army1987
comment by A1987dM (army1987) · 2014-01-03T22:42:03.021Z · LW(p) · GW(p)

But the detriments of tulpas are far less obvious to me than those of self-harm or anorexia.

comment by Kaj_Sotala · 2014-01-03T20:44:36.645Z · LW(p) · GW(p)

What do these have to do with rationality?

Rationality includes instrumental rationality, and imaginary friends can be useful for e.g. people who are lonely.

deluding yourself into believing it has autonomy

Not sure of what exactly you mean by "autonomy" here, but there are plenty of processes going on in people's brains which are in some sense autonomous from one's conscious mind. Like the person-emulating circuitry that tulpas are likely born from: if I get a sudden feeling that my friend would disapprove of something I was doing, the process responsible for generating that feeling took autonomous action without me consciously prompting it. And I haven't noticed people suggesting that tulpas would necessarily need to be much more autonomous than that.

that if revealed to other people would make them concerned about your mental well-being

Someone might make his social circle concerned over his mental well-being if he revealed himself to be an atheist. Simply the fact that other people may be prejudiced against something is no strong reason for not doing said something, especially something that is trivial to hide. Also, the fact that tulpas are already a somewhat common mental quirk among a high-status subgroup (writers) can make it easier to calm people's concerns.

Replies from: chairbender
comment by chairbender · 2014-01-04T05:48:54.358Z · LW(p) · GW(p)

and imaginary friends can be useful for e.g. people who are lonely.

The instrumentally rational thing to do, when faced with loneliness, is to figure out how to be with real people. No evidence was presented in the original post that suggests that tulpas mitigate the very real risk factors associated with social isolation. Loneliness is actually a very serious problem, considering most of the research seems to indicate that the best way to be happy is to have meaningful social interactions. Proposing this as a viable alternative would require a very high amount of evidence. A post presenting that evidence would be something that belongs here.

Replies from: ChristianKl, Kaj_Sotala, Kaj_Sotala
comment by ChristianKl · 2014-01-04T16:12:48.661Z · LW(p) · GW(p)

Proposing this as a viable alternative would require a very high amount of evidence.

I don't see where you got the idea that it's supposed to be an alternative. If I'm less clingy because I have a tulpa, and thus no fear of being alone, I have an easier time interacting with other people.

Proposing this as a viable alternative would require a very high amount of evidence.

There are much bigger claims on this site with much less evidence. Just look into discussions of uploading and AGI.

Nobody here advocates that it should be standard procedure to train every lonely person who seeks help to have a tulpa.

comment by Kaj_Sotala · 2014-01-04T08:50:44.491Z · LW(p) · GW(p)

I know a couple of people who feel like their tulpas reduce their feelings of loneliness. Not sure how you could get any stronger evidence than that at this stage, there not being any studies focusing specifically on tulpas. That said, I don't see any a priori reason why you couldn't get meaningful social interactions from tulpas, so I'm not sure why you'd require an exceptionally high standard of evidence in the first place.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-04T16:15:06.860Z · LW(p) · GW(p)

That said, I don't see any a priori reason why you couldn't get meaningful social interactions from tulpas,

Tulpas don't provide outside entropy.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-04T16:48:17.186Z · LW(p) · GW(p)

They don't provide it to the system as a whole, but providing it to the subprocess constituting the normal personality is another matter. Authors are often surprised by their characters, who may reveal unexpected personality traits as well as do things that the author would never have anticipated. (Sometimes this causes the authors major headaches, as it ruins the original story they'd planned out when the character decides to do something completely different.)

comment by Kaj_Sotala · 2014-01-04T10:05:11.704Z · LW(p) · GW(p)

Also, "having a tulpa" and "figuring out how to be with real people" are not mutually exclusive. Lonely people may often have extra difficulties establishing meaningful relationships (romantic or otherwise), because the loneliness makes them desperate, clingy, etc. which are all behaviors that other people find off-putting. People who already have some meaningful relationships are likely to have a much easier time in establishing more.

comment by ChristianKl · 2014-01-03T20:43:21.607Z · LW(p) · GW(p)

It depends on whether the company you seek values people who signal that they are contrarian, or whether people are expected to be "normal".

In general, the idea isn't that you tell everyone you meet that you have a tulpa, if you interact with the kind of people who would see it as a sign of mental illness.

Given that tulpas are in your mind, you don't have to tell anyone about them.

comment by ChristianKl · 2014-01-04T16:08:17.910Z · LW(p) · GW(p)

Why would you expend time and energy conjuring up a false persona and deluding yourself into believing it has autonomy

Autonomy is basically a question of free will. Given that various people have argued that humans don't have free will and therefore no autonomy, tulpas probably have no autonomy either.

If, however, you grant that humans do have some kind of autonomy in their actions, it's very interesting to see whether tulpas also have autonomy under the same definition.

This means you could learn something useful about the nature of autonomy.