Bayes for Schizophrenics: Reasoning in Delusional Disorders

post by Scott Alexander (Yvain) · 2012-08-13T19:22:21.097Z · LW · GW · Legacy · 155 comments

Related to: The Apologist and the Revolutionary, Dreams with Damaged Priors

Several years ago, I posted about V.S. Ramachandran's 1996 theory explaining anosognosia through an "apologist" and a "revolutionary".

Anosognosia, a condition in which extremely sick patients mysteriously deny their sickness, occurs during right-sided brain injury but not left-sided brain injury. It can be extraordinarily strange: for example, in one case, a woman whose left arm was paralyzed insisted she could move her left arm just fine, and when her doctor pointed out her immobile arm, she claimed that was her daughter's arm even though it was obviously attached to her own shoulder. Anosognosia can be temporarily alleviated by squirting cold water into the patient's left ear canal, after which the patient suddenly realizes her condition but later loses awareness again and reverts back to the bizarre excuses and confabulations.

Ramachandran suggested that the left brain is an "apologist", trying to justify existing theories, and the right brain is a "revolutionary" which changes existing theories when conditions warrant. If the right brain is damaged, patients are unable to change their beliefs; so when a patient's arm works fine until a right-brain stroke, the patient cannot discard the hypothesis that their arm is functional, and can only use the left brain to try to fit the facts to their belief.

In the sixteen years since Ramachandran's theory was published, new research has kept some of the general outline while changing many of the specifics in the hopes of explaining a wider range of delusions in neurological and psychiatric patients. The newer model acknowledges the left-brain/right-brain divide, but adds some new twists based on the Mind Projection Fallacy and the brain as a Bayesian reasoner.


INTRODUCTION TO DELUSIONS

Strange as anosognosia is, it's only one of several types of delusions, which are broadly categorized into polythematic and monothematic. Patients with polythematic delusions have multiple unconnected odd ideas: for example, the famous schizophrenic game theorist John Nash believed that he was defending the Earth from alien attack, that he was the Emperor of Antarctica, and that he was the left foot of God. A patient with a monothematic delusion, on the other hand, usually only has one odd idea. Monothematic delusions vary less than polythematic ones: there are a few that are relatively common across multiple patients. For example:

In the Capgras delusion, the patient, usually a victim of brain injury but sometimes a schizophrenic, believes that one or more people close to her has been replaced by an identical imposter. For example, one male patient expressed the worry that his wife was actually someone else, who had somehow contrived to exactly copy his wife's appearance and mannerisms. This delusion sounds harmlessly hilarious, but it can get very ugly: in at least one case, a patient got so upset with the deceit that he murdered the hypothesized imposter - actually his wife.

The Fregoli delusion is the opposite: here the patient thinks that random strangers she meets are actually her friends and family members in disguise. Sometimes everyone may be the same person, who must be as masterful at quickly changing costumes as the famous Italian actor Fregoli (inspiring the condition's name).

In the Cotard delusion, the patient believes she is dead. Cotard patients will neglect personal hygiene, social relationships, and planning for the future - as the dead have no need to worry about such things. Occasionally they will be able to describe in detail the "decomposition" they believe they are undergoing.

Patients with all these types of delusions[1] - as well as anosognosiacs - share a common feature: they usually have damage to the right frontal lobe of the brain (including in schizophrenia, where the brain damage is of unknown origin and usually generalized, but where it is still possible to analyze which areas are the most abnormal). It would be nice if a theory of anosognosia also offered us a place to start explaining these other conditions, but this is something Ramachandran's idea fails to do. He posits a problem with belief shift: going from the originally correct but now obsolete "my arm is healthy" to the updated "my arm is paralyzed". But these other delusions cannot be explained by simple failure to update: delusions like "the person who appears to be my wife is an identical imposter" never made sense. We will have to look harder.

ABNORMAL PERCEPTION: THE FIRST FACTOR

Coltheart, Langdon, and McKay posit what they call the "two-factor theory" of delusion. In the two-factor theory, one problem causes an abnormal perception, and a second problem causes the brain to come up with a bizarre instead of a reasonable explanation.

Abnormal perception has been best studied in the Capgras delusion. A series of experiments, including some by Ramachandran himself, demonstrate that Capgras patients lack a skin conductance response (usually used as a proxy for emotional reaction) to familiar faces. This meshes nicely with the brain damage pattern in Capgras, which seems to involve the connection between the face recognition areas in the temporal lobe and the emotional areas in the limbic system. So although the patient can recognize faces, and can feel emotions, the patient cannot feel emotions related to recognizing faces.

The older "one-factor" theories of delusion stopped here. The patient, they said, knows that his wife looks like his wife, but he doesn't feel any emotional reaction to her. If it was really his wife, he would feel something - love, irritation, whatever - but he feels only the same blankness that would accompany seeing a stranger. Therefore (the one-factor theory says) his brain gropes for an explanation and decides that she really is a stranger. Why does this stranger look like his wife? Well, she must be wearing a very good disguise.

One-factor theories also do a pretty good job of explaining many of the remaining monothematic delusions. A 1998 experiment shows that Cotard delusion sufferers have a globally decreased autonomic response: that is, nothing really makes them feel much of anything - a state consistent with being dead. And anosognosiacs have lost not only the nerve connections that would allow them to move their limbs, but the nerve connections that would send distress signals and even the connections that would send back "error messages" if the limb failed to move correctly - so the brain gets data that everything is fine.

The basic principle behind the first factor is "Assume that reality is such that my mental states are justified", a sort of Super Mind Projection Fallacy.

Although I have yet to find an official paper that says so, I think this same principle also explains many of the more typical schizophrenic delusions, of which two of the most common are delusions of grandeur and delusions of persecution. Delusions of grandeur are the belief that one is extremely important. In pop culture, they are typified by the psychiatric patient who believes he is Jesus or Napoleon - I've never met any Napoleons, but I know several Jesuses and recently worked with a man who thought he was Jesus and John Lennon at the same time. Here the first factor is probably an elevated mood (working through a miscalibrated sociometer). "Wow, I feel like I'm really awesome. In what case would I be justified in thinking so highly of myself? Only if I were Jesus and John Lennon at the same time!" A similar mechanism explains delusions of persecution, the classic "the CIA is after me" form of the disease. We apply the Super Mind Projection Fallacy to a garden-variety anxiety disorder: "In what case would I be justified in feeling this anxious? Only if people were constantly watching me and plotting to kill me. Who could do that? The CIA."

But despite the explanatory power of the Super Mind Projection Fallacy, the one-factor model isn't enough.

ABNORMAL BELIEF EVALUATION: THE SECOND FACTOR

The one-factor model requires people to be really stupid. Many Capgras patients were normal intelligent people before their injuries. Surely they wouldn't leap straight from "I don't feel affection when I see my wife's face" to "And therefore this is a stranger who has managed to look exactly like my wife, sounds exactly like my wife, owns my wife's clothes and wedding ring and so on, and knows enough of my wife's secrets to answer any question I put to her exactly like my wife would." The lack of affection vaguely supports the stranger hypothesis, but the prior for the stranger hypothesis is so low that it should never even enter consideration (remember this phrasing: it will become important later.) Likewise, we've all felt really awesome at one point or another, but it's never occurred to most of us that maybe we are simultaneously Jesus and John Lennon.

Further, most psychiatric patients with the deficits involved don't develop delusions. People with damage to the ventromedial area suffer the same disconnection between face recognition and emotional processing as Capgras patients, but they don't draw any unreasonable conclusions from it. Most people who get paralyzed don't come down with anosognosia, and most people with mania or anxiety don't think they're Jesus or persecuted by the CIA. What's the difference between these people and the delusional patients?

The difference is the right dorsolateral prefrontal cortex, an area of the brain strongly associated with delusions. If the brain damage that broke your emotional reactions to faces (or paralyzed you, or whatever) spared the RDPC, you are unlikely to develop delusions. If it also damaged this area, you are correspondingly more likely to come up with a weird explanation.

In his first papers on the subject, Coltheart vaguely refers to the RDPC as a "belief evaluation" center. Later, he gets more specific and talks about its role in Bayesian updating. In his chronology, a person damages the connection between face recognition and emotion, and "rationally" concludes the Capgras hypothesis. In his model, even if there's only a 1% prior of your spouse being an imposter, if there's a 1000 times greater likelihood of you not feeling anything toward an imposter than toward your real spouse, you can "rationally" come to believe in the delusion. In normal people, this rational belief then gets worn away by updating based on evidence: the imposter seems to know your spouse's personal details, her secrets, her email passwords. In most patients, this is sufficient to make them update back to the idea that it is really their spouse. In Capgras patients, the damage to the RDPC prevents updating on "exogenous evidence" (for some reason, the endogenous evidence of the lack of emotion itself still gets through) and so they maintain their delusion.
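
To make Coltheart's arithmetic concrete, here is a minimal sketch (in Python; not from any of the papers) of the odds form of Bayes' theorem, using his illustrative numbers from the paragraph above: a 1% prior and a 1000:1 likelihood ratio.

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability via the odds form of Bayes' theorem:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Coltheart's illustrative numbers: a 1% prior that the spouse is an imposter,
# and "no emotional response" judged 1000x more likely under the imposter
# hypothesis than under the real-spouse hypothesis.
print(posterior(0.01, 1000))  # ~0.91 -- the delusion now looks "rationally" adopted
```

On these numbers the imposter hypothesis jumps from 1% to roughly 91%, which is why Coltheart can describe the initial adoption of the delusion as "rational".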

This theory has some trouble explaining why patients are still able to update about other situations, but Coltheart speculates that maybe the belief evaluation system is weakened but not totally broken, and can deal with anything except the ceaseless stream of contradictory endogenous information.

EXPLANATORY ADEQUACY BIAS

McKay makes an excellent critique of several questionable assumptions of this theory.

First, is the Capgras hypothesis ever plausible? Coltheart et al. pretend that the prior is 1/100, but this implies that there is a base rate of your spouse being an imposter one out of every hundred times you see her (or perhaps one out of every hundred people has a fake spouse), either of which is preposterous. No reasonable person could entertain the Capgras hypothesis even for a second, let alone for long enough that it becomes their working hypothesis and develops immunity to further updating from the broken RDPC.
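
McKay's complaint can be restated with the same sketch: swap in anything like a realistic prior and the posterior stays negligible even with a 1000:1 likelihood ratio. The one-in-a-billion figure below is my own placeholder for "preposterous", not a number from the paper.

```python
# Reuses posterior() from the earlier sketch. 1e-9 is a hypothetical stand-in for
# a prior so low the hypothesis should never enter consideration; only its order
# of magnitude matters.
print(posterior(1e-9, 1000))  # ~1e-6 -- still nowhere near worth considering
print(posterior(0.01, 1000))  # ~0.91 -- only Coltheart's generous 1/100 prior makes it fly
```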

Second, there's no evidence that the ventromedial patients - the ones who lose face-related emotions but don't develop the Capgras delusion - once had the Capgras delusion but then successfully updated their way out of it. They just never develop the delusion to begin with.

McKay keeps the Bayesian model, but for him the second factor is not a deficit in updating in general, but a deficit in the use of priors. He lists two important criteria for reasonable belief: "explanatory adequacy" (what standard Bayesians call the likelihood ratio; the new data must be more likely if the new belief is true than if it is false) and "doxastic conservatism" (what standard Bayesians call the prior; the new belief must be reasonably likely to begin with given everything else the patient knows about the world).

Delusional patients with damage to their RDPC lose their ability to work with priors and so abandon all doxastic conservatism, essentially falling into what we might term the Super Base Rate Fallacy. For them the only important criterion for a belief is explanatory adequacy. So when they notice their spouse's face no longer elicits any emotion, they decide that their spouse is not really their spouse at all. This does a great job of explaining the observed data - maybe the best job it's possible for an explanation to do. Its only minor problem is that it has a stupendously low prior, and this doesn't matter because they are no longer able to take priors into account.

This also explains why the delusional belief is impervious to new evidence. Suppose the patient's spouse tells personal details of their honeymoon that no one else could possibly know. There are several possible explanations: the patient's spouse really is the patient's spouse, or (says the left-brain Apologist) the patient's spouse is an alien who was able to telepathically extract the relevant details from the patient's mind. The telepathic alien imposter hypothesis has great explanatory adequacy: it explains why the person looks like the spouse (the alien is a very good imposter), why the spouse produces no emotional response (it's not the spouse at all) and why the spouse knows the details of the honeymoon (the alien is telepathic). The "it's really your spouse" explanation only explains the first and the third observations. Of course, we as sane people know that the telepathic alien hypothesis has a very low base rate plausibility because of its high complexity and violation of Occam's Razor, but these are exactly the factors that the RDPC-damaged[2] patient can't take into account. Therefore, the seemingly convincing new evidence of the spouse's apparent memories only suffices to help the delusional patient infer that the imposter is telepathic.
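
One way to caricature the Super Base Rate Fallacy in code: score hypotheses by explanatory adequacy (the likelihood) alone, with priors dropped from the comparison. The particular numbers below are invented purely for illustration; only their ordering matters.

```python
# Observations to explain: looks like my wife, produces no emotional response,
# knows the details of our honeymoon.
hypotheses = {
    "really my spouse": {"prior": 0.999999, "likelihood": 0.001},      # can't explain the missing feeling
    "telepathic alien imposter": {"prior": 1e-12, "likelihood": 0.9},  # explains everything
}

def sane_score(h):
    # Normal reasoning: weigh explanatory adequacy by the prior (unnormalized posterior).
    return h["prior"] * h["likelihood"]

def delusional_score(h):
    # Super Base Rate Fallacy: explanatory adequacy alone, priors ignored.
    return h["likelihood"]

print(max(hypotheses, key=lambda k: sane_score(hypotheses[k])))        # -> really my spouse
print(max(hypotheses, key=lambda k: delusional_score(hypotheses[k])))  # -> telepathic alien imposter
```

The delusional scorer will prefer the more elaborate hypothesis no matter how much evidence is piled on, because every new observation can be absorbed by adding another epicycle that raises the likelihood while the (ignored) prior plummets.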

The Super Base Rate Fallacy can explain the other delusional states as well. I recently met a patient who was, indeed, convinced the CIA were after her; of note she also had extreme anxiety to the point where her arms were constantly shaking and she was hiding under the covers of her bed. CIA pursuit is probably the best possible reason to be anxious; the only reason we don't use it more often is how few people are really pursued by the CIA (well, as far as we know). My mentor warned me not to try to argue with the patient or convince her that the CIA wasn't really after her, as (she said from long experience) it would just make her think I was in on the conspiracy. This makes sense. "The CIA is after you and your doctor is in on it" explains both anxiety and the doctor's denial of the CIA very well; "The CIA is not after you" explains only the doctor's denial of the CIA. For anyone with a pathological inability to handle Occam's Razor, the best solution to a challenge to your hypothesis is always to make your hypothesis more elaborate.

OPEN QUESTIONS


Although I think McKay's model is a serious improvement over its predecessors, there are a few loose ends that continue to bother me.

"You have brain damage" is also a theory with perfect explanatory adequacy. If one were to explain the Capgras delusion to Capgras patients, it would provide just as good an explanation for their odd reactions as the imposter hypothesis. Although the patient might not be able to appreciate its decreased complexity, they should at least remain indifferent between the two hypotheses. I've never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I'm going to assume it doesn't work. Why not?

Likewise, how come delusions are so specific? It's impossible to convince someone who thinks he is Napoleon that he's really just a random non-famous mental patient, but it's also impossible to convince him he's Alexander the Great (at least I think so; I don't know if it's ever been tried). But him being Alexander the Great is also consistent with his observed data and his deranged inference abilities. Why decide it's the CIA who's after you, and not the KGB or Bavarian Illuminati?

Why is the failure so often limited to failed inference from mental states? That is, if a Capgras patient sees it is raining outside, the same process of base rate avoidance that made her fall for the Capgras delusion ought to make her think she's been transported to the rainforest or something. This happens in polythematic delusion patients, where anything at all can generate a new delusion, but not in those with monothematic delusions like Capgras. There must be some fundamental difference between how one draws inferences from mental states versus everything else.

This work also raises the question of whether one can consciously use System II Bayesian reasoning to argue oneself out of a delusion. It seems improbable, but I recently heard about an n=1 personal experiment of a rationalist with schizophrenia who successfully used Bayes to convince themselves that a delusion (or possibly hallucination; the story was unclear) was false. I don't have their permission to post their story here, but I hope they'll appear in the comments.

FOOTNOTES


1: I left out discussion of the Alien Hand Syndrome, even though it was in my sources, because I believe it's more complicated than a simple delusion. There's some evidence that the alien hand actually does move independently; for example it will sometimes attempt to thwart tasks that the patient performs voluntarily with their good hand. Some sort of "split brain" issues seem like a better explanation than simple Mind Projection.

2: The right dorsolateral prefrontal cortex also shows up in dream research, where it tends to be one of the parts of the brain shut down during dreaming. This provides a reasonable explanation of why we don't notice our dreams' implausibility while we're dreaming them - and Eliezer specifically mentions he can't use priors correctly in his dreams. It also highlights some interesting parallels between dreams and the monothematic delusions. For example, the typical "And then I saw my mother, but she was also somehow my fourth grade teacher at the same time" effect seems sort of like Capgras and Fregoli. Even more interestingly, the RDPC gets switched on during lucid dreaming, providing an explanation of why lucid dreamers are able to reason normally in dreams. Because lucid dreaming also involves a sudden "switching on" of "awareness", this makes the RDPC a good target area for consciousness research.

155 comments

Comments sorted by top scores.

comment by Kawoomba · 2012-08-13T06:47:42.854Z · LW(p) · GW(p)

Reminded me of The Three Christs of Ypsilanti:

To study the basis for delusional belief systems, [psychologist] Rokeach brought together three men who each claimed to be Jesus Christ and confronted them with one another's conflicting claims, while encouraging them to interact personally as a support group. Rokeach also attempted to manipulate other aspects of their delusions by inventing messages from imaginary characters. He did not, as he had hoped, provoke any lessening of the patients' delusions, but did document a number of changes in their beliefs.

While initially the three patients quarreled over who was holier and reached the point of physical altercation, they eventually each explained away the other two as being mental patients in a hospital, or dead and being operated by machines.

comment by MBlume · 2012-08-13T06:02:38.537Z · LW(p) · GW(p)

Where are we on selectively/temporarily/safely de-activating brain regions? Magnetic field to the RDPC sounds like it'd be ~~fantastically fun at parties~~ extremely informative under the right circumstances.

Replies from: Luke_A_Somers, Cosmos, scav, handoflixue, Eugine_Nier
comment by Luke_A_Somers · 2012-08-13T18:26:58.463Z · LW(p) · GW(p)

Note to self: Do not attend any party organized by MBlume without making sure that all participants have signed an iron-clad NDA in advance.

Replies from: Kawoomba, magfrump
comment by Kawoomba · 2012-08-13T19:01:16.046Z · LW(p) · GW(p)

Don't worry, what happens in la la land stays in la la land.

comment by magfrump · 2012-08-14T21:30:36.273Z · LW(p) · GW(p)

Note to self: Always sign NDAs associated to parties thrown by MBlume.

comment by Cosmos · 2012-08-14T21:41:51.203Z · LW(p) · GW(p)

I had the exact same thought myself back in 2008, so I asked an experimental psych professor about this. At the time, he said that the TMS devices we had were somewhat wide-area and also induced considerable muscle activation. This doesn't matter very much when studying the occipital lobe, but for the prefrontal cortex you basically start scrunching up the person's face, which is fairly distracting. Maybe worth trying anyway.

I've wanted to get my hands on a TMS device for years. Building one at home does not seem particularly feasible, and the magnetism involved is probably dangerous for nearby metal/electronics...

Replies from: jimmy, MaoShan, Yvain
comment by jimmy · 2012-08-15T23:13:45.728Z · LW(p) · GW(p)

Building one at home does not seem particularly feasible, and the magnetism involved is probably dangerous for nearby metal/electronics...

A few minutes on Google makes this seem very unlikely.

I'm scared as hell to induce currents in my brain without knowing the neurobiology of it, but I do understand the electrical engineering half, so if you want an electromagnet and driver, I'll help you build one.

comment by MaoShan · 2012-08-17T03:55:20.060Z · LW(p) · GW(p)

I had a very similar thought while reading this post. I have the Shakti system, maybe this weekend I'll target my RDPC with various frequencies and see what happens.

Replies from: MaoShan
comment by MaoShan · 2012-08-24T02:23:39.552Z · LW(p) · GW(p)

Follow-up: I didn't experience anything outside of the typical Shakti effects for me (a feeling similar to a strong nicotine buzz); however, there are many variables to tweak before I declare it a wash. I'll continue to experiment and post the final results somewhere.

Replies from: Document
comment by Document · 2016-09-14T16:19:43.160Z · LW(p) · GW(p)

Why not here?

comment by Scott Alexander (Yvain) · 2012-08-15T02:35:49.510Z · LW(p) · GW(p)

I don't know the technical differences between TMS and TDCS, but http://flowstateengaged.com/ looks promising.

Replies from: Cosmos
comment by Cosmos · 2012-08-15T16:08:06.453Z · LW(p) · GW(p)

TDCS isn't depolarizing neurons with magnetism, and it doesn't disable brain regions at all. Instead it is running a direct current across them. This appears to permanently increase or decrease their level of excitability. o_O

comment by scav · 2012-08-15T09:13:45.966Z · LW(p) · GW(p)

I think safely de-activating that part of your brain while you are still awake and able to act on your beliefs is a contradiction in terms. I'd want an experienced psychiatric nurse present, personally. And a million quid.

comment by handoflixue · 2012-08-22T01:45:39.197Z · LW(p) · GW(p)

"Magnetic field to the RDPC sounds like it'd be..."

... fairly similar to high doses of psychedelics...?

comment by Eugine_Nier · 2012-08-14T22:44:22.228Z · LW(p) · GW(p)

Personally, I'm wondering how to use these as brainwashing devices. And then use my brainwashed slaves to TAKE OVER THE WORLD. BWAHAHAHAHAHA.

Sorry, got carried away there for a second. In any case, do you know where I can get my hands on one of these things?

comment by Risto_Saarelma · 2012-08-13T07:33:11.705Z · LW(p) · GW(p)

Would a neurologist who has thus far been immersed daily with the fact that all brains can fail in all sorts of interesting ways be hit just as bad with these delusions if given brain damage as someone who might have operated all their life under a sort of naive realism that makes no difference between reality and their brain's picture of it? What about a philosopher with no neurological experience but with a well-seated obsession with the map not being the territory?

Replies from: someonewrongonthenet, Richard_Kennaway
comment by someonewrongonthenet · 2012-08-14T03:11:52.624Z · LW(p) · GW(p)

Had to make an account to answer this one, since I can give unique insight.

I'm an atypical case in that I had the Capgras Delusion (along with Reduplicative Paramnesia) in childhood, rather than as an adult. The delusions started sometime around 6-9 years of age. I hid it from others, partly because I halfway knew it was ridiculous, partly because I didn't want to let out that I was on to them...and it caused me quite a bit of anxiety, because I felt like I lost my loved ones and slipped into parallel universes every few days. I would try to keep my eyes on my loved ones, because as soon as I looked away and looked back the feeling that something was different would return.

Sometime around 12-14, I realized how implausible it was for any kind of impostor to conduct such a large-scale conspiracy, and how implausible it was that I was slipping into a parallel universe. I told my parents what I was experiencing and admitted it was irrational. I forced myself to ignore the feeling every time it came (though it still bothered me). Eventually around 17 the feeling stopped bothering me altogether, although little twinges still occurred from time to time.

I'm currently in what I would consider to be above average mental health, and many years later I learned the name of the delusions that had plagued me as a child. Prior to identifying them as monothematic delusions, I had thought that imposters and parallel universes might simply be a gifted child's equivalent of monsters under the bed. My parents thought it was from reading/watching too much fiction. I never suspected a neurological disorder until years later.

I'm not sure if I was able to see past the delusion because I'm an atypical case (no known brain injury), because I was a child, because my brain healed via biological mechanism, or because I'm intelligent...but I can tell you that my memory of the event involves me figuring out that the delusion was improbable and consciously working to bring it to an end.

So unless my memories are false (it was a long time ago) or I am engaging in mis-attribution, the answer to your question is that yes, in some cases it would be possible for someone to use rational thinking to overcome this kind of disorder.

Replies from: Risto_Saarelma, smk
comment by Risto_Saarelma · 2012-08-14T04:52:58.169Z · LW(p) · GW(p)

This is yet again a different scenario, but very interesting, thanks! It does occur to me now that there might be adult trauma patients who can see through the delusion, and never get diagnosed with it, since they don't start raving about impostor family members but just go, whoa, brain seems messed up, better go see the stroke doctor.

Replies from: Vulture
comment by Vulture · 2014-01-25T21:40:52.076Z · LW(p) · GW(p)

This raises the obvious question: Could training in Bayesian reasoning effectively increase the insight of delusional patients?

comment by smk · 2012-08-20T14:39:50.387Z · LW(p) · GW(p)

Some strangely common childhood beliefs:
Everyone except you is a robot
Your life is like the Truman Show

Replies from: RomanDavis, Rickasaurus, someonewrongonthenet, Richard_Kennaway
comment by RomanDavis · 2012-08-20T16:03:46.127Z · LW(p) · GW(p)

I occasionally entertained ideas like that in the back of my mind. Truman Show, teachers are aliens, parents somehow know everything/everything about me and are just fucking with me in the way that Zeus would to test character, except over a much longer Santa Claus/Jesus-esque period of time, the mothman is watching me, there are invisible monsters/demons all around me and I need to be very sneaky not to be seen.

I'm not sure I believed them, exactly. Maybe I did. Maybe I didn't. I still do the same stuff sometimes, with equally weird things. Whenever I start halfway believing in god, or that a track of thought in my brain giving arbitrary commands is the voice of God, I just start doing experiments against the rest of reality till the shadow of belief goes away, since they never line up with testable reality. I've never had actual hallucinations, though, as far as I know.

comment by Rickasaurus · 2012-08-21T04:05:05.959Z · LW(p) · GW(p)

For me it was that I suspected I was the robot. Never told anyone though.

comment by someonewrongonthenet · 2012-08-20T22:38:46.771Z · LW(p) · GW(p)

Hehe...to be honest I half-believed those too...not that everyone was a robot, but that everyone was a philosophical zombie. It wasn't until high school that I figured out that for all intents and purposes, I'm a philosophical zombie too.

But in my opinion, those really ARE normal childhood beliefs that are not the result of any neuropathology... beliefs that many philosophers still entertain in the form of solipsism.

comment by Richard_Kennaway · 2012-08-20T17:26:50.465Z · LW(p) · GW(p)

Some strangely common childhood beliefs:
Everyone except you is a robot

Do those turn into these when they grow up?

comment by Richard_Kennaway · 2012-08-13T12:13:48.111Z · LW(p) · GW(p)

Jill Bolte Taylor has provided a case study. She is a neuroanatomist who had a stroke. Her experience is recounted in her TED talk and her book.

Replies from: MaoShan, Risto_Saarelma
comment by MaoShan · 2012-08-17T03:51:18.059Z · LW(p) · GW(p)

I have read the book (I recently received it from an elderly friend who hoarded books--I picked through about $20,000 worth of books and chose several hundred dollars worth), and it started off interesting, to hear of her personal experience of the stroke and its accompanying mind-states. She seems to have fought her way through various delusions, but not with any more success than other examples cited here. Yes, she is/was a neuroscientist. She also proudly proclaims that she tells her bowels "Good job! I am so thankful that you do exactly what you are meant to do!" every time she takes a dump, and concluded the book with some painfully New Age-y exhortations which gave me the same urge to roll around frothing at the mouth that I often experienced with clearly delusional Christian preachers in church.

comment by Risto_Saarelma · 2012-08-13T16:22:30.100Z · LW(p) · GW(p)

The Amazon page for the book doesn't describe her getting any of the sort of very specific delusions described in the OP though, just general debilitation and paradoxical feelings of euphoria.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-08-13T17:08:34.865Z · LW(p) · GW(p)

It's the closest we're likely to get, though, given the rarity of both neurologists and anosognosias.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2012-08-13T18:35:37.343Z · LW(p) · GW(p)

Well, neurologists are rare, but I think we do know how to create targeted brain lesions that can cause pretty specific symptoms.

Replies from: faul_sname
comment by faul_sname · 2012-08-13T20:09:10.541Z · LW(p) · GW(p)

Any volunteers?

Replies from: None
comment by [deleted] · 2012-08-14T17:23:22.798Z · LW(p) · GW(p)

I might. Anybody got $20,000,000?

Replies from: faul_sname
comment by faul_sname · 2012-08-14T23:45:34.994Z · LW(p) · GW(p)

Well, if we're going there I'll do it for $10M.

comment by TruePath · 2012-08-14T21:55:21.836Z · LW(p) · GW(p)

All of the theories presented in this post seem to make the implausible assumption that somehow the brain acts like a hypothetical ideally rational individual and that impairment somehow breaks some aspect of this rationality.

However, there is a great deal of evidence the brain works nothing like this. In contrast, it has many specific modules that are responsible for certain kinds of thought or behavior. These modules are not weighed by some rational actor that sifts through them, they are the brain. When these modules come into conflict, e.g., in the standard Stroop color/word test where the word "yellow" is printed in red ink, fairly simple conflict resolution methods are brought into play. When things go wrong in the brain, either an impairment in conflict resolution mechanisms or in the underlying modules themselves, things will go wonky in specific (not general) ways.

Speaking from personal experience, being in a psychotic/paranoid state simply makes certain things seem super salient to you. You can be quite well aware of the rational arguments against the conclusion you are worrying about but it's just so salient that it 'wins.' In other words it also feels like there is just a failure in your ability to override certain misbehaving brain processes rather than some general inability to update appropriately. This is further supported by the fact that schizophrenics and others with delusions seem to be able to update largely appropriately in certain aspects, e.g., what answer is expected on a test, while still maintaining their delusional state.

Replies from: prase
comment by prase · 2012-08-16T20:06:51.217Z · LW(p) · GW(p)

This is generally a good comment, but I think the views of the original post and your comment are actually pretty similar. For example, seeing the brain as a rational Bayesian agent is compatible with the modular view. One module might store beliefs, another might be responsible for forming new candidate beliefs on the basis of sensory input, another module may enforce consistency and weaken beliefs which don't fit in... The "rational actor that sifts through [the modules]" could easily be embodied by one or several of the modules themselves. Whether this is a good model is a more complicated question (it certainly isn't perfect since we know people diverge from the Bayesian ideal quite regularly), but it is not implausible.

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2012-08-19T04:44:45.267Z · LW(p) · GW(p)

However, even if there are modules that try to form accurate beliefs about some things or even most things (and there probably are), it's still true that taken in aggregate, your various brain modules push you to have beliefs that would be locally optimal in the evolutionary ancestral environment, not necessarily true. Many modules in our brain push us toward believing things that would be praised, avoiding things that would be condemned or ridiculed, etc.

It's too costly to be a perfect deceiver, so evolution hacked together a system where if it's consistently beneficial to your fitness for others to believe you believe X, most of the time you just go ahead and believe X.

In large realms of thought, especially far mode beliefs, political beliefs, and beliefs about the self, the net result of all your modules working together is that you're pushed toward status and social advantage, not truth. Maybe there aren't even any truth-seeking modules with respect to these classes of belief. Maybe we call it delusion when your near-mode, concrete anticipations start behaving like your far-mode, political beliefs.

comment by [deleted] · 2012-08-15T09:05:01.897Z · LW(p) · GW(p)

It is embarrassing to admit but I used to think I really had dog ears and a tail until I was about 16.

Well, at least older students found it completely adorable when I made noises...and the school authorities thought I was like smart or something and didn't really care either.

I don't really know the cause; I don't remember knowing about kemonomimi until a bit later, but I had delusions not only of seeing these body parts on myself but also of feeling them. I thought I broke my tail once, for example.

comment by David_Gerard · 2012-08-13T08:54:30.990Z · LW(p) · GW(p)

I've never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I'm going to assume it doesn't work. Why not?

Off the top of my head, that people believe what their brain tells them above any outside evidence, cf. religious conversion originating from what, to the outside view, is clearly a personal delusion - but, from the inside view, is incontrovertible evidence of God.

It takes very good and clear thinking for the lens to actually see its flaws even when you don't have brain damage to the bit that evaluates evidence. I'm somewhat surprised a rationalist with schizophrenia actually managed this. Though TheOtherDave has mentioned being able to work out that a weird perception was almost certainly due to the stroke he was recovering from, and Eliezer mentions someone else managing it as well.

Replies from: CronoDAS, None
comment by CronoDAS · 2012-08-14T02:19:24.288Z · LW(p) · GW(p)

John Nash claimed that he recovered from schizophrenia because "he decided to think rationally" - but this only happened after he took medications, so...

comment by [deleted] · 2012-08-14T17:26:15.278Z · LW(p) · GW(p)

religious conversion from what to the outside view is clearly a personal delusion but from the inside view is incontrovertible evidence of God

For what it's worth, in order to understand the syntax of this phrase, I had to start over about five times.

Replies from: David_Gerard
comment by David_Gerard · 2012-08-14T20:17:27.009Z · LW(p) · GW(p)

Commas added!

comment by gwern · 2012-08-13T01:56:47.235Z · LW(p) · GW(p)

This provides a reasonable explanation of why we don't notice our dreams' implausibility while we're dreaming them - and Eliezer specifically mentions he can't use priors correctly in his dreams.

Have I ever mentioned my theory that it may be partially due to overloaded working memory?

comment by [deleted] · 2012-08-13T01:44:57.704Z · LW(p) · GW(p)

"You have brain damage" is also a theory with perfect explanatory adequacy. If one were to explain the Capgras delusion to Capgras patients, it would provide just as good an explanation for their odd reactions as the imposter hypothesis. Although the patient might not be able to appreciate its decreased complexity, they should at least remain indifferent between the two hypotheses. I've never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I'm going to assume it doesn't work. Why not?

Maybe it's really hard to really get that you are a brain on an intuitive level. Human intuitions seem to be pretty dualistic (well, at least mine do). So 'you have brain damage' doesn't sound very explanatory unless you've spent lot of time convincing yourself that it should.

By the way, the last link is broken.

comment by SilasBarta · 2012-08-14T22:31:33.223Z · LW(p) · GW(p)

For example, one male patient expressed the worry that his wife was actually someone else, who had somehow contrived to exactly copy his wife's appearance and mannerisms. This delusion sounds harmlessly hilarious ...

It's harmless to claim that someone is observationally equivalent to his wife, but not his wife? When that kind of thing happens on a large scale, it's called "the debate about p-zombies".

Replies from: duckduckMOO, RomanDavis
comment by duckduckMOO · 2012-08-15T13:20:49.612Z · LW(p) · GW(p)

Isn't claimed actual equivalence the problem with P-zombies? Someone being observationally equivalent but different is merely extremely unlikely (maybe she has an identical twin, maybe aliens, etc.). P-zombies are supposed to be indistinguishable in principle, which is impossible/requires souls that aren't subject to testing for distinguishability.

comment by RomanDavis · 2012-08-23T10:08:56.604Z · LW(p) · GW(p)

I don't think P-zombie debates are a great sign of rationality either, but I think the debate itself probably does nearly zero harm, if you don't count wasted time.

Replies from: SilasBarta
comment by SilasBarta · 2012-08-24T00:04:43.998Z · LW(p) · GW(p)

"If you don't count wasted time"? Okay, but likewise, if you don't count her husband getting shot, Mrs. Lincoln really enjoyed the play...

Replies from: shokwave
comment by shokwave · 2012-08-24T00:12:56.444Z · LW(p) · GW(p)

That's not likewise.

Replies from: SilasBarta
comment by SilasBarta · 2012-08-24T00:39:04.550Z · LW(p) · GW(p)

How so? A bunch of philosophers blowing valuable time on a worthless debate is a major harm, almost as if they were forcibly held in unemployment but drew the same resources from society.

comment by handoflixue · 2012-08-22T01:43:28.511Z · LW(p) · GW(p)

For what it's worth, the "Super Base Rate Fallacy" seems to line up with my own experiences, except that there's sometimes an independent part of my mind that can go "Okay, I have 99.999% confidence that the floor will eat us. But what's the actual odds of that confidence, and what evidence did I use to reach it?". While I can't just dismiss the absurd confidence value as absurd, I can still (sometimes) do a meta-evaluation about the precise confidence.

It's sort of like how if a friend says that global warming is 99.99% likely to be true, I can't simply rewrite my friend to have 50% confidence. But I can question the evidence and see how he reached his conclusion, and if it's just "oh, I read a newspaper article that said it was real", my actual confidence will be vastly lower.

I only recently figured out this trick (and suspect LessWrong probably helped me develop it), so I couldn't say why it sometimes works and sometimes doesn't. I can say it's much harder to ignore paranoia about people, and much easier to ignore anything that would be easily objectively checked ("the floor will eat me", step on to floor, "the floor failed to eat me. Falsified!")

comment by Vaniver · 2012-08-13T18:23:33.363Z · LW(p) · GW(p)

It seems improbable, but I recently heard about an n=1 personal experiment of a rationalist with schizophrenia who successfully used Bayes to convince themselves that a delusion (or possibly hallucination; the story was unclear) was false. I don't have their permission to post their story here, but I hope they'll appear in the comments.

I was under the impression that learning to recognize hallucinations was a standard component of schizophrenia therapy.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2012-08-14T01:41:34.166Z · LW(p) · GW(p)

Therapists can very very carefully try to talk patients out of their delusions, but I've always heard of it as a complicated long-term process and I've never before heard of Bayes being used directly.

Replies from: metaphysicist
comment by metaphysicist · 2012-08-19T22:40:54.924Z · LW(p) · GW(p)

You seem to be conflating the original schizophrenic state with the residual after the patients get antipsychotic medication: the latter may be readily amenable to reason; with the former, the therapist would breach rapport with the patient by challenging full delusions.

Medication is part of the standard treatment for schizophrenia--usually, the major part. Drawing conclusions about delusions from the residuals following treatment seems to shield you from what would be obvious had you observed unmedicated patients. Delusions aren't failures of Bayesian rationality: they involve, typically, accepting a few self-evident priors, and these are driven by intense affect.

comment by Cosmos · 2012-08-14T21:58:24.507Z · LW(p) · GW(p)

Yvain, it seems like some of this is potentially answered by how this interacts with other cognitive biases present.

Re: specific delusions, when you have an entire class of equally-explanatory hypotheses, how do you choose between them? The availability heuristic! These hypotheses do have to come from somewhere inside the neural network after all. You could argue that availability is a form of "priors", but these "priors" are formed on the level of neurons themselves and not a specific brain region: some connection strengths are stronger than others.

I would not wish brain damage on anyone, but should one of us have that unfortunate circumstance befall us I would be extremely inclined to go talk to them. I am so damn curious what this feels like from the inside! I am somewhat embarrassed to admit that the thought of having to build completely new neural connections to get around existing damage sounds like an insanely interesting challenge...

I also wonder about reasoning our way out of delusional states. The closest parallel that most people have access to would be the use of various psychoactives. I have heard multiple reports of people who have reasoned their way out of delusional conclusions on cannabinoid agonists and 5-HT2A agonists (and dopamine agonists, with lesser evidence).

The most difficult challenge would appear to be kappa opioid agonism, a dissociative state induced by the federally legal herb Salvia divinorum. Most users report being unaware they ingested a substance at all, no awareness of having a body, and no concept of self-identity, coincident with extreme perceptual distortions. I am no longer clear what Bayesian reasoning would even look like for some points in mindspace.

Edit: I thought of another relevant state: delirium induced by anticholinergics. Unlike 5-HT2A agonists where people do not confuse perceptual distortions for reality, in delirious states people do routinely believe that what they are perceiving is actually occurring. Unfortunately these states are widely regarded as unpleasant, and no rationalist I know personally has experimented with sufficiently large doses of anticholinergics.

Replies from: Yvain, MaoShan
comment by Scott Alexander (Yvain) · 2012-08-15T02:39:59.195Z · LW(p) · GW(p)

Availability heuristic seems related, but still doesn't explain why delusions are so much more fixed than ordinary conclusions.

I think dreams are also a good parallel for psychosis, but it's hard to tell how good without having been psychotic.

Replies from: Cosmos
comment by Cosmos · 2012-08-16T00:53:19.660Z · LW(p) · GW(p)

To continue with the bias theme, how about confirmation bias? They settle on the most available theory that fits all the facts, and then it becomes part of their identity; they begin to rally the soldiers. Is their delusion that they are Jesus really that much less sticky than someone's political party?

Replies from: prase
comment by prase · 2012-08-16T19:48:04.358Z · LW(p) · GW(p)

Seems unlikely. First, confirmation bias has its limits and normally is never capable of beating direct observational evidence. Second, people basing their identity on their being Jesus sounds like a plausible idea, but identity based on the fact that my arm isn't paralysed not that much. Third, it takes some time to associate one's own identity and status feeling with an idea - one doesn't become a political partisan overnight - while the anosognosic delusions emerge immediately after the brain is damaged (well, I suppose this is so, but I can easily be mistaken here).

Replies from: TheOtherDave
comment by TheOtherDave · 2012-08-16T20:02:25.353Z · LW(p) · GW(p)

identity based on the fact that my arm isn't paralysed not that much

I dunno. During the period after my stroke where I was suffering from partial right-side paralysis, a lot of the emotional suffering I experienced could reasonably be described as caused by having my identity as a person whose arm wasn't paralyzed challenged. I would probably say "self-image" instead of "identity", granted, but I'm not sure the difference is crisp.

Replies from: prase
comment by prase · 2012-08-16T20:21:43.448Z · LW(p) · GW(p)

Interesting. Did thinking about the paralysis feel similar to (learning a good argument against your favourite political ideology / seeing your favourite sports team lose / listening to an offensive but true remark made by your enemy / any situation in which you fell victim to confirmation bias)?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-08-17T00:23:37.380Z · LW(p) · GW(p)

It did not feel especially similar to any of the examples you list.
The general case is harder to think about... I'm not sure.

comment by MaoShan · 2012-08-17T03:21:17.472Z · LW(p) · GW(p)

I can give some personal anecdotes regarding salvia if you are interested. If I had to come up with a rationalist explanation of the experience, I would say that the affected consciousness accepts, without question, fantastically generated priors as absolute truth, largely ignores actual external sensory input, and even then modifies it to fit the delusion.

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2012-08-19T04:01:57.828Z · LW(p) · GW(p)

Do you think it's at all feasible for someone under the influence of salvia to record their thoughts as they occur? Would it help if they do so often? For example, I write a stream-of-consciousness monologue every day on 750words.com. Would I be competent enough to write down what I'm thinking while under the influence of the drug?

If you lose all sense of self, would you still be able to understand the concept of another person? For example, I wonder if it would be possible for someone under the influence of salvia to answer questions about their mental state.

Considering the description, I'd guess that even if you were physically capable of talking to someone or writing down your experiences, you probably won't be inclined to, am I right? Or if you did speak, you wouldn't be aware of it.

Sorry if these questions are intrusive; I'm very curious about this sort of thing.

Replies from: MaoShan, evand, Mitchell_Porter
comment by MaoShan · 2012-08-20T04:37:56.162Z · LW(p) · GW(p)

Considering that I provoked the questions, I don't consider them intrusive. First of all, due to the extreme distortion of sense of time, the whole episode may occur in less time than it would take to have a useful conversation. However, I have very vivid memories of my stream of consciousness--maybe one of the main reasons that it makes such an impression is that one remembers the whole thing, even if it is difficult to put into words. I'll recount here a few such memories; this is from quite a few years back, but many facets of it changed my mindset.

First of all, I began to feel a little bit dizzy and noticed a kind of echoing effect in the ambient sounds around me. Soon afterward, I got the impression that I was sweating profusely from my temples, and I reached up to see if it was only a feeling, or if it was actually sweat, but could not reliably analyze my hands, due to a sort of increasing pulsation feeling, like when you get up from sleep and straight into a brightly lit bathroom, but involving all of my senses. I began having difficulty moving around, due to the sensation that "down" was now where "north" used to be, so I had to sit down on the floor to avoid falling out the back door (I use the word sensation in an objective sense; at the time I truly believed that gravity had turned ninety degrees). Continuing to sit on the floor while feeling like I was pressed to the floor by centrifugal force, I became aware of whispering sounds all around the room. I discovered that the room, and all of waking life, was filled with ghosts, whispering to each other and observing the living.

Two things to clarify here: The sweating temples feeling happened consistently. Among this and other descriptions, many times "to hear" something also implies "to see" something, and yet it was not photonically visible. The best I can describe is like a subliminal HUD. Maybe more like what I imagine one might sense with echolocation. It's to know the shape of something without seeing it. Also, I am completely aware that these impressions are hallucinations, but actually experiencing it changed my thinking even so.

Another time, I "saw" that all words were made of the word EGGERHEXE. It was as if the air were filled with a 3-dimensional crossword puzzle, and at least one letter of every printed word intersected with EGGERHEXE perpendicularly (as in, only the place where they intersected was visible, making up our dimension of words; all the EGGERHEXEs were in another spatial dimension). Even if a word did not have any letters contained in EGGERHEXE, the curves from the "G" or the straight lines from the "X" were the intersection point. And most special of all, of course, was the printed word EGGERHEXE, which was fully and exclusively made of the intersecting versions of itself, making it the "word within words", which entered an infinite recursion. The whole time I was observing this, I heard "EGGERHEXE" whispered constantly.

Replies from: Multiheaded, OnTheOtherHandle
comment by Multiheaded · 2012-08-20T08:43:04.517Z · LW(p) · GW(p)

Another time, I "saw" that all words were made of the word EGGERHEXE...

Heh. This is a lot like how Erik Davis describes Jewish mystics viewing the Torah as a compressed encoding of all possible texts ever, and the Tetragrammaton, YHWH, as the source of all the words in the Torah.

Replies from: MaoShan
comment by MaoShan · 2012-08-21T03:23:54.571Z · LW(p) · GW(p)

Now we know what they were smoking!

Replies from: Multiheaded
comment by Multiheaded · 2012-08-21T08:46:46.301Z · LW(p) · GW(p)

Yeah, not exactly - Salvia divinorum is native to Mexico - but I've read scholars implying that the Middle Eastern mystics often used psychoactive mushrooms in addition to generic techniques like prayer and fasting.

Replies from: chaosmage, MaoShan
comment by chaosmage · 2012-09-18T21:21:46.638Z · LW(p) · GW(p)

Isn't it much more likely they were brain damaged in a more permanent way? Religious people who use psychoactives tend to openly praise their drugs much like they praise their gods (think soma, peyote, ayahuasca) - Middle Eastern mystics didn't do that. And with malnutrition, rampant child abuse and almost no health care, there's bound to have been enough brain damage around.

comment by MaoShan · 2012-08-22T01:26:47.876Z · LW(p) · GW(p)

That is also implied in The Transmigration of Timothy Archer. Or, maybe some of the Nephites returned to Jerusalem with a stash...

And of course, as Risto_Saarelma mentions in a comment further down, it may be possible to attain similar states through mental exercises without benefit of pharmaceutical remedies.

comment by OnTheOtherHandle · 2012-08-20T04:49:10.160Z · LW(p) · GW(p)

Thanks! That sounds fascinating, if scary. Did any of these experiences affect your beliefs and actions while sober? I've heard of people having life-changing revelations on LSD, for example, although I'd be skeptical of the accuracy of any beliefs suddenly revealed to people while tripping.

I can easily imagine more subtle and potentially helpful behavioral changes, though.

Replies from: evand, MaoShan, Mitchell_Porter
comment by evand · 2012-08-20T05:29:26.718Z · LW(p) · GW(p)

I have had mild but long-lasting effects from revelations under the influence of MDMA and 2C-E. The revelations were personal, not about the nature of reality. I would say that they could generally be described as resulting from a reduced avoidance of thinking about things that I already had plenty of information on, and had basically positive results. Both took some time to integrate afterwards, and the 2C-E trip was at times a somewhat unpleasant look at myself. The MDMA trip was unambiguously pleasant at the time, even considering that I spent time thinking about some fairly unpleasant stuff.

comment by MaoShan · 2012-08-21T03:38:26.974Z · LW(p) · GW(p)

That was something I failed to get across in my reply, I guess. I feel like I owe a part of my mental composition of today to those experiences, I mean, imagining infinity is not the same as experiencing infinity, and even though it was internally generated, the memories and impressions and rewired synapses are very real. I was fully aware when the effects wore off that it was not "revealed knowledge", but it exposed me to viewpoints and thoughts that I might not have otherwise had access to. My description of the events was my flow of thoughts during the events, not my "usual" philosophy. On a side note, as a child I had the unfortunate combination of truth-seeking and logic, and a strong neurological tendency toward magical thinking. Perhaps my familiarity with walking the line between Spock and Q allowed me the ability to interpret the otherworldly impressions with quiet detachment, while simultaneously benefiting from the sense of wonder they conveyed.

comment by Mitchell_Porter · 2012-08-20T05:20:23.422Z · LW(p) · GW(p)

LSD is a source of metaphysical spectacle and entertainment, not of edification. It will give you a lot to think about, but it's not a source of answers, and I mildly recommend against it if you value intellectual achievement.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2012-08-20T06:43:46.461Z · LW(p) · GW(p)

I've understood the claims of LSD therapy to be mostly about fixing psychological hang-ups, like the recent research claim that it helps with alcoholism. This is mostly a separate direction from both entertainment and intellectual achievement. Of course psychological well-being can indirectly lead to more intellectual achievement, and an altered psychological outlook can change the set of hypotheses you will entertain as the starting point for intellectual work. No idea whether the post-LSD hypothesis pool will necessarily be better than the pre-LSD one. If it's larger, then it might help discover some unlikely ideas that actually do pan out when you take the time to think through them off-LSD.

Incidentally, there are some interesting anecdotes that deep meditative states achieved by long-term meditators resemble the states you end up in on LSD. At least MCTB alludes to this.

comment by evand · 2012-08-20T05:24:51.684Z · LW(p) · GW(p)

My personal experience with salvia is limited (2 times, one much more intense than the other), but here are my thoughts.

I don't think I would want to try to record a salvia experience while it was occurring. While I found the experience interesting, valuable, and rewarding, it was also scary, intimidating, and awe-inspiring. It is not something I would want interrupted by things like conversation or writing. The time dilation might well be too profound for that to even work well. Also, I found noises, light, and rapid changes in sense input to be distracting. Having other people move about the room was... scary. I did not experience the extreme disconnect with reality some people describe, but it was a different mindspace in a way that all other substances I've tried were not. Doing anything other than experiencing it to the fullest would seem inappropriate.

(It's possible many of these problems would fade with repeated use. I would consider such a result disappointing, and have no particular desire to attempt to produce it.)

In contrast, I would be happy to talk with anyone about the experience while on any of the other substances I've taken. Depending on mood, I might feel anxious or nervous about talking to someone who was sober, especially if they had no personal experience or were someone I did not know well. Some experiences I've had would make writing about them difficult, because of distractability, visual distortion, a tendency to stop and stare at the beauty of the pencil eraser, etc. Others would be easier. DiPT might be easier to write about than talk about; auditory distortion makes conversation difficult / distracting.

Have you read any books on the subject? There are many good ones out there. I could recommend a couple if you'd be interested, though I haven't read much (or partaken of the substances) lately.

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2012-08-20T05:45:46.434Z · LW(p) · GW(p)

Thanks for the response! :) As you can probably tell, I'm trying to decide whether it's worth my while to dabble in psychoactive drugs.

I'm not actually very curious about having the experience itself; it sounds scary and disorienting. I would, however, be willing to endure that scary experience if it's likely to teach me something interesting and important about myself, which is why I asked about recording it. I'm teaching myself lucid dreaming and meditation in the hopes that I'll be better aware of my own personal quirks/subconscious obsessions, for example. A sudden, massive shift in perception might help bring things to the forefront which I had avoided addressing before.

In your experience, do drugs as a whole actually help with that, given that I'm not all that interested in the experience for its own sake?

Edit: Actually, real science books would be even better, thanks. I previously avoided drugs because Drugs Are Bad, then because Drugs Are Dangerous, and now I figure I ought to do an accurate cost-benefit analysis. And because I'm biased to think drugs are awful things which awful people partake in, I should explicitly seek out some empirically supported benefits.

Replies from: evand
comment by evand · 2012-08-20T14:03:00.841Z · LW(p) · GW(p)

You're welcome!

I would say my experiences with Salvia were somewhat scary and disorienting, but not problematically so. I'm not quite sure how to describe what I mean here, but "scary" should be a very minor part of the description. I certainly felt no need to do anything about it at the time, or surprise that I didn't need to after the fact. Think scary as in "I go rock climbing, and looking down makes me a bit nervous". Except without the adrenaline, and otherwise in a completely different emotional context. I hope that puts it in perspective. Anyway, personally, I would not recommend starting with salvia, though I know a couple people that did exactly that and had good things to say about it.

I would say that drugs can help with what you're asking about, but that it isn't guaranteed. Of course, I didn't go into it hoping for such results at all, so it's probably far more likely that you'll get what you're looking for than not, imho. Set and setting matter a lot. On a related note, if you go into your experience expecting it to be scary, well, you'll probably get what you wished for. Basically, I think you should do this because you're expecting to enjoy the experience, and I think that's an entirely reasonable expectation. I'd also add that my description of salvia as being slightly scary does not apply to any other substance I've taken.

For starters on reading, I would suggest Phenethylamines I Have Known And Loved (aka PiHKAL) by Alexander Shulgin, and its sequel TiHKAL (Tryptamines ...). Alexander Shulgin is a scientist and basically rational thinker, with a strong interest in the human mind. He's a synthetic organic chemist, and personally invented, synthesized, and took what might literally be a majority of the synthetic psychedelics known.

comment by Mitchell_Porter · 2012-08-20T05:28:22.070Z · LW(p) · GW(p)

s/salvia/saliva/g for fun.

comment by CronoDAS · 2012-08-14T02:12:12.124Z · LW(p) · GW(p)

A similar mechanism explains delusions of persecution, the classic "the CIA is after me" form of disease. We apply the Super Mind Projection Fallacy to a garden-variety anxiety disorder: "In what case would I be justified in feeling this anxious? Only if people were constantly watching me and plotting to kill me. Who could do that? The CIA."

My mom (a psychiatrist) was listening to a continuing education program on schizophrenia, and the lecturer said that schizophrenia tends to develop slowly, and in stages; before a person ends up with delusions of persecution, they usually start out by feeling intense fear and anxiety that they can't come up with any explanations for.

Replies from: kentastic
comment by kentastic · 2012-08-15T06:07:45.928Z · LW(p) · GW(p)

Yes, it can develop slowly, but also fast as hell, depending on what pulled the trigger. It's pretty relative, and it varies from person to person.

Also, schizophrenia is not "one single" disease or diagnosis; it's more like many diagnoses grouped under "schizophrenia". Very complicated and rare.

And just because you are delusional doesn't immediately mean you're schizophrenic.

Replies from: juliawise
comment by juliawise · 2012-08-24T02:06:02.407Z · LW(p) · GW(p)

Not that rare. ~1%.

comment by duckduckMOO · 2012-08-13T02:27:20.987Z · LW(p) · GW(p)

"Coltheart et al pretend that the prior is 1/100, but this implies that there is a base rate of your spouse being an imposter one out of every hundred times you see her (or perhaps one out of every hundred people has a fake spouse) either of which is preposterous."

What if their prior on not feeling anything upon seeing their wife is 0? What if most of the reason reasonable people's prior on this is much lower is that the idea is low status, instrumentally bad, etc., while their sincere, reasoned prior is close to 50/50? I notice you called the idea preposterous and something reasonable people wouldn't take seriously, which are both quite status-ey. So if their aversion to instrumentally bad ideas and/or their aversion to ideas people will think them crazy for gets switched off, they can easily get the wrong answer. Perhaps a fear of being fooled, or a fight-or-flight paranoia spiral, could be what makes them think so.

I have no idea if any of that is true.

Replies from: selylindi
comment by selylindi · 2012-08-14T17:10:49.509Z · LW(p) · GW(p)

Similarly, I think the criticism of Coltheart described here was flawed because it made the prior too specific. How often do you see a person at a distance or facing away, "recognize" them as a loved one, and then realize you were wrong when the person comes closer or turns around? It's not often, but it happens enough that we all know that feeling of sudden non-recognition. I often see it in children who come up to me expecting to find their father. The prior odds don't have to be for "my wife" versus "an imposter"; they could be for "my wife" versus "not my wife". If that is the case, then the brain-damaged person uses the imposter theory to explain the general "not my wife" endogenous evidence.
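
A minimal numerical sketch of this point, with made-up likelihoods (the probabilities of a missing affective response under each hypothesis are my own illustrative numbers, not Coltheart's or McKay's): the same "no feeling on seeing her face" evidence produces very different posteriors depending on which prior you plug in.

```python
def posterior_not_wife(prior_not_wife,
                       p_no_feeling_if_not_wife=0.9,
                       p_no_feeling_if_wife=0.01):
    """P(not my wife | no affective response) via Bayes' rule."""
    num = p_no_feeling_if_not_wife * prior_not_wife
    den = num + p_no_feeling_if_wife * (1 - prior_not_wife)
    return num / den

# A Coltheart-style 1/100 prior versus increasingly "preposterous" ones:
for prior in (1e-2, 1e-4, 1e-6):
    print(f"prior {prior:g} -> posterior {posterior_not_wife(prior):.4f}")
```

On these made-up numbers a 1/100 prior leaves "not my wife" close to a coin flip after a single observation, while a one-in-a-million prior keeps it negligible, which is roughly the shape of the disagreement over how specific (and how small) the prior should be.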

comment by quen_tin · 2012-08-13T20:41:40.571Z · LW(p) · GW(p)

I wonder if the same mechanisms could be involved in conspiracy theorists. Their way of thinking seems very similar. I also suspect a reinforcement mechanism: it becomes more and more difficult for the subject to deny his own beliefs, as it would require abandoning large parts of his present (and coherent) belief system, leaving him with almost nothing left.

This could explain why patients are reluctant to accept alternative versions afterwards (such as "you have brain damage").

Replies from: scav
comment by scav · 2012-08-15T08:56:36.703Z · LW(p) · GW(p)

It seems to me that many people who believe extremely improbable conspiracy theories may well have undiagnosed brain damage. But you probably couldn't get most of them to agree to come in for a brain scan.

comment by ialdabaoth · 2012-11-07T02:27:31.849Z · LW(p) · GW(p)

Prefrontal cortex damage can be really weird. I'd really like to see how these different syndromes manifest in an fMRI.

Contextual preface: my own brand of crazy tends to interfere with getting helped by professionals, so I've done a lot of amateur-level neurobiology research on my own, trying to pin it down. An "inability to update priors" does seem to be a component of it, but it seems primarily triggered by emotional intensity.

Anyone who would like to prod me with Science is extremely welcome to do so.

Replies from: Strange7, Strange7
comment by Strange7 · 2012-11-07T03:01:29.407Z · LW(p) · GW(p)

By what mechanism does it interfere with professional assistance?

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-07T03:11:18.491Z · LW(p) · GW(p)

Twofold:

  1. I tend to display resistance to authority of all kinds (ESPECIALLY therapy), because as much as I try to behave as a rationalist, I appear to actually behave as if I believed that most human beings are strategizing explicitly to inflict maximum emotional harm on me, and that any human being who is "playing friendly" has a deeply sinister game that will inflict maximum harm on me by either playing on my trustfulness ("haha! you thought I was trying to help you!") or playing on my lack of trust ("haha! I tricked you into distrusting a genuine path to getting better!"). I appear to believe that the question of which human beings want to befriend me, and which ones only want to trick me to inflict harm, is only determined after I have chosen who to trust. (Yes, I realize this is absurd.)

  2. I tend to shut down whenever I attempt to motivate to help myself, because as much as I try to behave as a rationalist, I appear to actually behave as if I believed that every choice I make will ALWAYS turn out - retroactively - to be the worst choice I could have made. (Yes, I realize this is absurd.)

Replies from: TimS
comment by TimS · 2012-11-07T04:25:46.919Z · LW(p) · GW(p)

You might look to structured social interactions to help fit your emotional reactions to your intellectual beliefs about social interactions. For example, board games have relatively limited variation in social interaction between people who rate you a 6 and those that rate you a 4 on a 10-point likeability scale. It's a chance to gain additional data at low risk. Look to places like Meetup.com (I'm not sure that's international). Boardgamegeek.com is a chance to see what you might like.

Regarding therapy, keep in mind that good fit between therapist and patient is very important. If you haven't gotten good value from therapy but are still willing to try it, finding a new therapist might yield benefit.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-07T04:44:25.249Z · LW(p) · GW(p)

You might look to structured social interactions to help fit your emotional reactions to your intellectual beliefs about social interactions. For example, board games have relatively limited variation in social interaction between people who rate you a 6 and those that rate you a 4 on a 10-point likeability scale. It's a chance to gain additional data at low risk.

Well, board games (and card games, and the like) run into a problem where I'm perceived as focused, smart, and competent, so everyone tends to team up to eliminate me quickly - so I tend to get a lot of people actually reinforcing the idea that groups conspire against me.

Regarding therapy, keep in mind that good fit between therapist and patient is very important. If you haven't gotten good value from therapy but are still willing to try it, finding a new therapist might yield benefit.

Yeah, back when I had money for therapy, I shopped around a lot. Anymore, well... you get what you pay for.

Replies from: Strange7, Kindly
comment by Strange7 · 2012-11-09T22:44:02.800Z · LW(p) · GW(p)

I'd recommend finding a game where the players are working together against an automated hostile environment, such as Zombicide. If it seems like you have a workable plan, the other players will go along with it out of self-interest if nothing else. (D&D /can/ work like that, but there are a lot of other tricky factors when it's a GM rather than a program)

As for emotional intensity... try to find some little ritual that relaxes you, like sitting still with your eyes closed and breathing slowly in and out ten times, and start doing it at semi-random times during the day. Once that becomes habitual, focus on remembering to go through the ritual whenever you start to get excited or upset. There is no plausible mechanism by which following these instructions as intended could cause kidney failure.

If self-improvement fails, what sorts of things do motivate you to act?

Absurdity is a tricky thing. Have you ever tried constructing an explicit formulation of your inferred emotional beliefs and (temporarily) acting as if it was an accepted part of your intellectual beliefs, with the goal of seeing it torn down?

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-10T04:56:39.939Z · LW(p) · GW(p)

I'd recommend finding a game where the players are working together against an automated hostile environment, such as Zombicide. If it seems like you have a workable plan, the other players will go along with it out of self-interest if nothing else. (D&D /can/ work like that, but there are a lot of other tricky factors when it's a GM rather than a program)

I've done stuff like this; in some situations, that works reasonably well, but in others I wind up sending out flags that I'm too low-status to "deserve" being listened to, no matter how reasonable or workable my plans are.

If self-improvement fails, what sorts of things do motivate you to act?

For a very long time, fear motivated me to act, but that wore out. After that, shame motivated me to act, but that's almost fully eroded. I don't know what I'll have once shame runs out.

Have you ever tried constructing an explicit formulation of your inferred emotional beliefs and (temporarily) acting as if it was an accepted part of your intellectual beliefs, with the goal of seeing it torn down?

I have done exactly and explicitly this - I got the idea, weirdly enough, from Aleister Crowley via Robert A Wilson. Unfortunately, I'm VERY good at crafting mindsets / "reality tunnels" and following them - consciously embracing my inferred emotional beliefs tends to reinforce them, not tear them down. I can enter a sort of "1984" mode where holding onto my beliefs is explicitly more important than my own survival, and relish the self-destruction that the absurdity of my beliefs inflicts upon me.

Replies from: Strange7
comment by Strange7 · 2012-11-11T19:25:02.330Z · LW(p) · GW(p)

Aha! In that case, possibly what you need is a code of honor. Lay down some rules of constructive behavior (I'd recommend studying a variety of historical precedents first, particularly the ways in which they can go wrong... Bushido, Miss Manners, etc.) and pretend to be the sort of person who thinks that following those rules is the Most Important Thing.

Done correctly, this lets you stop worrying about whether some other choice would have had a better outcome, since in any given situation there is only one honorable course of action. Simply calculate what the correct action is, and follow it by rote. Under some circumstances honor may compel you to trust someone whom most people would not, pass up opportunities for personal gain, dive into a frozen lake to rescue a complete stranger, openly defy the law, or otherwise engage in heroically self-destructive behavior, but it is entirely possible for the gains (from following a calculated strategy, and from other people learning to trust and rely on your consistent behavior) to predominate.

This may be controversial, but I would recommend against keeping an explicit, external record of how honorable or dishonorable your behavior has been. A journal or blog can be useful in other ways, but the plan here is eternal striving toward an ideal, not 3% improvement over last month.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-11T20:32:30.940Z · LW(p) · GW(p)

I actually have a code of honor, and operate explicitly as if those rules are the Most Important Thing.

Rule 0 is "Should does not imply can; should only implies must." - or, put another way, "Just because you cannot do something does not excuse you for not having done it."

Rule 1 is "Always fulfill other peoples' needs. If two people have mutually exclusive needs, failing to perfectly fulfill both is abject failure."

Rule 2 is "All successes are private, all failures are public."

Rule 3 is "Behave as if all negative criticisms of you were true; behave as if all compliments were empty flattery. Your worth is directly the lower of your adherence to these rules and your public image."

Past 3 the rule-sorting gets fuzzier, but somewhere around rule 5 or 6 is "always think the best of people", around rule 7 is "It's wrong to win a challenge", somewhere around rule 10 is "losers suck".

Replies from: Strange7, army1987
comment by Strange7 · 2012-11-13T04:00:32.789Z · LW(p) · GW(p)

Every rule I see there seems to be you shooting yourself in the foot. I was thinking of something which would produce exactly one correct course of action under most reasonable circumstances, whereas you seem to have quite rigorously worked out a system with fewer correct courses of action than that.

How comfortable are you with arbitrarily redefining your code, voluntarily but with external prompting? I mean, given the ambient levels of doom already involved.

comment by A1987dM (army1987) · 2012-11-13T13:03:30.591Z · LW(p) · GW(p)

Rule 0 is this one, and Rule 1 is a subcase of it, but rules 2 and (especially) 3 wouldn't work for me -- I seem to function better when my status and (especially) my self-esteem are high than when they're low. And I don't understand Rule 7.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-13T20:44:09.363Z · LW(p) · GW(p)

The thing is, my rules have evolved to deal with the fact that I've ALWAYS been low-status. Most of my rules have evolved to ensure that my self-esteem stays low, because as a child and young adult, I was repeatedly punished whenever my self-esteem exceeded that of my high-status superiors. So, for me, destroying my own self-esteem and status are defensive mechanisms, designed to prevent the pack from tearing me apart (sometimes literally and physically).

Also, rule 0 ("Do the impossible") is great if you're some kind of high-status wunderkind like Eliezer, but when you're some scrawny little know-it-all that no one WANTS to succeed, it's just an invitation to get lynched, or sprayed in the face with battery acid, or beaten with a lead pipe, or sodomized with a baseball bat.

And once you're in the domain of the "impossible", you lose access to even those systems that have been put in place explicitly to protect people from being sodomized with a baseball bat or sprayed in the face with battery acid, because the bad people want it to happen, and the good people are incapable of acknowledging that "modern society" is still that capable of savagery.

I misspoke in some of my other threads - I'm not stupid, compared to most of the people here. I'm just optimized for things like "talk my way out of a police officer putting a gun in my face and joking that no one would care enough to look for the body", rather than things like "give a rousing TED talk". I'm more optimized for "figure out which pack of young college-age males is more likely to attempt to dislocate my shoulders as a game" than "figure out which group of venture capitalists is more likely to fund my start-up".

And frankly, looking at the world that way, I think I'd rather be dead than continue to perform in this environment. So all my attempts at "motivation" and "effort" get tainted by that evaluation.

Replies from: NancyLebovitz, Strange7, army1987, Abd
comment by NancyLebovitz · 2012-12-03T16:59:50.429Z · LW(p) · GW(p)

That makes your situation make more sense. You might find Scott Sonnon's work useful -- he started out from a situation roughly as bad as yours (possibly fewer death threats, but with relentless bullying, learning disabilities, and a connective tissue disorder) and was able to put a good life together, including high achievement. He works specifically with lowering one's panic level.

comment by Strange7 · 2012-11-13T21:42:08.352Z · LW(p) · GW(p)

The resources on this site seem to be mostly oriented toward raising somewhat above-average nerds up to truly exceptional levels. Sounds like you need a different set of resources, for a different sort of step up, possibly something like 'feral/marginal' up to 'serving and being protected by a worthy master.'

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-13T21:46:11.057Z · LW(p) · GW(p)

Sure, but the problem is that I still have all the status-seeking instincts of an above-average nerd. I'm no good serving a master, worthy or otherwise. When I was younger, my problem was that every master I served was demonstrably less intelligent than I was, so I spent a lot of time trying to grant the wishes they would have made if they were smart enough to wish right, rather than granting the wishes they did make.

In status-oriented situations, this is a HUGE FUCKING MISTAKE, and taught me to understand that I am a bad samurai.

In the past few years, I've been ronin for so long that my bushido has gone rusty - and anyways, in this corporate market, no one wants a ronin in the first place.

Replies from: Strange7
comment by Strange7 · 2012-11-13T22:01:19.156Z · LW(p) · GW(p)

There are non-corporate jobs. Personally, I sort scrap metal.

Perhaps we could come up with a pitch for an autobiography disguised as an anti-self-help book, "How to completely cripple yourself in just six years, with no drugs, exercise, or gimmicks!", put it on Kickstarter, and see how much money people throw at you?

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-13T22:07:45.628Z · LW(p) · GW(p)

*laugh* It at least has the charm of complete truth in advertising.

Replies from: Strange7
comment by Strange7 · 2012-11-13T22:47:11.421Z · LW(p) · GW(p)

I must disagree, based on the technicality that there was actually some strenuous physical exercise involved with the volunteer firefighting thing.

On a more serious note, would you actually like to try doing this? Whatever else is wrong, you're self-evidently capable of expressing yourself coherently and concisely in text. Most of the other prerequisites of being an author can (at least in principle) be handled as arm's-length transactions, which minimize the need for any sort of personal trust.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-13T22:52:08.233Z · LW(p) · GW(p)

I'm willing to entertain any idea; can you describe further? (note: private messages on this site have not been appearing reliably for me. Is there an easier process for identity exchange?)

Replies from: Strange7
comment by Strange7 · 2012-11-13T23:46:43.804Z · LW(p) · GW(p)

How silly are you willing to be about the identity-exchange thing? I could, for example, give you my username on Nightstar's forums, compromise of which would cost me nothing. You create an account there, send me a PM through that forum, I reply with some piece of information which you then repeat in reply to this comment, and (a secure channel having been established) I could then send you my e-mail address through Nightstar's private messages.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-13T23:51:54.293Z · LW(p) · GW(p)

Heh, that's a little more elaborate than necessary, I think. I'm bdill(at)asu(dot)edu; it shouldn't be too problematic to make that public.

comment by A1987dM (army1987) · 2012-11-13T21:28:08.553Z · LW(p) · GW(p)

OH GOD OH GOD OH GOD And I thought of myself as someone who used to be low-status...

Anyway, "do the impossible" was intended to be a paraphrase of your Rule 0, which apparently I had misunderstood.

comment by Abd · 2012-11-14T03:28:23.174Z · LW(p) · GW(p)

And frankly, looking at the world that way, I think I'd rather be dead than continue to perform in this environment. So all my attempts at "motivation" and "effort" get tainted by that evaluation.

A certain kind of personal trap has been laid out and described, quite well. There is a set of ideas or "takes" on reality that have been accepted as real, but ideas and takes are never real. The error is widespread and normal, even encouraged, but when the content goes awry, the results can be devastating.

The key in the above statement is "this environment." There is no "this environment." As Buckaroo Banzai said, "Wherever you go, there you are." Any environment contains ample evidence to support almost any interpretation, and our ability as human beings to invent interpretations is vast, so everywhere we look, we can find what we have believed.

We may imagine that the goal is to invent interpretations that are "true." But interpretations are neither true nor false. The problem with the value-laden interpretations being invented here is the effects they cause. There are useful interpretations, that empower us, and ones that don't.

There are two kinds of interpretations. The first, and fundamental kind, is predictive, it takes raw sensory data and predicts what is coming next. That's not the problematic kind, though if we get stuck in an inefficient predictive mode, believing our predictions are "true," confirmation bias can still strike. Still, this kind of interpretation can be readily tested.

The problem is in the second kind of interpretation, the division into good and bad, sane and insane, and hosts of these higher-level interpretations. They are much further from reality than the first kind of interpretation, and it is far more difficult to test them. How do we test if the world ("this environment") is actually good or evil, friendly or hostile?

We are continually creating our world, but we imagine that we are only discovering it. So we are easily victims of "how it is." Yet we make up "how it is"! That's a judgment, it is actually a choice.

We imagine that we are constrained in our choices by our identity, but the identity does not exist. That's ancient rationality: the self is an illusion. Let's put it this way: if it comes from causation from the past, that's not a choice, it's just a machine.

Is there anything other than the machine? You have a choice in how to answer this question! One of the choices is "No." That, then, will create you -- and continue to create you -- as a victim of the past, while at the same time, if you are normal, you still think that you are "real." That's actually inconsistent.

Far be it from me to confine anyone to only two choices, but there is at least another choice. "Yes," there is something else, which can be experienced. But it is not a "thing" other than the machine. We are machines, but what we don't know is the capacity of the machine. It may be that the machine can do things we never dreamed of.

Including, by the way, connecting with other people so that we are no longer limited by individual identity. Doing this may take training, it is not necessarily automatic for all of us, and especially not for those of us who were asocially intelligent. (Like me, for example.)

It's highly likely that our friend here has experienced situations like what he describes, and being caught in a belief that this defines his future is obviously painful. But what do those situations have to do with today and tomorrow, unless he keeps recreating them?

ialdabaoth, I hope you won't give up. I don't think you need to learn something new, exactly, you need to unlearn stuff that you have accepted routinely, and for a long time. Rather than MoreRight, you need to be LessWrong. See what remains when you start dropping stuff that maintains the trap, that doesn't help you.

You will continue to think the thoughts that you thought, but you don't have to believe them. The ancient technique is to identify them as what they are, made-up interpretations, chatter, coming from the past. Some will be useful, so use them. Many will be other than that. Keep your eyes open, you will know the difference. Test ideas, don't imagine that they are truth. They are tools.

comment by Kindly · 2012-11-07T05:46:39.603Z · LW(p) · GW(p)

Well, board games (and card games, and the like) run into a problem where I'm perceived as focused, smart, and competent, so everyone tends to team up to eliminate me quickly - so I tend to get a lot of people actually reinforcing the idea that groups conspire against me.

You could play games where this is not something people can really do. For example, Settlers of Catan would be a bad choice, but Apples to Apples would be a good one.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-07T09:05:04.160Z · LW(p) · GW(p)

You could play games where this is not something people can really do. For example, Settlers of Catan would be a bad choice, but Apples to Apples would be a good one.

Is there a good way to make such games enjoyable?

Replies from: TimS, Kindly
comment by TimS · 2012-11-07T17:42:04.203Z · LW(p) · GW(p)

Let's remember that the purpose of this activity is to give you a safe opportunity to have social interactions. Hopefully, this will help you become more comfortable with the idea that other people do not interact with you for the purpose of causing you distress. To that extent, beware trivial inconveniences.

Still, losing is no fun - you might not be able to force yourself to keep up something that only might be helpful but is not enjoyable. Games have a variety of mechanics for preventing attack-the-leader behavior based solely on player reputation.

First, you can anonymize player input. That's what Apples to Apples does. But it is a light party game (not my cup of tea).

Second, you can restrict the player's ability to target specific other players. Dominion works that way - generally, attacks target everyone at the table equally.

Third, you can pick games with much higher complexity. One of my favorite games, Brass, is at least an order of magnitude more complex than a simple game like Monopoly. You are unlikely to find that others target you simply because you are smart and analytical when it's almost a prerequisite to play. In fact, it might be worth some time looking at Boardgamegeek (warning: potential time-sink) to find interesting-looking games where your analytic nature is unlikely to make you a target.

I really do think that practice in safe social interactions will prove helpful to you, both because it provides data to adjust your social predictions and because improving social skills will make you more effective at avoiding unpleasant social interactions.

comment by Kindly · 2012-11-07T14:34:59.734Z · LW(p) · GW(p)

I've never tried forcing myself to like a game, but why do you think that you need to?

There are very many games in which you win by doing better than other players and you can't really make specific other players do worse. Odds are you'll like some of them.

There's Dominion or Race for the Galaxy. There's trivia games. In general, many games classified as "party games" are good, but not all: Mafia, for example, would be a terrible choice. There's cooperative games like Pandemic.

There's also two-player games (like chess) in which you at least won't have a group teaming up against you, or team games (like spades) in which you'll have (at least) one person on your side.

comment by Strange7 · 2012-11-20T00:25:20.714Z · LW(p) · GW(p)

Before I prod any further, what would your preferred outcome be?

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-20T01:03:20.540Z · LW(p) · GW(p)

In the most abstract? Some way to demonstrate to people (including myself) that I'm a sapient being that deserves respect, and not a worthless, lazy, broken, scary parasite.

More concretely, some mechanistic description of why I've had trouble operating within existing social norms, and why I tend to operate under different base assumptions than others - preferably a description that might suggest methods of interacting with the human world that allow me to maintain my dignity and self-respect, without having to immediately acknowledge my abject worthlessness and helplessness as a unilateral precondition for requesting assistance.

It would be nice if someone could point at a bit of my brain, or a specific pattern of answers on behavioral tests, and say "you follow this descriptive pattern which we've labeled X, whereas most people follow this other descriptive pattern which we've labeled Y. There's a lot of research that shows that X does not interact well with Y", in a way that isn't an obvious attempt to reinforce their own social assumptions against a threatening Other.

comment by juliawise · 2012-08-27T21:24:33.383Z · LW(p) · GW(p)

In the hospital where I worked, there was a woman who was able to articulate that it was very unlikely that her neighbor could read her mind. But, she reasoned, there were a lot of people in the world, so surely someone could read minds. And she had the bad luck to live next door to that person.

So sometimes people are able to acknowledge that their beliefs are statistically unlikely but still believe them.

comment by Kaj_Sotala · 2012-08-14T10:48:30.463Z · LW(p) · GW(p)

Feedback: I thought that this post was interesting and at times quite amusing. However, I didn't upvote (nor downvote) because I felt that the concerns you discussed under the open questions section were serious enough that this post could basically be summed up as "here are some theories which feel like they might be on the right track, but basically we're still clueless".

Replies from: orthonormal
comment by orthonormal · 2012-08-19T18:35:26.718Z · LW(p) · GW(p)

I want to see more posts that explain the current state of knowledge of interesting rationality-related fields, and that explicitly state what questions are still troubling. Thus I upvoted the post.

comment by Mart_Korz (Korz) · 2019-05-01T20:27:08.939Z · LW(p) · GW(p)

[I am unsure whether it makes sense to write a comment to this post after such a long time, but I think my experience could be helpful regarding the open questions. I am not trained in this subject, so my use of terms is probably off and confounded with personal interpretations.]

My personal experience with arriving at and holding abstruse beliefs can actually be well described by the ideas in this post, if complemented by something like the Multiagent Models of Mind [? · GW]:

For describing my experience, I will regard the mind as consisting loosely of sub-agents, which are inter-connected and coordinating with each other (as in Global Workspace Theory). In healthy equilibrium, the agents are largely aligned and contribute to a single global agent. Properties of agents include 'trust in their inputs' and 'alertness/willingness to update'.

Now to my description: For me, it felt as if part of my mind lost some of its input-connections from other parts, increasing its alertness (something fundamentally changed, thus predictions must be updated) and also crippling feedback from the 'global opinion'. This caused drifting behaviour of the affected sub-agent, as it updated on messy/incomplete input, while not being successfully realigned by other sub-agents. After some time, the impaired sub-agent would either settle on a new, misinformed model (allowing its alertness to settle) or keep grasping for explanations (alertness staying high, maybe because more alert-type input from other agents remained).

The rest of my mind experienced a sub-agent panicking and then broadcasting eccentric opinions in good faith, while either not being impressed by contradictions or erratically updating to warped opinions loosely connected to input from the other agents. As the impaired agent felt as if it would update to contradictions (but didn't), the source of the felt alertness ("something is very wrong") was elusive and it became natural to just globally adjust to the sub-agent to restore coherence. Thus, internal coherence was partially restored at the cost of deviating from common sense (creating an Ugh [LW · GW] Field [LW · GW] in confrontations with contradicting experiences).

Should my experience be representative, the decision to accept a delusional idea is not based solely on it being optimal for explaining global sensory input. Instead, one of the sub-agents fails to properly update on global decisions, but still dominates them whenever it is active, since all the other agents do keep updating*. In this view the delusion is actually the best explanation of the sensory input, conditioned on the impaired sub-agent being right.

*) There should be some additional responses, like generally decreasing the 'trust in input' or possibly recognizing the actual source of the problem. The latter would require confronting the Ugh Field, which should take a lot of effort.
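
As a toy illustration of this picture (the update rules and numbers below are my own assumptions, not anything from Global Workspace Theory or from the description above): one sub-agent receives distorted input and stops hearing the consensus, the healthy sub-agents keep updating toward both the evidence and the consensus, and the group opinion ends up dragged away from what the evidence supports.

```python
# Toy model; all constants are illustrative assumptions.
evidence = 0.2            # what the senses actually support
distorted_evidence = 0.9  # what the impaired sub-agent "sees"
agents = [0.5] * 4        # beliefs of four sub-agents, each in [0, 1]
impaired = 0              # the agent that no longer hears the consensus

for step in range(200):
    consensus = sum(agents) / len(agents)
    for i, belief in enumerate(agents):
        if i == impaired:
            # updates only on its own distorted input, ignoring the group
            agents[i] = belief + 0.3 * (distorted_evidence - belief)
        else:
            # updates toward the evidence and toward the group consensus,
            # which the impaired agent keeps pulling away from the evidence
            agents[i] = belief + 0.3 * (evidence - belief) + 0.3 * (consensus - belief)

print("impaired:", round(agents[impaired], 2))
print("healthy:", [round(b, 2) for b in agents[1:]])
print("consensus:", round(sum(agents) / len(agents), 2))
```

On these numbers the impaired agent settles near 0.9, the healthy agents near 0.34, and the consensus near 0.48, well away from the 0.2 the evidence supports: coherence is partially restored at the cost of deviating from common sense.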

comment by Epiphany · 2012-11-07T02:29:36.132Z · LW(p) · GW(p)

Related Research:

Harvard did a study on LLI (low latent inhibition, meaning that you block out less stimulus and so may have a lot more ideas to sort through) and discovered that people with high LLI and high IQs tend to be more creative, whereas people with low IQs and high LLI are more likely to be schizophrenic. This may be because people with higher IQs are able to evaluate a larger number of ideas, whereas those with lower IQs may find themselves overwhelmed trying to do so.

This suggests that schizophrenic people could benefit from assistance with processing their ideas. It also suggests that teaching reasoning skills all by itself might not be enough for many of them. If a key part of the problem turns out to be that they're generating more weird ideas than they can process, it may be more useful to have someone to talk it over with.

Then again, if Bayes is faster than whatever technique they're using, it could theoretically bring a lot of them over that "sanity waterline" threshold if it makes them able to judge ideas faster than they generate them.

comment by Epiphany · 2012-11-07T02:07:11.652Z · LW(p) · GW(p)

A Related Experiment:

I once read about an experimental mental hospital for people with schizophrenic symptoms in California called Soteria House.

At Soteria house, the philosophy was to let the mental patients do whatever they wanted with the exception of hurting people. They got to run around naked if they wanted to, and there was a room for them to break things in (with breakable objects).

The staff was trained on a method to help the schizophrenics sort out reality from delusion. The patients were assisted by being told which things others couldn't see and were asked to interpret those things as they would a dream. The result was that most of them were better in three months, were able to be independent in six months, and a very low proportion of them (I think 15%?) had another schizophrenic episode.

This experiment was repeated at another location in California, though I forgot the name of the sister house. You can also check out Soteria Bern in Switzerland.

I think it may have been important that the emphasis was on "try interpreting that a different way" instead of "that isn't real" - because there is likely to have been some emotional or belief content in what they were experiencing that they needed to process (for the same reason we have to process feelings and can't just repress them). It was probably also very important that the patients didn't feel trapped. If you feel trapped, you're less likely to trust the people helping you. It might be hard for a person who is already confused about reality to tell whether someone is gaslighting them. It probably takes a lot of trust to accept this type of guidance.

They weren't using Bayes specifically to convince patients that their delusions weren't real, but I think this is still relevant because they were essentially getting the patients to interpret their delusions in a more rational way.

comment by torekp · 2012-08-14T00:40:56.557Z · LW(p) · GW(p)

Do you have any evidence of brain damage in schizophrenia that isn't explainable by drug use (especially antipsychotics) and is fairly common among schizophrenics?

Regarding arguing oneself out of delusion, cognitive therapy for schizophrenia has a decent track record. More info on request, after my wife gets home (she's a psychologist).

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2012-08-14T01:38:34.732Z · LW(p) · GW(p)

See for example http://www.schizophrenia.com/research/schiz.brain.htm on structural brain damage. For functional brain damage, read the above-linked paper by McKay where he starts talking about change in patterns of prediction error signal activation in the right prefrontal cortex.

Replies from: torekp
comment by torekp · 2012-08-16T10:18:54.264Z · LW(p) · GW(p)

Here's a better source (PDF), link-chained from yours.

On brain changes due to drug use:

Medication-Matched Subjects. To address the possibility that neuroleptic exposure and/or lower IQ could have determined differential gray matter loss in the schizophrenics, we mapped 10 serially imaged subjects referred to the childhood schizophrenia study who did not meet diagnostic criteria for schizophrenia (labeled Psychosis NOS - Not Otherwise Specified - in DSM terms; (24)). These subjects received identical medication to the patients in this study through adolescence, primarily for control of aggressive outbursts, and at follow-up, none had progressed to schizophrenia (35) but all continued to exhibit chronic mood and behavior disturbance. While medication is unlikely to be responsible for a loss profile that moves across the brain, clozapine, for example, may increase Fos-immunoreactivity in the thalamus (36), and might, logically, modulate rates of cortical change. (In addition, brain regions important for motor function, including the basal ganglia, show increased volumes in response to some older, conventional neuroleptics, although these effects are renormalized after treatment with the atypical antipsychotics used in this study). As seen in Figure 6, while the non-schizophrenic group did show some subtle but significant tissue loss, this was much less marked than for the schizophrenics. Moreover, no temporal lobe deficits were observed in the PNOS group (Fig. 6), suggesting that the wave of disease progression into temporal cortices may be specific to schizophrenia, regardless of medication, and also regardless of gender or IQ. Intriguingly, the psychosis NOS subjects, who share some of the deficit symptoms but do not satisfy criteria for schizophrenia, exhibited significantly accelerated gray matter loss in frontal cortices relative to healthy controls, in approximately the same, but a less pervasive, region than schizophrenics (a significant loss of 1.9%±0.7%/yr. was detected in both left and right superior frontal gyri; p<0.03).

So the answer to my question appears to be that drugs may or may not be doing some brain damage, but not nearly as much as the whole change seen in schizophrenia.

comment by Will_Newsome · 2012-08-13T21:31:14.895Z · LW(p) · GW(p)

(Well-written post. There are more interesting subjects in the general 'schizophrenic reasoning' space though. If anyone ends up writing more on the subject I'd like if they sent me a draft; I know quite a bit, both theoretically and experientially.)

comment by Spurlock · 2012-08-13T12:37:53.708Z · LW(p) · GW(p)

but it's also impossible to convince him he's Alexander the Great (at least I think so; I don't know if it's ever been tried).

At the very least (pretending that there are no ethical concerns), it seems that you ought to be able to exaggerate a patient's delusions. "We ran some tests, and it turns out that you're Jesus, John Lennon, and George Washington!".

To this same question, I can't help but notice that the brain damage being discussed is right-side, aka "revolutionary", brain damage. So if it turns out that it isn't possible to get a paranoid patient to switch from the CIA to the KGB, it might simply be a case of inability to discard hypotheses (it seems like the original delusion, the CIA, wouldn't count because for most of us "the CIA isn't following me" isn't an explicit belief). But then, I am not a neurologist or psychologist, so the pool of data I'm working with is 100% limited to that which has been written about by Yvain on LW :-)

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2012-08-14T01:44:33.781Z · LW(p) · GW(p)

The patient who believes he is Jesus and John Lennon will pretty much agree he is any famous figure you mention to him, but he never seems to make a big deal of it, whereas those two are the ones he's always going on about.

Replies from: Alicorn
comment by Alicorn · 2012-08-14T01:52:09.022Z · LW(p) · GW(p)

Are random people allowed to visit harmless psych patients with those patients' consent? This sounds fascinating.

Replies from: Tuukka_Virtaperko, juliawise
comment by Tuukka_Virtaperko · 2012-08-24T03:01:21.917Z · LW(p) · GW(p)

Hehe. I'm a psych patient and I'm allowed to visit LessWrong.

Replies from: Alicorn
comment by Alicorn · 2012-08-24T03:38:43.938Z · LW(p) · GW(p)

Do you have fascinating delusions you would like to let us try to do Bayes to?

Replies from: DaFranker
comment by DaFranker · 2012-08-24T03:55:11.660Z · LW(p) · GW(p)

A better phrasing might be to contextualize it from someone else's viewpoint. The person having the "delusions" might not perceive them as such, and might not find them particularly fascinating at all.

Replies from: Alicorn
comment by Alicorn · 2012-08-24T06:35:05.871Z · LW(p) · GW(p)

I think it was a fair response in context. I did write it tongue-in-cheek.

comment by juliawise · 2012-08-24T02:13:06.599Z · LW(p) · GW(p)

Can you think of a way to do this that would not feel like a freak show? Psych hospitals are full of staff who actually need to talk to the patients, plus students and interns and the patients' friends and family who visit. Almost all the patients get tired of being asked about how they're doing, since they have to explain how they're doing so many times a day to a lot of near-strangers. Introducing tourists seems like a bad plan.

Replies from: DaFranker
comment by DaFranker · 2012-08-24T03:53:55.964Z · LW(p) · GW(p)

I think the idea was to find psych patients willing to speak with one or more Bayesians about whatever interesting beliefs that got them there in the first place, and let them furiously jot down notes and do all kinds of arcane math in the process.

comment by Ronny Fernandez (ronny-fernandez) · 2012-09-11T03:57:41.934Z · LW(p) · GW(p)

"

"You have brain damage" is also a theory with perfect explanatory adequacy. If one were to explain the Capgras delusion to Capgras patients, it would provide just as good an explanation for their odd reactions as the imposter hypothesis. Although the patient might not be able to appreciate its decreased complexity, they should at least remain indifferent between the two hypotheses. I've never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I'm going to assume it doesn't work. Why not?"

IMHO, all human psychologies have a hard time updating to believe they're poorly built. We are by nature arrogant. Do not forget that common folk often "choose" what to believe after they think about how it feels to believe it.

(Brilliant article btw)

(edit): "Likewise, how come delusions are so specific? It's impossible to convince someone who thinks he is Napoleon that he's really just a random non-famous mental patient, but it's also impossible to convince him he's Alexander the Great (at least I think so; I don't know if it's ever been tried). But him being Alexander the Great is also consistent with his observed data and his deranged inference abilities. Why decide it's the CIA who's after you, and not the KGB or Bavarian Illuminati?"

IMHO, there are plenty of cognitive biases that can explain that sort of behavior in healthy patients. Confirmation bias and the affect heuristic are the first to come to mind.

Replies from: TimS
comment by TimS · 2012-09-11T04:31:42.833Z · LW(p) · GW(p)

"You have brain damage" is also a theory with perfect explanatory adequacy.

If you don't have the right understanding of how the brain works, I'm not sure this thesis adequately explains anything.

By comparison, the expected observations from "Your car has engine damage" is a car that doesn't drive at all, not one that turns right but not left.

comment by wstrinz · 2012-08-23T14:53:22.611Z · LW(p) · GW(p)

Once I understood the theory, my first question was: has this been explained to any delusional patient with a good grasp of probability theory? I know this sort of thing generally doesn't work, but the n=1 experiment you mention is intriguing. I suppose what is more often interesting to me is what sorts of things people come up with to dismiss conflicting evidence, since it sits in a strange place between completely random and clever lie. If you have a dragon in your garage about something, you tend to give the most plausible excuses because you know, deep down, the truth about the phenomenon, so you can construct your explanation around that recognition of the way the world actually is. Delusional patients, by contrast, say things like "this is my daughter's arm" that just don't make any sense, and indicate to us in an eerie way just how deeply they believe their delusions. I'm surprised that, given the contributions from the study of injured brains to neurobiology, there's not a bigger focus on the study of abnormal mental systems in cognitive science and decision theory - not that I'm the first person to wonder this or anything.

comment by OnTheOtherHandle · 2012-08-19T03:56:02.147Z · LW(p) · GW(p)

Is it possible that what specific delusions a patient develops after their brain damage correlates with their experiences before the brain damage? Maybe paranoid schizophrenics in the US tend to think the CIA is after them, but those in Soviet Russia used to think the KGB was? How would these delusions have manifested in the past, before any such organizations existed? Perhaps some of them convinced themselves that God's wrath was being brought down upon them, or that Satan was haunting them.

Also, does Capgras delusion apply to everyone the patient has an emotional reaction to, or just their spouse/parents? If you were a very political person, and felt great pride/joy when looking at your favored political leader, and then you got Capgras delusion, would you assume they were replaced by aliens? What about your teachers, doctors, and friends?

Replies from: Nornagest, prase
comment by Nornagest · 2012-08-19T21:40:23.465Z · LW(p) · GW(p)

Also, does Capgras delusion apply to everyone the patient has an emotional reaction to, or just their spouse/parents? If you were a very political person, and felt great pride/joy when looking at your favored political leader, and then you got Capgras delusion, would you assume they were replaced by aliens? What about your teachers, doctors, and friends?

If there's been any research into this I haven't been able to find it; but a few people outside academia seem to have associated the "Paul is dead" meme from the Beatles era with the Capgras delusion. There's also a number of conspiracy theories that seem to fit the general pattern, including David Icke's reptilian humanoid theory.

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2012-08-19T23:01:36.011Z · LW(p) · GW(p)

Interesting; thanks.

Also, do you know if Capgras delusion only wipes away all previous emotions you associated with faces, or if it also makes it impossible to form new emotions related to other faces? What if, for some reason, the spouse decided to go along with the charade that they were a different person, and managed to convince the Capgras patient to stay married to them anyway? Would the patient eventually form an emotional connection the way normal people do when they meet, date, and marry someone? Or if a Capgras patient had a child after the brain damage, would they associate their child's face with emotions while still considering their spouse and parents to be imposters?

Replies from: adavies42
comment by adavies42 · 2012-08-25T07:18:46.152Z · LW(p) · GW(p)

There is alleged to have been a Capgras patient who wasn't very happy with her marriage beforehand, but decided she liked the "imposter" much better. No cite, I think it was in a TED talk.

comment by prase · 2012-08-19T20:59:05.700Z · LW(p) · GW(p)

Maybe paranoid schizophrenics in the US tend to think the CIA is after them, but those in Soviet Russia used to think the KGB was?

It seems almost certain. At the very least, one would need to know about the CIA's existence to have that sort of delusion.

Also, does Capgras delusion apply to everyone the patient has an emotional reaction to, or just their spouse/parents? If you were a very political person, and felt great pride/joy when looking at your favored political leader, and then you got Capgras delusion, would you assume they were replaced by aliens? What about your teachers, doctors, and friends?

This is a great question to test the emotional reaction hypothesis. I would add: what about their enemies? A negative emotional response is still an emotional response (well, maybe, I wouldn't be so surprised if negative and positive emotions were each associated with a different part of the brain).

comment by MaoShan · 2012-08-17T03:00:16.715Z · LW(p) · GW(p)

I suspect that, especially in dreams, and to a lesser degree in déjà vu, the outputs of place cells can be combined in novel ways that would normally be rejected when fully conscious. I am not aware that anything similar has been discovered regarding familiar people, but if so, it would work in a surprisingly similar way ("Don't I know you from somewhere?"), and would accommodate the typical example. What the unconscious mind composes as a shorthand template for my mother is later detailed, but still contains the "my mother" flag; although my fourth grade teacher has many similar qualities, she has the "my fourth grade teacher" flag. Maybe the reasoning that the RDPC enables is choosing between simultaneous data streams, and diminished or overactive capabilities of the RDPC can cause delusions accordingly.

comment by complexmeme · 2012-08-14T18:01:14.011Z · LW(p) · GW(p)

"You have brain damage" is also a theory with perfect explanatory adequacy.... Why not?

This led me to think of two alternate hypotheses:

One is that the same problem underlying the second factor ("abnormal belief evaluation") is at fault: checking one's own beliefs for abnormality involves the same sort of self-modelling needed for a theory like "I have brain damage" to seem explanatory (or even coherent). The other is that there are separate systems for self-evaluation and belief-probability-evaluation, and that both are damaged in the case of such delusions.

One might take the Capgras delusion and similar delusions as evidence that those systems at least overlap, but there's some visibility bias involved, since people who hold beliefs that seem (to them) to be both probable and crazy are likely to conceal those beliefs (see someonewrongonthenet's comment).

Replies from: MaoShan
comment by MaoShan · 2012-08-17T03:26:04.895Z · LW(p) · GW(p)

"Brain damage makes my brain stop working properly. If I have brain damage, I wouldn't be able to reason like this, therefore I cannot have brain damage. The CIA just told my doctor to say that I do."

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-12T01:28:34.610Z · LW(p) · GW(p)

There's a good check for this.

I have, every 2 years or so since 2002, taken a series of IQ tests and averaged the results together. (Side note: in 1997, an in-person IQ test rated me at 155. This isn't calibrated to the other tests, of course, but it's an interesting anecdote.)

In 2002, my IQ according to this process was 148. In 2004, it was 150. In 2006, it was 145. In 2009, it was 135. In 2011, it was 120. Today, it was 115.

I keep asking myself "now what", but I'm not even sure I'm qualified to answer that question anymore. (This will sound hilariously clichéd, but... I don't FEEL any dumber. It's just become more and more frustrating to think about deep problems. I feel like my domain expertise is just as good as it ever was - but how the hell could I TELL, if the very instrument which measures my expertise is the instrument which is failing?)

Replies from: gwern, MaoShan
comment by gwern · 2012-11-12T04:06:00.856Z · LW(p) · GW(p)

I have, every 2 years or so since 2002, taken a series of IQ tests and averaged the results together.

All the same test? Those are troubling results indeed, since the 2pt change from 2002-2004 looks like a practice effect, but a 35pt fall is surely not a practice effect.

It's just become more and more frustrating to think about deep problems. I feel like my domain expertise is just as good as it ever was - but how the hell could I TELL, if the very instrument which measures my expertise is the instrument which is failing?

Presumably you'd measure your domain expertise by your domain results. That's how most experts get by: lots of domain knowledge, not so much need for fluid intelligence.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-12T04:31:09.631Z · LW(p) · GW(p)

The problem is that, in many situations, I was so poor at playing political games that I wound up accepting other people's political measurements of my domain expertise, instead of accurate, objective measurements. I eventually developed a sort of neurotic "learned helplessness" that makes it nigh-impossible to accept accurate, objective measurements of any of my capacities, if they would have a positive connotation.

comment by MaoShan · 2012-11-12T02:21:27.458Z · LW(p) · GW(p)

Well, there you just said that you don't have the patience for that type of problem, which (unless your area of expertise is identifying patterns of lines) doesn't necessarily mean that you are not extremely well-suited to the work that you do. If you are worried about specific cognitive deficits, test for those; a general IQ test is not going to help identify them.

comment by Eugine_Nier · 2012-08-14T00:03:52.268Z · LW(p) · GW(p)

There must be some fundamental difference between how one draws inferences from mental states versus everything else.

Talking about "drawing inferences from mental states" strikes me as a case of the homunculus fallacy, i.e., thinking that there's some kind of homunculus sitting inside our brains looking at the mental states and drawing inferences, whereas in reality mental states are inferences.

Replies from: Yvain, TruePath, Kaj_Sotala
comment by Scott Alexander (Yvain) · 2012-08-14T01:40:29.397Z · LW(p) · GW(p)

Really? I don't see that at all. The same mental state can be both an inference and a premise for the next inference. For example, "I feel really tired lately -> Maybe I'm sick" seems pretty straightforward, as does "I am a guy and feel really attracted to other guys -> maybe I'm gay".

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-08-14T02:07:47.036Z · LW(p) · GW(p)

You're thinking of the inference as "I don't feel affection when I see her face -> She's not my wife". Whereas another way to think about it is "Her face looks like [insert description of wife's face here] -> She's not my wife".
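
To make the contrast between those two framings concrete, here is a toy Bayesian rendering of the Capgras inference, in the spirit of the post's two-factor discussion. The posterior_odds helper and every number in it are invented purely for illustration; only the structure of the update matters.

```python
# A toy Bayesian version of the Capgras inference. All numbers are invented
# for illustration; only the structure of the update matters here.

def posterior_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds (wife : imposter) by each likelihood ratio
    P(evidence | wife) / P(evidence | imposter)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Evidence 1: her face looks exactly like my wife's face.
# A perfect imposter would also look like her, so this barely discriminates.
lr_face = 0.99 / 0.99

# Evidence 2: I feel no glow of recognition when I look at her.
# Surprising if she's my wife (and my affect system is intact),
# expected if she's an imposter.
lr_affect = 0.01 / 0.90

# A sane prior: perfectly disguised imposters are vanishingly rare.
healthy_prior = 1_000_000  # wife : imposter

print(posterior_odds(healthy_prior, [lr_face, lr_affect]))
# ~11,000 : 1 in favor of "wife" -- the missing affect alone (factor one)
# shouldn't be enough to produce the delusion.

print(posterior_odds(1, [lr_face, lr_affect]))
# ~0.01 : 1, i.e. roughly 90 : 1 in favor of "imposter" -- the delusional
# conclusion only follows if belief evaluation (factor two) is also broken,
# modeled here, very crudely, as a flattened prior.
```

On these made-up numbers, the flattened affective response by itself leaves a sane prior firmly on the side of "wife"; the imposter conclusion only wins once the prior is also neutralized, which is one way to picture what the second factor has to do.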

comment by TruePath · 2012-08-15T09:57:07.897Z · LW(p) · GW(p)

This objection points largely in the right direction, but I don't think it's fair to accuse the view of adopting the homunculus fallacy. After all, the very suggestion is that our brains have circuitry that (in effect) performs Bayesian updating and that neurological damage and psychiatric conditions can cause this circuitry to misbehave. This is a way the brain could have worked. If the view adopted the homunculus fallacy, then the Bayesian updating machinery couldn't, itself, be broken; it could only receive bad input.

However, as I delineate in my comment, we have every reason to believe the brain doesn't have anything like a Bayesian updating module exercising control over all the other brain modules. Instead, the empirical evidence suggests a much simpler structure in which different brain regions vie to control our actions without any arbitration by a master Bayesian updating module. Otherwise, one couldn't explain our inclination to answer wrongly on tests that pit one part of the brain against another, e.g., our mistakes in naming the ink color of text spelling the name of another color (the Stroop effect).

Also, to be pedantic, the mental states aren't inferences. The mental states merely determine behavior patterns that we can (sometimes) usefully describe as making certain inferences.
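
A deliberately crude sketch of that competitive picture, with the module names, latencies, and race rule all invented for illustration (this is not a model from the literature):

```python
import random

# Toy model of an incongruent Stroop trial ("name the ink color of a color
# word") in which two modules race to control the response, with no central
# arbiter weighing their reliability. All parameters are made up.

def word_reading_module(word, ink):
    # Reading the word is fast and automatic; it proposes the word itself.
    return word, random.gauss(400, 80)   # (proposed answer, latency in ms)

def color_naming_module(word, ink):
    # Naming the ink color is slower.
    return ink, random.gauss(500, 100)

def respond(word, ink):
    # Whichever module finishes first wins the race and drives the response.
    answer_w, t_w = word_reading_module(word, ink)
    answer_c, t_c = color_naming_module(word, ink)
    return answer_w if t_w < t_c else answer_c

trials = [("RED", "blue")] * 1000         # incongruent trials only
errors = sum(respond(word, ink) != ink for word, ink in trials)
print(f"error rate on incongruent trials: {errors / len(trials):.0%}")
```

With these invented latencies the fast-but-wrong module wins far more often than a real subject would err (real Stroop interference shows up mostly as slowed responses), but the structural point stands: the conflict is settled by a race between modules, not by a master updater weighing each module's reliability.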

comment by Kaj_Sotala · 2012-08-14T07:35:30.267Z · LW(p) · GW(p)

You can have a module in a certain state and another module which draws an inference from that. No homunculus needed.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-08-14T22:38:35.279Z · LW(p) · GW(p)

Module A doesn't "draw an inference" from the state of module B; that would require module A to have a sub-module dedicated to drawing inferences from module B and evaluating their reliability. Module A simply treats the output of module B as an inference of similar weight to the ones it makes itself.

Replies from: dlthomas
comment by dlthomas · 2012-08-21T17:25:57.301Z · LW(p) · GW(p)

But one or more drawing-inferences-from-states-of-other-modules modules could certainly exist, without invoking any separate homunculus. Whether they do and, if so, whether they are organized in a way that is relevant here are empirical questions that I lack the data to address.

comment by [deleted] · 2014-06-13T08:26:24.798Z · LW(p) · GW(p)

" …modern man no longer communicates with the madman […] There is no common language: or rather, it no longer exists; the constitution of madness as mental illness, at the end of the eighteenth century, bears witness to a rupture in a dialogue, gives the separation as already enacted, and expels from the memory all those imperfect words, of no fixed syntax, spoken falteringly, in which the exchange between madness and reason was carried out. The language of psychiatry, which is a monologue by reason about madness, could only have come into existence in such a silence.

—Foucault, Preface to the 1961 edition[6]"