The Apologist and the Revolutionary
post by Scott Alexander (Yvain) · 2009-03-11T21:39:47.614Z · LW · GW · Legacy · 101 comments
Rationalists complain that most people are too willing to make excuses for their positions, and too unwilling to abandon those positions for ones that better fit the evidence. And most people really are pretty bad at this. But certain stroke victims called anosognosiacs are much, much worse.
Anosognosia is the condition of not being aware of your own disabilities. To be clear, we're not talking minor disabilities here, the sort that only show up during a comprehensive clinical exam. We're talking paralysis or even blindness[1]. Things that should be pretty hard to miss.
Take the example of the woman discussed in Lishman's Organic Psychiatry. After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient "turned her head and searched in a bemused way over her left shoulder".
Why won't these patients admit they're paralyzed, and what are the implications for neurotypical humans? Dr. Vilayanur Ramachandran, leading neuroscientist and current holder of the world land-speed record for hypothesis generation, has a theory.
One immediately plausible hypothesis: the patient is unable to cope psychologically with the possibility of being paralyzed, so he responds with denial. Plausible, but according to Dr. Ramachandran, wrong. He notes that patients with left-side strokes almost never suffer anosognosia, even though the left side controls the right half of the body in about the same way the right side controls the left half. There must be something special about the right hemisphere.
Another plausible hypothesis: the part of the brain responsible for thinking about the affected area was damaged in the stroke. Therefore, the patient has lost access to the area, so to speak. Dr. Ramachandran doesn't like this idea either. The lack of right-sided anosognosia in left-hemisphere stroke victims argues against it as well. But how can we disconfirm it?
Dr. Ramachandran performed an experiment[2] in which he "paralyzed" an anosognosiac's good right arm. He placed it in a clever system of mirrors that made a research assistant's arm look as if it were attached to the patient's shoulder. Ramachandran told the patient to move his own right arm, and the false arm didn't move. What happened? The patient claimed he could see the arm moving - a classic anosognosiac response. This suggests that the anosognosia is not specifically a deficit of the brain's left-arm monitoring system, but rather some sort of failure of rationality.
Says Dr. Ramachandran:
The reason anosognosia is so puzzling is that we have come to regard the 'intellect' as primarily propositional in character and one ordinarily expects propositional logic to be internally consistent. To listen to a patient deny ownership of her arm and yet, in the same breath, admit that it is attached to her shoulder is one of the most perplexing phenomena that one can encounter as a neurologist.
So what's Dr. Ramachandran's solution? He posits two different reasoning modules located in the two different hemispheres. The left brain tries to fit the data to the theory to preserve a coherent internal narrative and prevent a person from jumping back and forth between conclusions upon each new data point. It is primarily an apologist, there to explain why any experience is exactly what its own theory would have predicted. The right brain is the seat of the second virtue. When it's had enough of the left-brain's confabulating, it initiates a Kuhnian paradigm shift to a completely new narrative. Ramachandran describes it as "a left-wing revolutionary".
Normally these two systems work in balance. But if a stroke takes the revolutionary offline, the brain loses its ability to change its mind about anything significant. If your left arm was working before your stroke, the little voice that ought to tell you it might be time to reject the "left arm works fine" theory goes silent. The only one left is the poor apologist, who must tirelessly invent stranger and stranger excuses for why all the facts really fit the "left arm works fine" theory perfectly well.
It gets weirder. For some reason, squirting cold water into the left ear canal wakes up the revolutionary. Maybe the intense sensory input from an unexpected source makes the right hemisphere unusually aroused. Maybe distorting the balance sense causes the eyes to move rapidly, activating a latent system for inter-hemisphere co-ordination usually restricted to REM sleep[3]. In any case, a patient who has been denying paralysis for weeks or months will, upon having cold water placed in the ear, admit to paralysis, admit to having been paralyzed the past few weeks or months, and express bewilderment at having ever denied such an obvious fact. And then the effect wears off, and the patient not only denies the paralysis but denies ever having admitted to it.
This divorce between the apologist and the revolutionary might also explain some of the odd behavior of split-brain patients. Consider the following experiment: a split-brain patient was shown two images, one in each visual field. The left hemisphere received the image of a chicken claw, and the right hemisphere received the image of a snowed-in house. The patient was asked verbally to describe what he saw, activating the left (more verbal) hemisphere. The patient said he saw a chicken claw, as expected. Then the patient was asked to point with his left hand (controlled by the right hemisphere) to a picture related to the scene. Among the pictures available were a shovel and a chicken. He pointed to the shovel. So far, no crazier than what we've come to expect from neuroscience.
Now the doctor verbally asked the patient to describe why he just pointed to the shovel. The patient verbally (left hemisphere!) answered that he saw a chicken claw, and of course shovels are necessary to clean out chicken sheds, so he pointed to the shovel to indicate chickens. The apologist in the left-brain is helpless to do anything besides explain why the data fits its own theory, and its own theory is that whatever happened had something to do with chickens, dammit!
The logical follow-up experiment would be to ask the right hemisphere to explain the left hemisphere's actions. Unfortunately, the right hemisphere is either non-linguistic or so close to it as to make no difference. Whatever its thoughts, it's keeping them to itself.
...you know, my mouth is still agape at that whole cold-water-in-the-ear trick. I have this fantasy of gathering all the leading creationists together and squirting ice cold water in each of their left ears. All of a sudden, one and all, they admit their mistakes, and express bafflement at ever having believed such nonsense. And then ten minutes later the effect wears off, and they're all back to talking about irreducible complexity or whatever. I don't mind. I've already run off to upload the video to YouTube.
This is surely so great an exaggeration of Dr. Ramachandran's theory as to be a parody of it. And in any case I don't know how much to believe all this about different reasoning modules, or how closely the intuitive understanding of it I take from his paper matches the way a neuroscientist would think of it. Are the apologist and the revolutionary active in normal thought? Do anosognosiacs demonstrate the same pathological inability to change their mind on issues other than their disabilities? What of the argument that confabulation is a rather common failure mode of the brain, shared by some conditions that have little to do with right-hemisphere failure? Why does the effect of the cold water wear off so quickly? I've yet to see any really satisfying answers to any of these questions.
But whether Ramachandran is right or wrong, I give him enormous credit for doing serious research into the neural correlates of human rationality. I can think of few other fields that offer so many potential benefits.
Footnotes
1: See Anton-Babinski syndrome
2: See Ramachandran's "The Evolutionary Biology of Self-Deception", the link from "posits two different reasoning modules" in this article.
3: For Ramachandran's thoughts on REM, again see "The Evolutionary Biology of Self-Deception".
101 comments
Comments sorted by top scores.
comment by Psy-Kosh · 2009-03-12T01:31:16.070Z · LW(p) · GW(p)
A couple things: First, this is interesting and wild stuff.
Second: I seem to recall reading a bit more about the cold water in the ear thing. That it actually seems to act almost as the secret "force a hard reboot of the brain" button, and that it's actually managed to get some people out of comas.
Third: to make it work, the water has to be REALLY cold, as I understand it. Not just cold.
Fourth: IIRC, when it's actually done right, when whatever process this triggers is, well, triggered... among other things, the recipient will vomit up everything in their stomach. Apparently this is some sort of automatic reaction. Anyone who's trying this on themselves should probably make sure someone else is nearby, and make sure to be face down and stuff so they don't end up choking on their own vomit.
↑ comment by kendoka · 2013-09-07T21:47:15.448Z · LW(p) · GW(p)
I am left-handed and suffer from depression and anxiety. Perhaps this is due in my case to the dominance of the right hemisphere, which processes more negative emotions, instead of the usual left side, which seems to process positive ones. As odd as it sounds, when I apply ice water to my right ear my symptoms are temporarily alleviated and I feel a positive mood lift.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T00:01:39.891Z · LW(p) · GW(p)
(Visualizes the Confessor trying to explain that to the Superhappies.)
The idea that the right hemisphere is the hypothesis-changer seems to me to be countered by the cited evidence: it doesn't seem very compatible with the effects wearing off ten minutes later, and the patient denying having ever admitted to the paralysis. This seems to fit much more strongly a domain-specific ability to model certain facts, where the model substrate is sleeping, then wakened, then goes back to sleep again.
comment by BrandonReinhart · 2009-03-12T06:50:27.113Z · LW(p) · GW(p)
Wow, Google Scholar is awesome.
According to Wikipedia, vestibular stimulation has been used by audiologists to examine certain syndromes: depending on the temperature of the water your eyes turn in different directions.
From there it was apparently used in inquiries into vertigo. This study contains MRI results of individuals undergoing vestibular stimulation and this one is working to break down which parts of the brain are responsible for which effects of vestibular stimulation.
Looking at a few other studies' abstracts (I can't afford to buy all these studies!) leads me to suspect that a whole lot of brain parts are affected by the process of squirting water into your ear and that it may take some time to isolate which parts are responsible for which effects. There are also different types of effects: the effect of the temperature change on tissue, increased blood flow, blood flowing with modified temperature through certain areas, etc.
Is it too obvious to say one should be wary about practicing techniques drawn from neuroscience journals upon oneself?
There is one area (I found in overviewing the subject) where the cold water trick results in a specific result in a damaged brain but no related result in a functional brain: covert attention. People who have difficulty focusing attention on one side of their body are marginally corrected by the cold water trick biasing their attention toward the bad side. People who have no issues with covert attention don't become over-biased when subject to the cold water trick. The paper relates this (seemingly?) to the vertigo effect: "In particular, they argue against explanations of neglect solely in terms of a pathological misperception of body orientation within an otherwise normal neural representation of space."
Interesting related problem: Pusher Syndrome.
It may be that normal brains would experience no change in the function of rationality compared to arbitrarily damaged brains. The effects of vestibular stimulation are so varied that using it to affect some specific result sounds like the neuroscientific equivalent to hitting the TV on the side to clear static.
↑ comment by TurnTrout · 2019-08-27T14:54:43.855Z · LW(p) · GW(p)
It may be that normal brains would experience no change in the function of rationality compared to arbitrarily damaged brains. The effects of vestibular stimulation are so varied that using it to affect some specific result sounds like the neuroscientific equivalent to hitting the TV on the side to clear static.
That's too bad; I was thinking it might be a nice way to kick-start a good ole Crisis of Faith [LW · GW].
ETA: And so did a million other people downthread.
comment by Paul Crowley (ciphergoth) · 2009-03-11T22:44:41.124Z · LW(p) · GW(p)
Wouldn't squirting cold water in the left ear of creationists (or other healthy subjects who are having trouble letting go of a belief) be an effective test of Dr Ramachandran's hypothesis? And, potentially, a genuinely useful rationality technique?
I'm now imagining sneaking up on some stubbornly irrational people in my own life, water pistol in hand...
↑ comment by Scott Alexander (Yvain) · 2009-03-11T23:07:25.746Z · LW(p) · GW(p)
It's a fair question, and if you read the article linked to under "posits two different reasoning modules", you know as much as I.
My thought is that there must be some reason this doesn't work, or else Ramachandran would have thought of it - he's famous for coming up with clever ways to test things other people thought were untestable (Google Ramachandran and synaesthesia for an example). Perhaps in normal life, the right hemisphere is as active as it's going to get, and if it hasn't overruled the left hemisphere already, it's not going to.
From the description of the technique, I think it's more complicated than just sticking water in the ear; I think it needs to go up into the vestibular system in the inner ear, which means it should probably only be done by a trained medical professional.
I feel really silly admitting this, but when I read this study I tried just pouring a lot of cold water into my ear and then quickly reviewing my opinions on controversial issues. Nothing unusual happened, so either the procedure requires a more complicated inner ear irrigation technique, or ear irrigation doesn't do anything special to non-anosognosiac people, neither of which surprises me at all (guess it could also mean I'm just naturally right about everything :) )
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-11T23:24:33.900Z · LW(p) · GW(p)
I'm off to try it anyway. 'Scuse me.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-11T23:44:10.207Z · LW(p) · GW(p)
Report: I used a glass of water mixed with ice, and a medicine dropper for delivery, lying down in bed with my left ear upward. The cold water did seem to immediately flood the ear (it popped/clicked, don't know medical term).
I tried thinking about two topics, my estimate of my own intelligence and a complicated AI issue I'm currently pondering. Neither produced any great revelation or change of heart.
I'll try to remember to test again the next time I'm currently in the middle of feeling torn on some topic, or worried that I might be rationalizing.
If all of this including the journal article is a tremendous prank along the lines of "How do you get a hundred rationalists to squirt cold water into their left ear?" it worked like a charm. You shouldn't feel embarrassed for trying it, though. A plate on a door affords pushing, a hypothesis affords testing.
↑ comment by Hans · 2009-03-12T01:30:05.602Z · LW(p) · GW(p)
Actually, the trick worked, but the effects had worn off by the time you wrote this message, which is why you deny having your opinion on the AI issue completely reversed in a shocking aha-erlebnis, for a brief ten minutes at least. Remember to videotape yourself the next time.
↑ comment by gwern · 2012-02-27T01:08:03.100Z · LW(p) · GW(p)
Today and yesterday I tried it essentially as Eliezer described: put a glass of water with ice cubes in the freezer to cool, prepared my syringe (bought to feed a dying ferret), laid on my side, and set up my camera across from my face. I turned it on, inserted the syringe, and injected 10ml of ice-water.
The result both times? Substantial vertigo within 5-10s, lasting ~5m. (No feelings of vomiting, although I ride rollercoasters for fun and have gone skydiving, so this may not generalize.) During the first minute, I reviewed my beliefs on the usefulness of modafinil, whether I should accept an O'Reilly ebook offer, and then my general beliefs of atheism/materialism/determinism/utilitarianism/left-libertarianism. I did not find anything to object to that I was not already well aware of (eg. my cost-benefit analysis for modafinil may be off by 3 hours).
I reviewed the recordings 2 hours after the second try; the recordings matched my memories, with nothing worth noting.
↑ comment by beoShaffer · 2012-02-27T03:00:19.309Z · LW(p) · GW(p)
Have you had anyone else review the tapes to make sure that you aren't simply denying the differences? They wouldn't necessarily be able to convince you, but it would provide good data for the rest of us.
↑ comment by gwern · 2012-02-27T03:07:09.714Z · LW(p) · GW(p)
No; I value my privacy and didn't want to forward the videos to any third party. (I knew someone would say, 'but what if you self-censored even a day later your response to the video?!' and decided the credibility sacrifice was worth making.)
↑ comment by beoShaffer · 2012-02-27T04:30:14.040Z · LW(p) · GW(p)
Reasonable, now we need to find someone with no sense of privacy to do it.
↑ comment by MBlume · 2012-02-27T05:19:20.529Z · LW(p) · GW(p)
Alicorn: Mike, you're being summoned
Me: But I did that -- the water didn't do anything.
If this seems really important to anyone I can do it again with cameras -- I don't have much sense of privacy, but I do have one of moderate inconvenience.
↑ comment by ChristianKl · 2012-12-25T22:25:40.886Z · LW(p) · GW(p)
In case you do it, I would advocate that you spend the time after the water in a Skype conversation to let someone else help you to find your own blindspots.
↑ comment by beoShaffer · 2012-02-27T15:46:54.921Z · LW(p) · GW(p)
I wouldn't say really important, interesting, but not crucial.
↑ comment by ChristianKl · 2012-12-25T22:22:15.763Z · LW(p) · GW(p)
Have you asked yourself, while you were under the effect of the cold water, whether your belief in the importance of your privacy is warranted?
To me that seems like a question that's more likely to involve some form of Ugh-field than the question of whether Modafinil is effective.
↑ comment by gwern · 2012-12-25T22:26:16.631Z · LW(p) · GW(p)
I have a large public 'weird' commitment to modafinil; simply liking privacy is pretty normal. (And my own personal experience seems to have justified my preference.)
↑ comment by ChristianKl · 2012-12-26T00:11:21.637Z · LW(p) · GW(p)
I myself don't have much trouble rationally assessing public commitments to ideas while I'm in private. It doesn't raise much cognitive dissonance inside myself, and it doesn't take much emotional work to deal with those topics. Dealing with akrasia and social relations to other people seems to raise a lot more emotions.
simply liking privacy is pretty normal.
Liking privacy to the extent that there isn't a single person you would trust to analyse the video is not normal.
I'm not saying "You are wrong to value privacy". I'm just saying that it's a topic that's more likely to bring you towards emotional barriers.
5 years ago I used a nickname and no image on the internet. I had some irrational fears of sharing my identity.
↑ comment by gwern · 2012-12-26T00:50:35.596Z · LW(p) · GW(p)
Liking privacy to the extent that there isn't a single person you would trust to analyse the video is not normal.
There's no one on LW whom I both trust and whose time I'd want to waste analyzing a lame video. It's just not that important. If you have a little camera, you can replicate the whole experiment in a few minutes and confirm what one would have expected from the weird patient group, and which was also confirmed by previous LWers trying the procedure out. There's no need to spend a lot of time debating a report, because any problem you have with it can easily be fixed in a few minutes by doing a better demonstration yourself: you can do it tonight and email off the video to, I dunno, Yvain or something, if my laziness and wish for privacy seem that bizarrely strong.
↑ comment by ChristianKl · 2012-12-26T01:14:56.244Z · LW(p) · GW(p)
There are two different issues:
1) Not verifying the video. 2) Asking questions that probably don't carry strong enough emotional effects.
As far as doing the experiment myself goes, at the moment getting rid of core rationalisations about my worldview isn't my main goal. Right now stability is more important to me than destabilizing my belief system by kicking stuff out.
↑ comment by A1987dM (army1987) · 2012-02-27T11:59:27.805Z · LW(p) · GW(p)
But I suppose you (like most long-time LWers) already had very few rationalizations to start with. Maybe if a person with a median-or-larger amount of belief-in-belief and doublethink tried that, they'd be more likely to have a “Whom am I kidding? I know there's no dragon in my garage actually” moment.
↑ comment by crap · 2012-12-26T22:59:14.162Z · LW(p) · GW(p)
My understanding is that this only works for a specific type of focal brain damage, i.e. if you had gross denial that you have a paralysed limb. I've never heard that it e.g. relieves delusions in mental disorders, and I'd think everyday self-deception is less similar to focal brain damage than to a mental disorder.
↑ comment by Scott Alexander (Yvain) · 2009-03-13T00:04:45.983Z · LW(p) · GW(p)
The article made it clear that this would happen, but I never even considered it.
I conclude that possibly I was not as interested in trying the experiment as I thought, but rather wanted to be able to claim I was a good scientist who tests things that are easily testable. Good catch.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-13T01:26:47.047Z · LW(p) · GW(p)
Heh. Okay, next time I'll call my girlfriend to witness and have her post the results as well as me.
↑ comment by Pavitra · 2011-08-03T05:07:18.587Z · LW(p) · GW(p)
Did this retest ever happen?
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-26T23:09:29.711Z · LW(p) · GW(p)
Nope.
↑ comment by Paul Crowley (ciphergoth) · 2012-12-27T13:53:49.610Z · LW(p) · GW(p)
Sounds like a great late-night-at-minicamp activity to me :)
↑ comment by MichaelHoward · 2009-03-12T00:13:27.584Z · LW(p) · GW(p)
As it's "caloric vestibular stimulation", i.e. a temperature shock to the bits in the middle of the ear that sense movement and balance, I'd expect having your head upright at the time (not lying with your left ear up) to be important. Can anyone confirm?
Maybe it acts as a superstimulus to the "you're off-balance, re-align yourself URGENTLY" reaction?
↑ comment by bfoner · 2009-03-13T23:02:53.623Z · LW(p) · GW(p)
When this test is done to patients in a hospital, the patient is lying in bed on his back facing upward towards the ceiling. Ice cold water, 60 ml total, is introduced into one ear canal using a syringe. This is repeated in the other ear canal. The water runs out into a basin placed outside the ear to keep the bed dry. Severely brain damaged patients do not have any reaction to this test. This is a test used in examining patients undergoing brain death evaluation, so they are already on a ventilator.
↑ comment by free_rip · 2010-11-08T08:23:46.256Z · LW(p) · GW(p)
Ouch - you lost my motivation to follow this example at 'syringe'. I guess I'm more of a rationalist than a scientist - my desire to know whether this works (on me, in an unprofessional home-test anyway) is rated a lot lower than my desire to not have a syringe of ice-cold water injected into my ears.
↑ comment by Broggly · 2010-12-14T02:56:34.364Z · LW(p) · GW(p)
"Syringe", not "needle". It's just the plastic bit being used to squirt water into your ear, rather than a needle being used to pierce the eardrum.
Why, when I was a kid my mum, a doctor, used to give me and my brother (unused) syringes as water guns and it was great fun.
comment by HughRistik · 2009-03-12T17:15:11.161Z · LW(p) · GW(p)
Here's something I'm wondering about the water-squirting. If activating the right hemisphere leads to the ability to overturn current beliefs, why does the patient go back to their old beliefs after the effects wear off? Clearly the activation of the right hemisphere is doing something, but it seems like openness to new lines of reasoning, rather than an actual Kuhnian paradigm shift.
I wonder what would happen if you kept squirting the person's ear for a whole day? How long would you have to keep their right hemisphere activated for the paradigm shift in beliefs to be permanent (if that's even possible)?
comment by CronoDAS · 2009-03-12T02:53:09.593Z · LW(p) · GW(p)
This reminded me of something.
In the book Happiness: Lessons from a New Science by Richard Layard, the author goes into detail about how mood is strongly correlated with differential activation in the two hemispheres of the brain. The left forebrain is more strongly activated than the right forebrain when a person is happy, and the right forebrain is more strongly activated when a person is sad. (Ramachandran mentions that stroke victims with left brain damage frequently become depressed, while ones with right brain damage don't.)
If the left brain interprets data through the perspective of current theories and the right brain forces theory revision, and left brain activation is associated with happiness and right brain activation is associated with unhappiness, what does that say about happiness and rationality?
↑ comment by AnnaSalamon · 2009-03-12T04:16:09.373Z · LW(p) · GW(p)
CronoDAS may already have said this, but just to elaborate a bit: one might wonder if sadness increases useful theory revision, and thereby increases aspects of rationality. And one might conversely wonder if the modes of thinking that prompt useful theory-revision, rather than speeches for coherent social posturing, tend to directly increase sadness. (Not because they cause us to notice sad things, but because hanging out in those modes of thought is itself a sadness-associated activity, like frowning.)
By way of analogy: happiness causes smiling, actively working on projects, and perhaps socializing, and forcing yourself to smile, to start projects, or to socialize probably increases happiness. (I've seen studies backing up the active work effect also, but I can't find them.)
↑ comment by pjeby · 2009-03-12T04:30:15.957Z · LW(p) · GW(p)
I'm not so sure myself. It seems to me like "theory revision" is fun. However, I suppose it depends on the precise sort of theory revision we're talking about. When I'm revising my theories because I'm wrong/losing, it's a bit more negatively charged of a state than just idly speculating. However, that mood doesn't necessarily last long, and is quickly replaced by the pleasure of an "aha".
A long time ago, my wife and I learned to refer to these situations as "growth opportunities" -- said with an ironic look and a bit of a groan -- but viewing them as such definitely improved our moods in dealing with them.
Thus, I find it difficult to believe in a hardwired causal connection from revision->sadness, even though it's easy for me to believe in a connection going the other direction.
Personally, I think the "Lisa Simpson Happiness Theory" (negative correlation between happiness and intelligence) arises from the mistaken tendency of intelligent people to assume that their "shoulds" exist in the "territory" (and not just in their own map), because they can come up with better arguments for their shoulds than for those of others. Intelligence is at least moderately correlated with this phenomenon, and this phenomenon is then highly correlated with people not wanting to be around you, which in turn is at least moderately correlated with phenomena such as "not having a life" and being unhappy. ;-)
↑ comment by AnnaSalamon · 2009-03-12T05:05:29.351Z · LW(p) · GW(p)
I agree brainstorming-type idea change is fun. Exploration, and considering new avenues and potential projects, is plausibly happiness-inducing in general; such exploration and initiative (or successfully engaging others with the projects, and the hopes around the projects?) may be one of the core functions of happiness.
I also agree that deep personal change can be deeply satisfying and can have good aesthetics, bring fresh air and happiness, etc. And I agree that the negatively charged state of wincing at a mistake need not pervade through most belief-revision.
That said, I'd still assign significant (perhaps 40%) probability to there being a kind of thinking that is useful for parts of rationality and that directly causes sadness (not by an impossibly strong route -- you can frown and still be happy -- but that causes a force in that direction). Perhaps a kind of thinking that's involved in cutting through one's own social bluster to take an honest look at one's own abilities, or at the symmetry between one's own odds of being right, or of succeeding, and those of others in like circumstances. Or perhaps a kind of thinking that's involved in critically analyzing the strengths and weaknesses of particular theories or beliefs, rather than in exploring.
The evidence that's moving my belief is roughly: (a) correlation between unhappiness and willingness to actually update, among my non-OB acquaintance; (b) a prior (from other studies) that most effects of particular emotions can also be causes of those same emotions; (c) a vague notion that happiness might be for social interaction and enrolling others in one's ostensibly sure-to-succeed projects, while unhappiness might be for re-assessing. (What else might they be for? Rewards/punishments to motivate behavior doesn't work as an evolutionary theory for happiness and sadness; moods have pervasive, and so potentially costly, effects on behavior for long periods of time, in a way that brief, intense pain/pleasure doesn't. Those pervasive effects have to be part of what evolution is after.)
↑ comment by pjeby · 2009-03-12T05:43:38.750Z · LW(p) · GW(p)
Interesting. Well, my experience, based on personal and student observation, is that contemplating "facing the truth" about a situation is painful, but actually facing it is a relief. It's almost as if evolution "wants" us to avoid facing the truth until the last possible moment... but once we do, there's no point in having bad feelings about it any more. (After all, you need to get busy being happy about your new theories, so you can convince everyone it's going to be okay!)
So unhappiness may result from merely considering the possibility that things aren't fitting your theories... while remaining undecided about whether to drop the old theories and change.
In other words, while the apologist and the revolutionary are in conflict, you suffer. But as soon as the apologist gives up and lets the revolutionary take over, the actual suffering goes away.
This seems to me like a testable hypothesis: I propose that, given a person who is unhappy about some condition in their life, an immediate change of affect could be brought about by getting the person to explicitly admit to themselves whatever they are afraid is happening or going to happen, especially any culpability they believe they personally hold in relation to it. The process of admitting these truths should create an immediate sensation of relief in most people, most of the time.
I feel pretty confident about this, actually, because it's the first step of a technique I use, called "truth loops". The larger technique is more than just fixing the unhappiness (it goes on to "admitting the truth" about other things besides the current negative situation), so I wasn't really thinking about it in this limited way before.
Meanwhile, although I do accept that, in general, affect-effects can also be affect-causes, I don't think there's as universal or simple a correlation between them as some people imply. For example, smiling does bias you towards happiness... but if you're doing it because you're being pestered to, it won't stop you from also being pissed off! And if you're doing it because you know you're sad and just want to be happy, you may also feel stupid or fake. Our emotional states aren't really that simple; we easily can (and frequently do!) have "mixed feelings".
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-03-12T06:18:41.737Z · LW(p) · GW(p)
I propose that, given a person who is unhappy about some condition in their life, an immediate change of affect could be brought about by getting the person to explicitly admit to themselves whatever they are afraid is happening or going to happen, especially any culpability they believe they personally hold in relation to it.
I believe this is both widely accepted and true.
See also Robyn Dawes and Robin Hanson on therapy, and Eliezer on Dawes.
Replies from: pjeby↑ comment by pjeby · 2009-03-12T15:47:59.617Z · LW(p) · GW(p)
Well, I'm narrowing the hypothesis a bit: I'm stating that instead of talking to a math professor for some period of time, I'm guessing that you could cut the process a lot shorter by just getting straight to the damaging admissions. ;-)
Of course, there is also good evidence that simply writing about such things is beneficial, such as the study showing that 2 minutes of writing/day (about a personal trauma) improves your health.
I'm just seeing if we can narrow down to a more precisely-defined variable with greater correlation to positive results. That is, that the specific thing that needs to be included in the writing or talking is the admission of a problem and one's worst-case fears about it.
↑ comment by Nick_Tarleton · 2009-03-12T05:37:45.954Z · LW(p) · GW(p)
(a) correlation between unhappiness and willingness to actually update, among my non-OB acquaintance
Both of these could proceed from low status/self-esteem. In that case, I would expect the correlation to be stronger with updating in response to others' opinions than to new info or self-generated ideas. I can't tell about that, although I think I notice the same correlation, in myself as well as others. On the other hand, genuinely depressed people are (at least stereotypically, but in my limited experience actually) unwilling to update regarding the (nominal) reasons for their sadness.
What else could they be for?
Agreed that direct reinforcement is unlikely, but there are other possible complex reasons; e.g. evolutionary approaches to depression.
Replies from: pjeby↑ comment by pjeby · 2009-03-12T06:14:21.486Z · LW(p) · GW(p)
What's also interesting about that idea, is that it might also be that chronically unhappy rationalists are contemplating the idea that rationality leads to unhappiness, while failing to accept it as a fact.
I mean, most of the people who go around saying that intelligence isn't correlated with happiness are NOT saying this to mean, "therefore I will stop being so damn intelligent." (Certainly I wasn't, when I thought that.)
What they're really doing -- or at least what I was doing -- is using their unhappiness to prove their intelligence. That is, "look, I have a useful quality that should be acknowledged, and by the way, I'm making a big sacrifice for all of you by giving up my own happiness in search of the Truth -- you can thank me later". The supposed lamentation is really just a disguised bid for status and approval.
However, if they were to emotionally accept that their theories are not working (as I eventually did, after enough pain) then they'd start being unhappy a bit less often.
Another interesting hypothesis to test, even if it's not as much fun as squirting cold water in somebody's ear. ;-)
↑ comment by [deleted] · 2009-06-04T16:21:29.687Z · LW(p) · GW(p)
Personally, I think the "Lisa Simpson Happiness Theory" (negative correlation between happiness and intelligence) arises from the mistaken tendency of intelligent people to assume that their "shoulds" exist in the "territory" (and not just in their own map) . . .
Is that the same as the "Charlie Gordon Happiness Theory", where intelligence leads to arrogance as well as other people not knowing what you're talking about, which both lead to alienation, leading to being unhappy?
↑ comment by pjeby · 2009-03-12T03:18:17.663Z · LW(p) · GW(p)
Actually, it just sounds like when we're unhappy, we're more likely to be willing to revise our theories. It doesn't say anything about the rationality of the theories we used before, or the ones we're about to have.
Well, I suppose you could say it means the previous theories didn't produce a good result, but that's not necessarily correlated with the rationality of the theories. If a theory doesn't work the first time you try it, that doesn't necessarily make it wrong.
In any event, the person using "true" rationality will have fewer occasions of unhappiness over the long haul, since they will not have as many "opportunities" to revise their theories, due to low correlation with relevant realities.
(Of course, I happen to think that if you'll really be happier over the course of your life believing something false, then great, go for it. I just also believe that the probability of that actually being the case is very low... especially when compared to the greater pain of discovering the falsehood later.)
↑ comment by Bruno Mailly (bruno-mailly) · 2018-07-23T12:07:46.821Z · LW(p) · GW(p)
Wait... indoctrination/fanaticization techniques rely on making the person miserable, right?
...this is getting really uncomfortable.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T03:54:06.257Z · LW(p) · GW(p)
I've suspected that for a while, actually.
comment by eris · 2009-04-26T12:16:08.177Z · LW(p) · GW(p)
I had the cold water procedure done at a GP to flush out an earwax obstruction. It was absolutely horrible, and I don't recommend it for self-testing.
The flushing took a minute or two. Then there was a minute of starting to feel more and more strange, while everyone asked, "Are you all right?" Then, for the next five or ten minutes....
Notice the reference to REM. If you've ever been so drunk as to see the room spin... you have a slight idea of what my eyes were doing. Closing them didn't help. The staff made me lie down, which made no difference -- I was still clinging desperately to the wall, disoriented and frightened out of my wits.
Eventually the vertigo went away, but I still felt wonky for the next few minutes. Best analogy: being woken from deep sleep and asked to do calculus. Only awake.
Life-changing realizations: none. Perhaps you have to be on-topic at the time. I'm not going back to find out.
Hope this at least slows people down a bit.
[Also, blindsight is an interesting tangent.]
Replies from: lurker4, arfle↑ comment by lurker4 · 2010-05-03T01:36:16.087Z · LW(p) · GW(p)
Writing to confirm. My own experience was very similar but the extreme disorientation lasted only about two minutes with minor disorientation for another five or so. Not recommended.
Replies from: Technoguyrob↑ comment by robertzk (Technoguyrob) · 2011-12-21T20:51:30.942Z · LW(p) · GW(p)
Could that be because of impairment of vestibular (semicircular canal) functioning, which is the physical apparatus for our sense of balance?
Replies from: None↑ comment by arfle · 2010-07-27T01:41:28.244Z · LW(p) · GW(p)
I've also had this done, but my GP used warm water. No ill effects whatsoever. Obviously my hearing improved.
Replies from: simplicio, ChristianKl↑ comment by ChristianKl · 2012-12-25T22:14:15.211Z · LW(p) · GW(p)
Is there anything specific about your case, or is the same procedure likely to help a lot of people to improve their hearing?
Replies from: polarix↑ comment by polarix · 2012-12-28T20:55:23.945Z · LW(p) · GW(p)
This should only help people who currently have earwax obstructions.
Replies from: ChristianKl↑ comment by ChristianKl · 2012-12-28T21:24:24.325Z · LW(p) · GW(p)
What percentage of the population has earwax obstructions? How do I know whether I have one?
Replies from: HalMorris↑ comment by HalMorris · 2012-12-29T01:00:41.103Z · LW(p) · GW(p)
If it had happened for as long as you can remember, then I imagine your hearing is pretty poor. If it just showed up at some point in time, I think you'd notice it.
It's happened to me several times, and doctors flushed it out a couple of times, though I've since learned to do it myself with a rubber bulb-type syringe (and warm, not cold, water). This is not a recommendation, as I'm not a doctor and have no idea how likely someone would be to do some damage, but if there is a risk, I either have the right touch or am just lucky.
When it happens, the ear with the problem hears pretty poorly and I feel something like I'm under water. Usually it comes and goes a few times, the experience being like your ears popping on a plane or when driving over a mountain. Then at some point it doesn't go away and I have to deal with it.
comment by Annoyance · 2009-03-12T19:00:00.006Z · LW(p) · GW(p)
Theodore Sturgeon once wrote a short story (entitled "The [Widget], the [Wadget], and Boff") about aliens conducting a research study on humans, trying to understand why they don't seem to possess a specific neural circuit that every other sentient lifeform known possesses.
They discover that humans do have this circuit, it merely remains inactive most of the time, even when it's needed. Their experiments on what conditions DO activate the circuit end up improving the lives of a group of people in a boarding house - or more accurately, getting the people to improve their own lives once the active circuit makes them realize what's wrong.
The metaphor he uses to explain the functioning of this circuit is suddenly losing your balance and instinctively reaching out to steady yourself... which is what his hypothetical circuit does, only when your life (for lack of a better term) is out-of-balance.
Sturgeon might have intuitively grasped something important, there.
Replies from: Annoyance↑ comment by Annoyance · 2009-03-12T20:05:12.316Z · LW(p) · GW(p)
I will further note that the sort of neurological findings discussed in the OP are consistent with the model that the left hemisphere has a narrow and specific focus, while the right is integrative and global.
Expressing thoughts in ways likely to encourage one hemisphere's techniques to dominate might be useful, as when one previous commenter mentioned writing with his left rather than his right hand. Trying to avoid language might also be beneficial.
Since women's brains are less hemispherically specialized, I don't know that these sorts of experiments are likely to be effective with them. How well does the cold water thing work in women vs. men, I wonder?
comment by abigailgem · 2009-03-12T10:01:17.401Z · LW(p) · GW(p)
A friend of mine recommends writing with the non-dominant hand to access alternative brain functions. I have done this, and found myself disagreeing with myself.
Replies from: ciphergoth, Kutta↑ comment by Paul Crowley (ciphergoth) · 2009-03-12T18:00:09.794Z · LW(p) · GW(p)
What subject did you use as a test? I used my non-dominant hand to type this and the only difference is that it took much longer!
↑ comment by Kutta · 2009-07-16T10:06:51.181Z · LW(p) · GW(p)
I've learned to type very fast with my non-dominant hand (through online gaming) and never experienced such an effect.
Replies from: abigailgem↑ comment by abigailgem · 2009-07-28T09:10:00.246Z · LW(p) · GW(p)
When I tried this technique, I did it very slowly. It was like asking whether a word to write felt right. Then I did a drawing which seemed to contradict what I had been thinking consciously shortly before.
I am not aware of research on the technique.
Replies from: byrnema↑ comment by byrnema · 2009-07-28T09:55:33.981Z · LW(p) · GW(p)
I'll share this anecdote, on the chance that it is relevant.
At a rate of about once every two years, I am jolted awake in a peculiar mental state in which I feel very convinced that I have discovered something profound, and all experience till then has been an illusion. The next morning I would feel normal and unable to recall what I was thinking. So I resolved to write down my thoughts the next time it happened in order to analyze the experience.
It happened again about 3 months ago. I rushed to my desk and began writing. To my astonishment, what my hand was writing (in this case, my dominant hand) was completely independent of what I was thinking. It looked like gibberish to me.
The next morning I inspected the sheet and found I had scribbled vague tautologies like, {"If A then A" , "Also B. Then A + B"}. (That morning I also remembered what the "profound" realization was: it was that causality was perfectly bi-directional.) These experiences tend to happen when I am deeply involved in a math problem that is foreign.
Later edit I wrote this comment in response to the parent by abigailgem, having not yet read Yvain's post. I just now read the post and find that my anecdote fits Dr. Ramachandran's model in a couple ways:
The hypothesis that my right brain is "turning on" to revise models is consistent with the fact that these experiences occur when I am working on a new math problem. Perhaps at night my right brain is sifting through hypotheses, and then my brain (which isn't very discriminating while asleep) wakes me up because it thinks it's discovered a much better model for my whole life.
It is consistent that in the morning I have no recollection of what I was thinking.
Obviously, my left brain is working here, trying to fit the data into the theory. I suppose I should symmetrically consider what does not fit.
But I only found another thing that did fit:
- When I tried to write down what I was thinking, I was unable to do so. This is consistent with my right brain being unable to communicate. When I instructed my hand to write, my left brain took over the task, but, without any context, just babbled some harmless tautologies.
So, for my own use, I add to the theory that I have some evidence that
I personally am unable to identify counter-evidence to things. I can only generate reasons for how something would fit, can only confabulate, so I would do better comparing two different models than evaluating one. I've suspected this for a while anyway. The only exception is if I can find a logical inconsistency, which is why I have only ever trusted my reasoning in a mathematical context.
The left brain is just a logical computer, based on my right hand scribbles (and the banana observation, below), and the right brain is what generates new but indiscriminately crazy ideas.
At this point, I can accurately be accused of babbling, but this is the single moment where I have learned the most on Less Wrong.
The idea of my everyday reasoning and interactions just affording logical reasoning and being unable to decrease confidence in assumptions unless there is a logical inconsistency is extremely powerful. It explains why people rarely update their ideas, even in the face of contradicting evidence, and why upon coming to Less Wrong I felt convinced that I need only ascertain the consistency of a model. I felt (and still feel) that if belief in God is consistent, then there is no reason to update it. I suppose my left brain could suggest at any moment there is no God, and provide an alternate explanation for what God is currently explaining, but presumably it would need a reason to do so? Since theism is off-topic in this post, I've transplanted this question to the open forum here.
Replies from: cousin_it, Armok_GoB↑ comment by cousin_it · 2009-07-28T10:01:46.086Z · LW(p) · GW(p)
I'm reminded of the story about this junkie who had the Most Profound Idea Ever while stoned and hastily scribbled it down. This is what he read afterwards: "The banana is big, but the banana skin is even bigger."
ETA:
While working on the material I was reminded of a story George Orwell once told me (I do not recall whether he published it): a friend of his, while living in the Far East, smoked several pipes of opium every night, and every night a single phrase rang in his ear, which contained the whole secret of the universe; but in his euphoria he could not be bothered to write it down and by the morning it was gone. One night he managed to jot down the magic phrase after all, and in the morning he read: "The banana is big, but its skin is even bigger."
-- Arthur Koestler, "Return Trip to Nirvana"
Replies from: byrnema, MarkusRamikin↑ comment by byrnema · 2009-07-28T10:16:03.067Z · LW(p) · GW(p)
That's funny. And it rings true, suggesting the story hasn't been significantly altered in the telling. There's something about it which tingles my "that's profound" sensor. It's a straightforward physical example of a simple logical principle that happens to be about bananas.
↑ comment by MarkusRamikin · 2011-11-12T11:04:18.521Z · LW(p) · GW(p)
meh, dumb story anyway
comment by sprocket · 2009-04-26T19:27:57.927Z · LW(p) · GW(p)
First of all: Hi all.
I've been thinking about Ramachandran's theory a lot since reading first about it. One of the things it does very neatly, is offer a possible explanation of why psychedelics work the way they do.
Let me explain what I mean. One of the things that has always baffled me about psychedelics such as LSD, LSA or psilocybin (the active ingredient of "magic mushrooms") is that their actions seem far too specific to be caused by a simple substance.
The effect I am referring to is that for some people and in some contexts, they cause what is often called a spiritual experience, i.e., an experience that is deeply meaningful to the user and possibly long-term world-view (and behaviour) altering.
Look, for example, at this study.
There's also this active study, which is the subject of a 12-minute report available on YouTube.
From my limited experience, and from what I observed in friends, I would say that psychedelics can be used to increase rationality, specifically by eliminating those sources of irrationality stemming from self-deception. They seem to allow the reexamination of deeply ingrained beliefs about the self and the world, that are beyond everyday reach.
I've always wondered about how the actions of such drugs could be so specific. Of course, this specific action is less surprising when you take for granted that simple "ear-flushing" can have similar effects, even if this applies only in connection with brain damage. The main idea of my post can be summed up as follows:
Maybe psychedelics tap into the same mechanisms that are involved in anosognosia.
Did anybody else follow this train of thought? Or maybe a related idea concerning meditation (which is associated with a similar realm of experience as psychedelics)?
Replies from: eris↑ comment by eris · 2009-04-27T00:07:21.212Z · LW(p) · GW(p)
I also thought of this, yes. But it was more along the lines of psychedelics being extremely hit or miss. The only drug I know of that is ritually mass-prescribed for spiritual insight is ayahuasca, which I understand is also rather unreliable.
If I were to suggest a drug for denial-busting, it would be MDMA, hands down; it removes fear barriers. (I have no idea why people decided to use it for dancing, of all things.)
Replies from: nickdevlin, sprocket, hajh↑ comment by nickdevlin · 2010-10-28T18:53:30.754Z · LW(p) · GW(p)
Because people are afraid of dancing!
↑ comment by sprocket · 2009-04-27T17:26:40.640Z · LW(p) · GW(p)
I think if you make sure that there is no adverse "set and setting", the hit-chances might be pretty good.
Two quotes from an article describing a study.
"Twenty-two out of the 36 volunteers described a so-called mystical experience, or one that included feelings of unity with all things, transcendence of time and space as well as deep and abiding joy."
and
"In follow-up interviews conducted two months later 67 percent of the volunteers rated the psilocybin experience as among the most meaningful of their lives, comparing it to the birth of a first child or the death of a parent, and 79 percent reported that it had moderately or greatly increased their overall sense of well-being or life satisfaction. Independent interviews of family members, friends and co-workers confirmed small but significant positive changes in the subject's behavior and more follow-ups are currently being conducted to determine if the effects persist a year later. "
This is from a study where drug-naive participants received psilocybin. I think it's the same study I linked to earlier.
comment by Tenoke · 2013-05-08T18:08:24.201Z · LW(p) · GW(p)
A recent study examined the effects of vestibular stimulation in 31 healthy right-handed adults. They asked the participants to estimate the likelihood that they would contract a series of diseases relative to their peers in 3 conditions (vestibular stimulation in the left ear, the right ear, or no vestibular stimulation). The participants were overly optimistic across all conditions (they thought they were less likely to contract a disease than they actually were), but when the procedure was performed on their left ear, they were less optimistic and more realistic! (Presumably because of activation of the pars opercularis of the inferior frontal gyrus.)
Replies from: gwern, bruno-mailly↑ comment by gwern · 2013-09-01T00:35:25.455Z · LW(p) · GW(p)
"Vestibular stimulation attenuates unrealistic optimism", McKay et al 2013:
Introduction: Unrealistic optimism refers to the pervasive tendency of healthy individuals to underestimate their likelihood of future misfortune, including illness. The phenomenon shares a qualitative resemblance with anosognosia, a neurological disorder characterized by a deficient appreciation of manifest current illness or impairment. Unrealistic optimism and anosognosia have been independently associated with a region of right inferior frontal gyrus, the pars opercularis. Moreover, anosognosia is temporarily abolished by vestibular stimulation, particularly by irrigation of the left (but not right) ear with cold water, a procedure known to activate the right inferior frontal region. We therefore hypothesized that left caloric stimulation would attenuate unrealistic optimism in healthy participants.
Methods: 31 healthy right-handed adults underwent cold-water caloric vestibular stimulation of both ears in succession. During each stimulation episode, and at baseline, participants estimated their own relative risk of contracting a series of illnesses in the future.
Results: Compared to baseline, average risk estimates were significantly higher during left-ear stimulation, whereas they remained unchanged during right-ear stimulation. Unrealistic optimism was thus reduced selectively during cold caloric stimulation of the left ear.
Conclusions: Our results point to a unitary mechanism underlying both anosognosia and unrealistic optimism, and suggest that unrealistic optimism is a form of subclinical anosognosia for prospective symptoms.
↑ comment by Bruno Mailly (bruno-mailly) · 2018-07-27T07:33:42.631Z · LW(p) · GW(p)
Being in water can get one killed really fast. Especially cold water, especially if one is immersed up to the head. So it makes sense that in that case evolution would select for turning off optimism, turning on realism, and adding a jolt on top.
The question is more "why do we have excessive optimism?" I think it paid off to make one grab opportunities before one dies anyway of bad luck, in a world where so many things can kill.
Anyways, all mammals have the diving reflex, which alters respiration as a whole: evidence that evolution can and did select for detecting immersion and responding strongly to it.
comment by gjm · 2020-03-05T12:44:47.675Z · LW(p) · GW(p)
A thing I am horrified not to have thought of when I first read this, or at any time in the ~11 years since (and, looking through the comments, it doesn't seem like anyone else did, which is also a bit horrifying):
If reality matches fairly closely with Ramachandran's metaphor and there's an actual brain subsystem localized somewhere in the left hemisphere that acts as "apologist" and another actual brain subsystem localized somewhere in the right hemisphere that acts as "revolutionary", we should expect left-hemisphere damage sometimes to have a sort of anti-anosognosic effect by suppressing the "apologist". Since this sort of apologism is a thing most of us do all the time about everything, anapologetic syndrome should have clearly discernible effects: the patient would lose the ability to confabulate nice explanations for not-so-nice things, in a way that ought to be noticeable, since if we didn't need that ability to function effectively in society, it seems like it'd be evolutionarily advantageous to lose it.
This might show up to some extent as depression, which Scott mentions is not uncommon in victims of left-hemisphere brain damage, but it seems much more specific.
You might think: no, this won't happen, because apologism is just how the brain works; so you have a revolutionary-module and the whole rest of your brain is the apologist. Or you might think: no, this won't happen, because the relevant module isn't really an "apologist" but an "explainer" that happens to work in a positively-biased way, so if that module went offline then you'd just completely lose the ability to make sense of the world. BUT both of these seem hard to square with the cold-water trick, which sure does seem as if it's briefly disabling or shaking up a localized apologism module. Maaaaaybe the apologist is the whole left hemisphere, and damaging bits of it doesn't do much, but cold-water-squirting somehow changes the state of the whole hemisphere?
(Could it instead be briefly waking up the damaged revolutionary module? No, because it's right-hemisphere damage that causes anosognosia. The damaged bit is not in the same part of the brain as you're squirting cold water near to.)
I notice that I am confused. Anyone got good suggestions?
Replies from: Hate9↑ comment by Ms. Haze (Hate9) · 2024-10-04T22:10:41.378Z · LW(p) · GW(p)
Ok so this is an old comment, but apparently nobody responded to it, so: tons of connections from the brain to the body are swapped. The left arm, the left leg, and many other things, but importantly the left ear, are all connected to the right hemisphere.
So, while I'm not certain as to what's happening here, it briefly waking up the damaged revolutionary module is much more likely than you seem to assume, here.
Hope that helps!
comment by gwern · 2012-02-22T18:14:29.842Z · LW(p) · GW(p)
Fulltext for the cold water experiment: http://dl.dropbox.com/u/5317066/2005-bottini.pdf
The method, for would-be experimenters:
CVS was performed by pouring 20 mL of iced water for 1 minute in the external ear canal. Left-brain-damaged patients received both left and right stimulations (in different occasions, with a time interval of at least 24 hours). In right-brain-damaged patients, only the left external ear was irrigated for ethical reasons, as right cold CVS may induce a worsening of neglect-related symptoms in these patients.3 CVS produced a brisk nystagmus in all patients, with the slow phase toward the side ipsilateral to the irrigation.
EDIT: After reading the paper, I resolved to try it myself; my results: http://lesswrong.com/lw/20/the_apologist_and_the_revolutionary/5xgn
comment by DanielFilan · 2023-09-18T20:40:15.891Z · LW(p) · GW(p)
It gets weirder. For some reason, squirting cold water into the left ear canal wakes up the revolutionary.
This link gets me "page not found", both here and on the oldest saved copy on the Internet Archive. That said, some papers are available here, here, here if you're at a university that pays for this sort of thing, and generally linked from this page. I'll be adding these links to the Wayback Machine; unfortunately, when I go to archive.is I get caught in some sort of weird loop of captchas and am unable to actually get to the site.
comment by Mart_Korz (Korz) · 2020-12-02T12:17:31.327Z · LW(p) · GW(p)
After having read a few GPT-3 generated texts, its type of pattern-matching babbling really reminds me of what is described here as the apologist. Maybe the apologist part of the mind just does not do sufficiently model-based thinking to catch mistakes that are obvious to an explicitly model-based way of thinking (the "revolutionary")?
It seems very plausible to me that there are both high-level model-based and model-free parts in the human mind. This would also match the seemingly obvious mistakes in the apologist's reasoning, and explain why it is effectively impossible to get someone's apologist to realise its mistakes by talking to them (I would assume that for healthy people, the model-based thinking does inform/override the model-free thinking to a degree).
comment by zslastman · 2012-07-20T13:37:31.241Z · LW(p) · GW(p)
Do anosognosiacs demonstrate the same pathological inability to change their mind on issues other than their disabilities?
This seems like such an obvious gaping question that absence of evidence seems a lot like evidence of absence. Surely any scientist who hypothesized a general effect on rationality would have thought to test other delusions, or mention that the patient displayed an inability to change her mind in other respects? This would be the difference between an intriguing oddity and a groundbreaking discovery. The patient's arm is the least generalizable experiment he could have done: people are known to develop delusions specifically about their bodies that don't translate into general cognitive inflexibility (at least, I've never heard of it doing so, which I realize is flawed evidence). I am extremely skeptical.
comment by haig · 2009-03-12T08:20:37.436Z · LW(p) · GW(p)
Some commenters said that in fact theory revision sessions such as brainstorming, etc. were actually pleasant to most rationalists and don't necessarily induce sadness. Indeed, I really enjoy arguing and learning new things, or else I wouldn't continue to do them. However, there is a difference between the loose juggling of ideas that we aren't very attached to and the kind of continual self-checking of core beliefs that strict rationalists try to do. In order to operate effectively in the world and achieve goals, we need a solid belief foundation from which to pick goals and then choose strategies to achieve them. If you are continually attacking your core beliefs, or at least remaining open to changing them, you never have a very solid model to base actions upon. The sine qua non of depression is an inability to function: you don't move towards goals; you can't even pick the goals in the first place. Contrast that with the stereotypical "go-getters", the ambitious overachievers, who report higher happiness levels. They remain consistent in their beliefs and world-views for the most part. Your brain wants to move towards goals, and continually reassessing the foundation you base goals on is bound to cause problems. It's cliché to mention the quintessential exemplar of this scenario, the depressed person struggling with existential angst. In her case, she is constantly reassessing her most basic models: "Who am I? What's my purpose?" etc.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-03-12T09:17:53.370Z · LW(p) · GW(p)
Haig, there's a difference between:
(1) Updating your beliefs and action-patterns as new evidence comes in; deciding to earn money by the method that is actually most likely to be effective (according to the best evidence you know) instead of by your last year's guess as to how to most effectively earn money, for example.
and
(2) Having your beliefs and action-patterns be in a constant or immobilizing state of flux.
If you're a good rationalist, and you carefully research topics that matter for your goals, after a while you should in most cases have a fairly stable probability distribution as to what the world is like and how best to achieve your goals. (If you find your model flops first one way, and then another, and then another... your models are overconfident and are based too much on recent data, so you should replace them with a more spread-out probability distribution over how things might be.) This way, you get a stable model you can use to e.g. actually earn more money, and not just go through motions that you at one point thought would earn you more money.
I've been listening to high-quality entrepreneurship seminars as I exercise (audiobooks are a great way to get free bonus time), and many of them recommend rationality techniques like making your hypotheses explicit and actively searching for disconfirming evidence. These are seminars made by and for stereotypical go-getters.
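The flip-flopping point above can be illustrated with a toy simulation (my own sketch, not anything from the comment): an estimate that trusts only the last few observations swings wildly, while a Bayesian posterior mean over all the data settles down.

```python
# Toy illustration: "overconfident, recency-weighted" estimation vs. a
# full Bayesian posterior, on a deterministic stream of observations.

# 50 observations in alternating runs of successes and failures (true rate 0.5).
flips = ([1] * 5 + [0] * 5) * 5

# Overconfident estimate: the success rate of just the last 5 observations,
# checked every 5 steps. It flops between extremes as each run goes by.
recent = [sum(flips[i - 5:i]) / 5 for i in range(5, 51, 5)]

# Stable estimate: posterior mean of a Beta(1, 1) prior updated on ALL the
# data so far (Laplace's rule of succession), checked at the same points.
posterior = []
heads = 0
for i, f in enumerate(flips, start=1):
    heads += f
    if i % 5 == 0:
        posterior.append((heads + 1) / (i + 2))

# The recent-only estimate swings over the full range; the posterior mean
# drifts toward 0.5 and stays near it.
print(max(recent) - min(recent))                   # 1.0
print(round(max(posterior) - min(posterior), 3))   # 0.357
```

The spread-out posterior is "boring" in exactly the way the comment recommends: each new run of data nudges it slightly instead of overturning it.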
Replies from: fburnaby↑ comment by fburnaby · 2011-10-23T15:50:30.890Z · LW(p) · GW(p)
Anna,
After having read your response to haig, I still have trouble understanding how to avoid his/her "existential angst/analysis paralysis" problem. As you recommend doing, I started trying to understand the universe as a teen, in order to more effectively pursue my interests. This part has always been natural to me. But as I gradually learned during this pursuit -- and in large part thanks to this very site -- I'm a part of the universe that's very relevant to me, so I should be trying to model myself accurately.
This makes a great deal of sense and seems to be a common realization among thoughtful humans: if my goal is to become happy (let's say in a fairly Aristotelian sense of the word), then deciding what sort of person to become in the first place becomes a very important question to answer correctly. Should I be habituating myself to be one of these 'go-getters'? Or to sitting on top of a mountain and entertaining visitors with mystical-sounding wise-cracks? We're right to wonder whether people should spend more than five minutes determining what to study at university, or which career to start. This seems even more important -- one level of action higher still.
LW offers two partial antidotes to this conundrum. I can rest assured that whatever I do, it'll be in the pursuit of status within my community; this modest bit of biological determinism is convincing, and actually helps. I'm also convinced that the status-raising activities I pursue should be of the genuine do-gooding variety (that is to say, they should follow a consequentialist sort of calculus). But this falls far short of constraining the problem: I feel -- in large part because of the literature I've read here at LW -- that I'm able to redesign many of my own desires in the first place. This community has done so much to make me aware that many of my interests, and much of my self-model, are socially contingent, and because of this much of my own personality is available for redesign. Such openness of possibilities at such a high level of action really does seem to motivate questions like haig's very strongly: "Who should I be? Which of them will be best?"
comment by Asymmetric · 2011-11-29T01:56:52.551Z · LW(p) · GW(p)
Oliver Sacks wrote a book called "The Man Who Mistook His Wife for a Hat", a collection of case studies that includes right-hemisphere anosognosia. It was first published in 1985, so it may be outdated, but it is relevant.
comment by CarlShulman · 2009-03-12T01:18:12.665Z · LW(p) · GW(p)
The linked book on confabulation suggests that those with confabulation conditions tended to display milder forms of the behavior in their pre-morbid states, i.e. that this is a trait with normal variation that could be studied. A study of people with false memories of alien abduction found that tests of hypnotic suggestibility, schizotypy, and depressive symptoms were all related to false-memory formation.
Strong tests of confabulation tendency, matched with brain-scan and biochemical/genomic information in populations of normals, extreme 'believers,' and comparative rationalists, could be very interesting indeed.
comment by Capla · 2014-12-17T00:17:46.900Z · LW(p) · GW(p)
Now I want a cold-water-in-the-ear squirt, to see what it does to me. What am I habitually acting the apologist for? Is this the secret key to unlocking the next level of rationality and personal honesty?
To make it work, as I understand it, the water has to be REALLY cold. Not just cold.
What do I need to do to do the cold water thing right?
comment by Entraya · 2014-02-18T18:15:25.675Z · LW(p) · GW(p)
I was just casually returning to continue reading this, took a bite of my muffin, instantly read the "I have this fantasy --" part, and blew crumbs all over my desk. What a mess.
This subject is just fascinating; that's the only way I can really express myself right now. The oddities of the human mind make for great puzzles and curious situations, and so does an understanding of how it works when it actually does things right.