The Rubber Hand Illusion and Preaching to the Unconverted
post by Gram_Stone · 2014-12-29T12:56:24.585Z · LW · GW · Legacy · 38 comments
It seems that the CFAR workshops so far have been dedicated to people who have preconceptions pretty close in ideaspace to the sorts of ideas proposed on LW and by the institutions related to it. This is not a criticism; it's easier to start out this way: as has been said, in a different context and perhaps not in so many words, we should focus on precision before tractability. We're not going to learn a thing about the effectiveness of rationality training from people who won't even listen to what we have to say. Nevertheless, there will come a day when these efforts must be expanded to people who don't already view us as high in social status, so we still have to solve the problem of people being more concerned with both our and their social status than with listening to what we have to say. I propose that the solution is to divorce the consideration of social status from the argument.
There is a lot of talk of cognitive biases on LW, and for good reason, but ultimately what we are trying to teach people is that they are prone to misinterpreting reality, and cognitive biases are only one component of this. One of the problems with trying to teach people about biases is that people feel personally responsible for being biased; many people have a conception of thinking as an 'active' process, so they feel as though it reflects upon their character. On the other hand, many people conceive of perception as a 'passive' process; no one feels personally responsible for what they perceive. So, I propose that we circumvent this fear of character assassination by demonstrating how people can misinterpret reality through perception. Enter: the rubber hand illusion.
In case you're unfamiliar with it: to demonstrate the rubber hand illusion, a subject sits at a table, a rubber hand is placed in front of them, oriented relative to their body as a natural hand would be, and a partition is placed between the rubber hand and their 'real' hand such that they are unable to see the 'real' hand. The experimenter then 'stimulates' both hands simultaneously at random intervals (usually by stroking each hand with a paintbrush). Then, the experimenter bends back the tip of a finger on each hand, the rubber hand's by about 90 degrees and the 'real' hand's by about 20 degrees (it's not really overextension, and it wouldn't cause pain outside of the experiment's conditions). Measurements of skin conductance response indicate that subjects anticipate pain when this is done, and a very small fraction of subjects even report actually experiencing pain. Also, just for kicks: when subjects are questioned about the degree to which they believe their 'real' finger was bent, they overestimate by an average of about 20 degrees.
As Dr. Vilayanur Ramachandran has demonstrated, the rubber hand illusion isn't the most general example of this sort of illusion: the human mind can even anticipate pain from injury to the surface of a table. In fact, there is evidence that the human mind's evaluation of what is and is not part of its body isn't even dependent upon distance: Dr. Ramachandran has also demonstrated this with rubber hands attached to unnaturally long rubber arms.
I think that there are also three beneficial side effects to this exercise. (1) We are trying to convince people that Bayesian inference is a useful way to form beliefs, and this illusion demonstrates that every human mind already unconsciously uses Bayesian inference all of the time (namely, to infer what is and isn't its body). To further demonstrate the part about Bayesian inference, I would suggest that subjects also subsequently be shown how the illusion does not occur when the rubber hand is perpendicular to the 'real' hand or when the 'stimulations' aren't simultaneous. (2) After the fact, the demonstration grants social status to the demonstrator in the eyes of the subject: "This person showed me something that I consider extremely significant and that I didn't know about, therefore, they must be important." (3) Inconsistencies in perception instill feelings of self-doubt and incredulity, which makes it easier to change one's mind.
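To put the Bayesian part in slightly more concrete terms, here is a minimal sketch of the binding decision as a choice between two hypotheses: either the seen strokes on the rubber hand and the felt strokes on the hidden hand share a common cause (the rubber hand is part of my body), or they are independent. This is only my own illustration, not the model from Ramachandran's paper, and every probability in it is a made-up assumption.

```python
# Toy two-hypothesis Bayes update for the rubber hand illusion.
# Hypothesis M: the seen and felt strokes share a common cause ("that hand is mine").
# Hypothesis I: they are independent events.
# All probabilities below are illustrative assumptions, not measured values.

def posterior_common_cause(n_synchronous, n_asynchronous,
                           prior=0.01,           # prior that the rubber hand is part of my body
                           p_sync_given_m=0.95,  # P(a stroke looks and feels simultaneous | M)
                           p_sync_given_i=0.10): # P(it merely happens to look simultaneous | I)
    """Posterior probability of a common cause after a series of brush strokes."""
    p_m, p_i = prior, 1.0 - prior
    # Likelihood of the observed stroke sequence under each hypothesis
    like_m = (p_sync_given_m ** n_synchronous) * ((1.0 - p_sync_given_m) ** n_asynchronous)
    like_i = (p_sync_given_i ** n_synchronous) * ((1.0 - p_sync_given_i) ** n_asynchronous)
    # Bayes' rule
    return (p_m * like_m) / (p_m * like_m + p_i * like_i)

# Synchronous stroking rapidly overwhelms even a sceptical prior:
print(posterior_common_cause(n_synchronous=10, n_asynchronous=0))  # close to 1
# Asynchronous stroking (the control condition) counts as evidence against a common cause:
print(posterior_common_cause(n_synchronous=1, n_asynchronous=9))   # close to 0
```

On this toy picture, the control conditions above are just ways of keeping the 'common cause' hypothesis from winning: asynchronous stroking drives the likelihood ratio the other way, and a perpendicular rubber hand presumably corresponds to a much lower prior.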
Addendum: This post has been substantially edited, both for brevity and on the basis of mistakes mentioned in the comments, such that some of the comments now appear nonsensical. Here is a draft that I found on my desktop which as far as I can tell is identical to the original post: http://pastebin.com/BL81VQVp
38 comments
comment by chaosmage · 2014-12-29T18:47:30.762Z · LW(p) · GW(p)
From what I've read, it seems that a lot of people who arrive at LW and are convinced by the arguments here for rationality subsequently relate that the arguments herein 'shocked' them into changing their beliefs.
That's the kind of person that goes on to join LW and tell you. There are also people who read a sequence post or two because they followed a link from somewhere, weren't shocked at all, maybe learned something, and left. In fact I'd expect they're the vast majority.
It's clear that convincing the unconvinced (or 'not even partially convinced') is an open and hard problem
I claim the exact opposite. Every time I've invested an hour or two discussing some rationality topic with a newbie, they always came out (claiming to be) convinced. Of course I put in some work: I establish good rapport and compatible values, I adapt it to what they care about, I show them how they can use rationality to win - sure that helps. But rationality simply makes sense. I find it much easier to change people's minds about rationality than about, say, the NSA.
I like your idea of presenting the rubber hand illusion, and will try it. They're surprisingly cheap.
↑ comment by Kawoomba · 2014-12-30T09:59:42.550Z · LW(p) · GW(p)
Every time I've invested an hour or two discussing some rationality topic with a newbie, they always came out (claiming to be) convinced. Of course I put in some work: I establish good rapport and compatible values, I adapt it to what they care about, I show them how they can use rationality to win - sure that helps. But rationality simply makes sense. I find it much easier to change people's minds about rationality than about, say, the NSA.
The relevant data isn't "I can convince people if I use all the social tricks in the book", it is instead "how much easier is it to convince them of their cognitive biases than of XYZ". Which makes it come down to this:
I find it much easier to change people's minds about rationality than about, say, the NSA.
I would bet on the exact opposite. Those guys make sure your family are safe, done. What's the other guy doing, still talking about goats and doors, isn't he just saying he's the smarter thinker (stupid elitist)? Well, I'm talking about knowing the people you love are better protected, having one less thing to worry about. Here are some pictures of people killed by the bad guys. Aren't you glad we deal with that stuff, for you? Thought so.
(...) the people can always be brought to the bidding of the leaders. That is easy. All you have to do is tell them they are being attacked and denounce the peacemakers for lack of patriotism and exposing the country to danger. It works the same in every country.
↑ comment by chaosmage · 2014-12-30T11:21:58.999Z · LW(p) · GW(p)
The relevant data isn't "I can convince people if I use all the social tricks in the book"
I do start from an expectation that raising the sanity waterline, especially inside my circle of friends and colleagues, has significant moral value, as well as long-term hedonic (quasi-monetary) value for me. So that makes it worth an investment of a bit of focused discussion. Do you disagree? Because if not, by the same argument, it is worth doing right so the message is received and sticks.
What's the other guy doing, still talking about goats and doors, isn't he just saying he's the smarter thinker (stupid elitist)?
Two good points. I very much avoid the standard examples, because they're too hard to relate to for the discussion to be interesting and worth remembering. I prefer to pick up some apparent confusion in the behavior of the person I'm talking to - something central with an extensive object level works best - and keep asking questions and throwing in stories of how I used to make similar mistakes and how I try to fix them.
↑ comment by Kawoomba · 2014-12-30T11:35:36.567Z · LW(p) · GW(p)
So that makes it worth an investment of a bit of focused discussion. Do you disagree?
It would be a rookie mistake to disagree with "this has value to me, so it is worth an investment", which is nearly tautological (barring fringe cases).
What I disagree with is "I find it much easier to change people's minds about rationality than about, say, the NSA." If you're at the right hierarchical spot on the totem pole and do all the right social signals, you can convince people of nearly anything, excepting things that could reflect negatively on themselves (such as rationality)*. The latter is still possible, but much harder than many of the bullshit claims which happen to reflect positively on someone's ego. That you happen to be right about rationality is just, you know, a happy coincidence from their point of view.
If you're up for it, I dare you to go out and convince someone of average reasoning skills of some bullshit proposition, using the same effort and enthusiasm you display for rationality. Then after that beware of rethinking your life and becoming a used car salesman.
* There is a special kind of shtick where the opposite applies, namely the whole religious "original sin, you're not worthy, now KNEEL, MAGGOT!"-technique. Though that's been losing its relevance, now that everyone's a special snowflake. New age, new selling tactics.
↑ comment by ChristianKl · 2014-12-30T14:07:58.493Z · LW(p) · GW(p)
Political beliefs like beliefs about the NSA can reflect negatively on a person depending on their social circle.
↑ comment by Kawoomba · 2014-12-30T14:52:13.478Z · LW(p) · GW(p)
Take a non-political wrong belief then. Same applies to selling sugar pills, I'm sorry, homeopathy. At least some people are earning billions with it.
Also, guarded statements such as "political beliefs can reflect negatively (...) depending on their social circle" are as close to null statements as you can reasonably get. I could substitute "can help you become an astronaut" and the statement would be correct.
I'm not sure who in their right mind would argue against "can ... under certain circumstances ..."-type social statements. It's good to qualify our statements and to hedge our bets, wary of blanket generalizations, but at some point we need to stop and make some sort of stand, or we're doing the equivalent of throwing empty chat bubbles at each other.
↑ comment by ChristianKl · 2014-12-30T15:08:43.061Z · LW(p) · GW(p)
Take a non-political wrong belief then.
I don't think that chaosmage accidentally chose a political belief. Replacing it with a less controversial claim would be strawmanning the original post.
↑ comment by Kawoomba · 2014-12-30T15:30:53.711Z · LW(p) · GW(p)
You don't think "convincing someone that homeopathy works" is controversial enough? Are you objecting to both political and non-political beliefs, and wouldn't that make the initial claim, you know, unfalsifiable?
For reference, the initial mention was:
I find it much easier to change people's minds about rationality than about, say, the NSA.
↑ comment by ChristianKl · 2014-12-30T15:57:18.323Z · LW(p) · GW(p)
As far as homeopathy goes, how controversial the belief is differs from person to person. Convincing the average new atheist that homeopathy works is very hard; there's identity involved. Convincing people who don't care, on the other hand, is easier.
Are you objecting to both political and non-political beliefs, and wouldn't that make the initial claim, you know, unfalsifiable?
I do think that chaosmage has experience in trying to change someone's mind about the NSA. I do think that he found in his experience that it's easier to change someone's mind about rationality.
There's nothing unfalsifiable about making that observation.
↑ comment by chaosmage · 2014-12-30T15:20:42.757Z · LW(p) · GW(p)
When I try to convince people of rationality, I very much rely on people's natural distaste for inconsistency. This distaste seems quite primal and I don't think I've met anybody who doesn't have it. Of course people tolerate all sorts of inconsistencies, but it takes System 2 effort, while System 1 clearly prefers the simplicity that comes with consistency. Rationality is great at consistency.
Therefore, a lot of what I do is pointing out inconsistencies and offering rationality as a way of fixing them. So while their System 2 analyzes whether what I'm saying can be made to fit with what they already believe, their System 1 keeps pointing out that this search for consistency "feels right". And when we're finishing up at the object level, I can surprise them by predicting they're having this feeling, explain why they do, and maybe go into the System 1 / System 2 paradigm and a recommendation for the Kahneman book.
I don't see how I could adapt this method in order to convince people of random bullshit.
And I disagree with your claim that people can be convinced of nearly anything. It is easy to convince them of things that don't conflict with their world-view, but if they have the (thankfully now quite common) habit of checking Wikipedia, that will leave little room. You can wager social capital on your claim, but you'll lose that investment if they continue to disagree and you risk them faking agreement in order to salvage your relationship. And unlike the used car salesman, I'm not satisfied if they agree today and disagree tomorrow.
It would be a rookie mistake to disagree with "this has value to me, so it is worth an investment", which is nearly tautological
Obviously, which is why my question was about the valuation, not the consequence from it.
↑ comment by Kawoomba · 2014-12-30T16:22:51.758Z · LW(p) · GW(p)
This reminds me of the HPMOR chapter in which Harry tests Hermione's hypothesis checking skills. What you're telling me is just what you'd expect if people were easily convinced of pretty much anything (with some caveats, admittedly), given social capital and social investment (which you have mentioned in your initial explanation). You have a specialized mechanism-of-action to explain your apparent success, one which indeed may not be easily adaptable to other ventures.
The problem is that it doesn't explain the ubiquitous occurrence of people being convinced of pretty much any topic you could imagine (requiring more specialized theories). From organized religion, cults, homeopathy, nationalism, anti-nationalism, consumerism, anti-consumerism, big weddings, small weddings, no weddings, monogamy, polygamy, psychic energy, Keynesianism, Austrian economics, rationality, anti-rationality, the list goes on. It doesn't matter that some of these happen to be correct when their exact opposite is also on the list, with plenty of adherents.
You have the anecdote, but looking at the human condition, I see plenty of data to the opposite. Though if you interpret that differently, please share.
I'm not satisfied if they agree today and disagree tomorrow.
Do you think that when (some time after your rationality talk with them) they display a bias in a real-life situation, and you kindly make them notice that they did, they'll agree and have learned a lesson? It's all good as long as it stays in the abstract, coming from a friendly guy who is sharing a cool topic he's passionate about.
Which is good in a way, because just as the first rationality talk doesn't stick, neither does the first e.g. "convert to your girlfriend's religion", usually.
Also, what, in your opinion, are the relative weights you'd ascribe to your success, in terms of "social investment / social strategy" versus your System 2/System 1 approach?
I would be interested in you actually trying the real, falsifying experiment: convincing someone of something obviously false (to you). It's not hard, in the general case. Though, as you say, in recent years it has become slightly harder in some ways, easier in others: Far from creating one shared space, today's interconnectivity seems to have led to a bunch of echo-chamber bubbles, even if Wikipedia is a hopeful sign.
Then again, Wikipedia exists. As does the multi-billion (and growing) homeopathy market.
↑ comment by chaosmage · 2014-12-30T18:36:30.792Z · LW(p) · GW(p)
looking at the human condition, I see plenty of data to the opposite.
I see it too - but you're only talking about the present. Put it into historical context and it indicates the opposite of what you think it indicates. The history of bullshit is that, while there is still too much of it, it started at vastly worse premodern levels that were really weird, and has been losing arguments ever since.
One of my favorite examples is the reports of the first Protestant ministers who went through the 16th century villages, talked to peasants about what they actually believed and, because they weren't evaluating their own work, actually recorded that. Turns out even after hundreds of years of Catholicism, these peasants had all sorts of ideas, and those ideas varied wildly from village to village or person to person. They'd have three gods, or eight, or even believe in forms of reincarnation. The Protestants went to homogenize that, of course, and in places like Brazil they still do. But even in Europe and the US, the number of distinct belief systems continues to decline.
Far from creating one shared space, today's interconnectivity seems to have led to a bunch of echo-chamber bubbles
Medieval villages and similarly unconnected societies are echo-chamber bubbles too. So the number of bubbles has been going down, sharply, and competition between ideas has clearly become tougher. If an exception like Homeopathy is growing, that means it has been unusually successful in the harder environment (a big part, in this case, was greatly reduced claims of effectiveness). But that shouldn't distract from the fact that lots of pseudotherapies that were comparable to it fifty years ago, such as Anthroposophic medicine and Orgone therapy, have gone down. And of course the quacks and witch doctors that we used to have before those were even more heterogenous and numerous.
And that's exactly what you'd expect to see in a world where whether someone accepts an idea very much depends on what else that someone already believes. People aren't usually choosing rationally what to believe, but they're definitely choosing.
There seems to be a hard-coded exception for young kids, who will believe any bullshit their parents tell them, and that, rather than active conversion, is how some religions continue to grow. Surely it also helps bullshit that isn't religion.
I'm obviously not doing this experiment you're talking about, because it is wildly unethical and incurs severe social cost. And even if it turned out I can convince people of bullshit just as well as I convince them of rationality, that wouldn't be relevant to my original assertion that convincing the unconvinced is not at all an "open and hard problem".
↑ comment by ChristianKl · 2014-12-31T19:59:13.761Z · LW(p) · GW(p)
The problem is that it doesn't explain the ubiquitous occurrence of people being convinced of pretty much any topic you could imagine (requiring more specialized theories).
It's quite easy to say in the abstract that people can be persuaded. It's quite different to see what effort it takes to convince another person in real life.
I think we have all had conversations where we try to convince someone and fail.
comment by Kawoomba · 2014-12-29T13:31:39.079Z · LW(p) · GW(p)
Upvoted, you're talented wrt your writing style. Congratulations on clawing your way back to the lifeworld, after your foray into continental philosophy -- which is as close an intellectual equivalent to invading Russia in winter as I've ever heard of, with a corresponding rate of never returning.
However, try not to succumb to the classic introspective meta trap, no matter how sweet its Siren call; in the end we only look inward to look outward: not as a goal in itself but as an instrumental stepping stone only. What I'm saying is, less meta more [subject] matter. Especially when paragraphs of meta obscure your post's conclusion
I'm iffy on grouping your inner homunculus with "perception" rather than "cognition", but the principle is sound: Show people that the universe is a strange place (still a bit of a lie to children, if anyone it's us who are strange and 'the universe' which is normal) and demonstrate that their brain can deceive them in a way that doesn't instantly trigger the usual bullshit status squabbles.
If it only takes the loss of a Rubber hand to prevent a loss of face, that is a small price to pay to open someone's mind.
↑ comment by Gram_Stone · 2014-12-29T13:48:28.948Z · LW(p) · GW(p)
However, try not to succumb to the classic introspective meta trap, no matter how sweet its Siren call; in the end we only look inward to look outward: not as a goal in itself but as an instrumental stepping stone only. What I'm saying is, less meta more [subject] matter. Especially when paragraphs of meta obscure your post's conclusion
Thanks for this. I've had this implicit anxiety that I haven't 'reached the bottom of the meta-rabbit hole.' When I wrote the post, I definitely wasn't explicitly thinking of introspection as a stepping stone to something more important.
I'm iffy on grouping your inner homunculus with "perception" rather than "cognition"
I don't understand this. Can you elaborate? I suspect that you mean that there is no meaningful distinction between perception and cognition, since perception (I would say) is a form of cognition. I should also say that I originally used the word "reasoning," rather than cognition. Do you think that I should revert the change? Really I meant that we should demonstrate how people's brains are flawed, rather than how people are flawed. In any event, at least the principle is sound.
What I'm saying is, less meta more [subject] matter. Especially when paragraphs of meta obscure your post's conclusion
Would you be willing to point out the things that you think are obscuring my conclusion? I'm not sure to what you're referring.
↑ comment by Kawoomba · 2014-12-29T21:54:44.283Z · LW(p) · GW(p)
I suspect that you mean that there is no meaningful distinction between perception and cognition, since perception (I would say) is a form of cognition.
Yes ... and no (go away, Zizek!). The question isn't how to "correctly" partition the brain's different functions into disjoint, or overlapping, subsets / categories, nor how to label them. Each map you construct should be suited for the given purpose, and in this case talking about "perception" separately from "cognition" is -- far from 'paying rent' -- making a subtle mistake. That of a well meaning but slightly disingenuous parent buying into his own explanation.
Bear with me: The reason that the rubber hand illusion and related material may be a good preaching tool is because it neatly sidesteps the many tribal affiliations / social signals associated with correcting someone, it groups itself with "optical illusions" rather than "this is where you reason wrongly".
It's a situation a bit like (da-da-dumm!) sailing between Scylla and Charybdis: If you told people you really try to cure them of some of their many cognitive biases, the ones prone to being offended (cough everyone) would be so, and the distinct advantage of the smoke-and-mirrors (well, mirrors at least) 'perception experiment' would be lost. On the other hand, the PERCEPTION ONLY, NO COGNITION INVOLVED disclaimer could correctly be seen as disingenuous, since we don't actually care about the perception aspect, not for our purposes, anyways. However, that's an acceptable price to pay (I'd imagine the ensuing conversation along the lines of "Remember back then, you really wanted to use the rubber [hand] to improve my cognition, didn't you?" - "Well, admittedly so ... sorry, but it was a good hook [with fingers, notwithstanding], eh? Can we rejoin the love-pile, now?").
The reason I was iffy on us upholding that distinction was that we're not undergoing the experiment, and we should be clear on what we're doing (a function of cognition, rather than one revolving around perception), and which pedagogical trick we're exploiting for doing so. After all, we still need to be able to look at ourselves in the mirror. Or, you know, at some rubber facsimile of us.
A related concept is the aforementioned Wittgenstein's Ladder. It wasn't an important remark, but you did ask ...
Would you be willing to point out the things that you think are obscuring my conclusion?
Just referring to the paragraphs following your concluding remarks (albeit in "()" (brackets, that is)). While endnotes are unobtrusive, in this format a host of only tangentially related asides detracts from your central message; the coda should provide closure, not be an unrelated personal message to Eliezer.
↑ comment by Gram_Stone · 2014-12-30T05:07:21.010Z · LW(p) · GW(p)
I had to read this many times for it to really sink in, so I'm going to try performing an ideological Turing test just to make sure that I'm on the same page as you (I assume that term applies to what I'm about to do even though the examples in that post had to do with political and theological arguments):
Our purpose, both in performing the experiment and educating people about cognitive biases, is to demonstrate that people can misinterpret reality. To make a distinction between the purposes of the two forms of demonstration -- besides being a useless exercise in this context because it doesn't allow us to anticipate experiences that we would not be able to anticipate without the distinction -- is to mislead the subject into thinking that our purpose in demonstrating illusions is not the same as our purpose in demonstrating biases. Even though we are making a specious distinction, to not make it would be worse because the subject would focus on both our and their social status rather than the argument, and therefore never learn enough to be able to understand that the distinction is specious. Because we (read: all of you and not me) already know that the distinction is specious, there is no reason to make the distinction here.
Once you let me know if I've paraphrased your explanation correctly, I'll edit the OP accordingly.
On the bit about the concluding remarks:
I don't understand what you mean when you say "(albeit in "()" (brackets, that is))." Do you mean that these sorts of end notes are usually enclosed in brackets rather than parentheses and it's a bit confusing because I've used so many parentheses throughout the entire post?
Also, I wasn't sure where else I could say the things that I said at the end of this post, and I didn't think that things like that would really matter in discussion, only on main. I always assumed that I would eventually remove them. I do think that I needed to say those things, especially because I'm pretty lost here having not yet read the sequences and being inexperienced; I don't know what tags are appropriate, I don't know how what I've written relates to the content in the sequences and other posts, I don't explicitly understand the purpose of all of the religious references and therefore how appropriate the title is, etc. Maybe you're specifically referring to formatting? Is there a way that I could format the notes to make them less obtrusive?

As for the part specifically to Eliezer, when I wrote this, I considered what I'm proposing as a possible solution to the problem of, to use some of his words, "generalizing the initiation of the transition." So even though I can see how it seems unrelated, I think that if I had included the message to him in the rationalist origin story thread then he wouldn't understand the full context of why I came to that conclusion. On the other hand, I could have put what I said in this post in my comment on that thread, but it seems to me that this subject is deserving of its own post. From my perspective at the time, what I had to say in this post and what I had to say specifically to Eliezer were inextricable, so I put it here. Now I'm thinking maybe I could put the message at the end of my comment in that thread and just include a link saying "Read this first!" Does that clarify the message's purpose? Tell me what you think about all of that.
Also, when I edit the post, since this is in discussion, should I include notes on what I've changed?
↑ comment by Kawoomba · 2014-12-30T09:46:32.805Z · LW(p) · GW(p)
Yes; your paraphrasing about covers it. Nicely done, if I may say so. Let me reemphasize that it was a minor point overall, but still one I thought worth mentioning (in passing), if only in a half-sentence.
I meant to say parentheses and just confused them with brackets (not a native speaker, or writer, for that matter). The point only being that a post in a "meta content - subject level content - meta content" format which sandwiches your important content in between meta remarks loses some of its saliency, parentheses or no.
You are doing fine, all the aspects we're discussing are minor nitpicks. There is no need to worry about the correct tags, or even to overly fret about the amount of meta that's along for the ride. Insight trumps in-group signalling. My remarks were about on the same order of importance as advising a really long post to include a "tl;dr" summary at the end. I often read mostly the beginning of a post and the conclusion, to judge whether the rest is likely to be worth the time. In your case, that had the somewhat funny result of wondering what the hell your title was referring to, since all I saw was meta, hyperbolically speaking. So I read more of the middle parts to supposedly fill in the gap, imagine my surprise when I encountered a thoughtful and interesting analysis in there. So while it was my laziness more so than any fault on your part, that's why I brought it up.
tl;dr: Your post is fine, now go write new posts.
↑ comment by Gram_Stone · 2014-12-30T09:52:38.957Z · LW(p) · GW(p)
Out of curiosity, what is your native language?
comment by ChristianKl · 2014-12-29T13:55:04.201Z · LW(p) · GW(p)
I may not be up to date as a brand new user, but it seems that the CFAR workshops so far have been dedicated to people who have preconceptions pretty close in ideaspace to the sorts of ideas proposed on LW and by the institutions related to it. This is not a criticism; it's easier to start out this way: as has been said, in a different context and perhaps not in so many words, we should focus on precision before tractability.
CFAR exists against the background of the realisation that it's quite easy to want to be rational and to read a list of mental biases, but that this usually doesn't make people more rational. It's thus important to develop techniques that reliably make people more rational, and that includes us as well.
(3) We are trying to convince people that Bayesian inference is a useful way to form beliefs, and this illusion demonstrates that every human mind already unconsciously uses Bayesian inference all of the time (namely, to infer what is and isn't part of the human body)
Just because someone changes their beliefs doesn't mean they do Bayesian inference. Bayesian inference is a specific heuristic and I consider it unlikely that the body uses it for this purpose.
(2) It grants social status to the demonstrator in the eyes of the subject: "This person showed me something that I consider extremely significant and that I didn't know about, therefore, they must be important." I would say that this was a large component of the reason, if not the entire reason (as if I would explicitly know), that I kept reading LW.
You can't demonstrate that effect via text. The setup you describe needs single-purpose equipment and probably a one-on-one session with an experimenter.
The McGurk effect is much easier to demonstrate if you want to show someone how his perception is flawed.
the rubber hand illusion isn't the most general example of this sort of illusion: the mind can even interpret the surface of a table as a part of the human body
I'm not really sure that the paper demonstrates that. You could also say that the person has empathy with the table. Mimicry of body language leads in humans to a feeling of rapport.
I'm also uncomfortable with the semantics of "human body" in this case. I would guess that most of the participants wouldn't say that the table is part of their body.
I do have a qualia of extending myself past the borders of my body. It's a quite complex area of phenomenology. It's very hard to talk about with people who have no references for the corresponding qualia.
↑ comment by Gram_Stone · 2014-12-29T14:25:37.759Z · LW(p) · GW(p)
About the blockquotes in this comment, for some reason I can't separate your quotes from the paper's quotes if they're right after one another, so you'll have to pay attention. To be clear, my response will always follow your quote. I've looked at Markdown syntax documentation but I can't figure out how to fix this. I'd appreciate help from anyone.
CFAR exists against the background of the realisation that it's quite easy to want to be rational and to read a list of mental biases, but that this usually doesn't make people more rational. It's thus important to develop techniques that reliably make people more rational, and that includes us as well.
I know what CFAR is and what it's for, I just said that because I didn't know if they had tried rationality training with anyone else but entrepreneurs and people with a lot of experience in mathematics. If this has changed, I'd appreciate it if someone told me.
Just because someone changes their beliefs doesn't mean they do Bayesian inference. Bayesian inference is a specific heuristic and I consider it unlikely that the body uses it for this purpose.
For one, I didn't say that Bayesian inference was the conscious process by which the person changed their beliefs.
Now, I'll begin by saying that I don't know an explicit thing about Bayesian inference. Despite that, I wrote that because I've seen this researcher cited elsewhere on the site and I assumed that if he used the adjective 'Bayesian' in one his papers, you all would want to know about it. From the paper, these are the things that I'm talking about:
Whereas Botvinick & Cohen (1998) interpret their results in terms of resolving incongruities between visual versus proprioceptive location of the hand, our table experiment would lead us to argue that the illusion arises mainly from the ‘Bayesian logic’ of all perception; the brain’s remarkable ability to detect statistical correlations in sensory inputs in constructing useful perceptual representations of the world—including one’s body.
We suggest that the principle underlying this illusion is Bayesian perceptual learning—that two perceptions from different modalities are ‘bound’ when they co-occur with a high probability.
The McGurk effect is much easier to demonstrate if you want to show someone how his perception is flawed.
I had never heard of this, but I just read the introduction to the Wikipedia article to get an idea of it and apparently the McGurk effect is hit or miss. To my knowledge, everyone can experience the rubber hand illusion regardless of previous experience.
As for this:
I'm not really sure that the paper demonstrates that. You could also say that the person has empathy with the table.
I really don't believe that one could say that. I may be wrong, but it seems that the paper actually addresses this:
The brain’s remarkable capacity for extracting statistical correlations in sensory input is most apparent in the table condition. In the hand experiments, given the visual similarity between the fake and real hand, it is not unreasonable for the brain to tolerate some level of discrepancy between the felt position of the hand and its apparent visual location. (Indeed, Graziano (1999) has shown specific cells in the macaque to be responsive to the visual appearance of both a monkey’s real hand and a proximate fake one.) This argument, however, is difficult to apply to the case of the table; indeed, we would argue that the assimilation of the table into the body image is dictated exclusively by the Bayesian logic underlying all perception; in this case the brain’s tendency to take advantage of statistical correlations (even when they do not ‘make sense’ from the cognitive point of view and contradict a lifetime of experience with our own bodies).
Mimicry of body language leads in humans to a feeling of rapport.
I don't understand how this is relevant.
I'm also uncomfortable with the semantics of "human body" in this case. I would guess that most of the participants wouldn't say that the table is part of their body.
I agree that it's improbable that a person would explicitly consider the table a part of their body. I also think that it's probably true that most of the participants wouldn't say that they can anticipate or feel pain due to injury to something that is not part of their body.
↑ comment by ChristianKl · 2014-12-29T16:32:21.321Z · LW(p) · GW(p)
I've looked at Markdown syntax documentation but I can't figure out how to fix this.
Separate paragraph by empty lines.
[re mimicry] I don't understand how this is relevant.
They get the effect by having a stimulus applied at the same time to both hands. If the real hand moves the fake hand moves as well in the same way. That's how you create rapport. If two people are in strong rapport and you hurt one of them, the other also feels hurt.
I also think that it's probably true that most of the participants wouldn't say that they can anticipate or feel pain due to injury to something that is not part of their body.
I don't think that's true. Any neurotypical person who has a decent level of empathy, should have experiences where they felt pain when another person got hurt.
↑ comment by Gram_Stone · 2014-12-29T16:48:06.247Z · LW(p) · GW(p)
Separate paragraph by empty lines.
I did. I also tried putting a greater-than sign on each line as suggested elsewhere. I don't know what's going on with that.
They get the effect by having a stimulus applied at the same time to both hands. If the real hand moves the fake hand moves as well in the same way. That's how you create rapport. If two people are in strong rapport and you hurt one of them, the other also feels hurt.
This is too vague for me to make heads or tails of it, but in any event, some subjects actually mistook the rubber hand for their real hand. I also said that some subjects felt physical pain. This is not a matter of empathizing with the pain of something else. And we're talking about a table. I don't know anyone who's ever empathized with a table.
I don't think that's true. Any neurotypical person who has a decent level of empathy, should have experiences where they felt pain when another person got hurt.
It sounds like this is just turning into a semantic argument about the definition of the word 'pain.' You know how you feel when you see someone else get a paper cut on their finger? That's not the kind of experience that I'm talking about. You know how your finger feels when you get a paper cut? That's the kind of experience that I'm talking about. You know how you feel when you trip and you're on your way to kiss the ground? That's the kind of anticipation that I'm talking about.
↑ comment by ChristianKl · 2014-12-29T17:42:13.628Z · LW(p) · GW(p)
You know how your finger feels when you get a paper cut? That's the kind of experience that I'm talking about.
Do you actually have experience with this experiment and what it feels like or does your information come from the paper?
↑ comment by Gram_Stone · 2014-12-29T17:59:15.486Z · LW(p) · GW(p)
I have not been subjected to the experiment. Even if I were, I would most likely not feel physical pain because only a small selection of subjects did. I do not believe that the terms 'pain' and 'anticipation of pain' are contestable or capable of being confused with empathy. I'm tapping out because I don't believe that this conversation is productive.
↑ comment by ChristianKl · 2014-12-29T22:07:28.533Z · LW(p) · GW(p)
While not having done this experiment in particular, I do have experience in distinguishing a lot of the relevant qualia and what mimicry does for emotional transfer.
In a study they got 31/108 to feel pain when seeing images/clips.
The sensations they felt were most often described as “tingling”, followed by “aching”. Other descriptions included “sharp”, “shooting”, “throbbing”, “stabbing” and “tender”. The pain was described as lasting for “a few seconds”, “fleeting”, or “for a split second as soon as the picture appeared.”
That's a simple picture without any rapport building, and more than the 20% in the study you cited report feeling pain.
↑ comment by Gram_Stone · 2014-12-30T22:54:37.864Z · LW(p) · GW(p)
I thought that it would be prudent to include here what I said to undermind:
I don't think that this is a case of empathy because, as I mentioned in my conversation with him below, some subjects reported mistaking the rubber hand for their 'real' hand:
Some subjects reported that the illusion was so convincing that they found themselves wondering why their hand was so white or how they had bruised their hand (there was a small ink smudge on the fake hand).
Some subjects also withdrew their 'real' hand from the experimenter as if it were at risk of injury:
[D]uring pilot work many subjects behaved as if they anticipated pain when the rubber finger was bent back: they laughed nervously, widely opened their eyes, flinched, and even pulled their real hand away from the experimenter (sufficient instruction prevented subject noise and movement during the experiments reported here).
↑ comment by NancyLebovitz · 2014-12-29T14:29:49.179Z · LW(p) · GW(p)
Are you sure the qualia is all that rare? I thought it was common for people who drive to think of their cars as extensions of themselves.
Or do you mean that you can feel the process of incorporating an object into your sense of self? I can believe that would be very rare.
↑ comment by ChristianKl · 2014-12-29T16:31:56.882Z · LW(p) · GW(p)
I thought it was common for people who drive to think of their cars as extensions of themselves.
I haven't driven in any car in the last 5 years, which is the time over which I have become more conscious about my perception, so I can't really tell. I would have to ask someone who has that experience and who also has a reference for what I'm speaking about, to be certain.
Or do you mean that you can feel the process of incorporating an object into your sense of self? I can believe that would be very rare.
Yes, that's more what I mean, but it's very hard to find appropriate words. Some people in that state will tell you that they lose their sense of self. It takes a bit of meditation to get there.
↑ comment by Capla · 2014-12-30T04:29:10.316Z · LW(p) · GW(p)
I have become more conscious about my perception
Is there a way to train this?
↑ comment by ChristianKl · 2014-12-30T15:08:55.106Z · LW(p) · GW(p)
The biggest influence for myself is Danis Bois's perceptive pedagogy (the former English name was somatic-psychoeducation). The problem is that it's not a well-known method, and there are few resources in English. Reading "The Wild Region of Lived Experience: Using Somatic-Psychoeducation" is unlikely to be very productive if you don't have previous knowledge.
Focusing by Eugene Gendlin is a book that helped a few people in the LW-sphere. It gives you a clear 6-step process. A clear process is useful for learning, and the more you understand the subject domain, the more you can deviate from it.
comment by NancyLebovitz · 2014-12-29T13:49:30.901Z · LW(p) · GW(p)
We're not going to learn a thing about the effectiveness of rationality training from people who won't even listen to what we have to say.
Voted up for that.
Tentatively, it might work to teach rational methods of improving one's life in small ways with a gradual spread into more areas rather than starting with the idea of becoming rational about everything. That might attract people who are dubious about rationality (without knowing what it is) and/or don't want sweeping self-improvement.
The gradual and concrete approach might even have some rationality of its own-- it's a sort of informal testing of an idea.
↑ comment by Gram_Stone · 2014-12-29T13:54:19.414Z · LW(p) · GW(p)
Tentatively, it might work to teach rational methods of improving one's life in small ways with a gradual spread into more areas rather than starting with the idea of becoming rational about everything. That might attract people who are dubious about rationality (without knowing what it is) and/or don't want sweeping self-improvement.
I didn't think of it this way. I agree. (Also, is it considered superfluous in this community to comment only for the sake of agreeing with someone? That's conceivable to me.)
↑ comment by NancyLebovitz · 2014-12-29T14:26:56.376Z · LW(p) · GW(p)
Thanks, and my feeling about commenting just to agree is that it's okay if it's fairly rare-- something like under 1 in 50 comments (overall, not just your comments) per thread. Otherwise, just upvote.
On the other hand, while I probably have a pretty good sense of what's acceptable here, the question might be worth taking to an open thread.
comment by undermind · 2014-12-30T17:07:49.351Z · LW(p) · GW(p)
I'm skeptical that experiments involving rubber hands are an effective way to gain social status.
You have some decent arguments (though ChristianKl's critiques show where they need work), but I think the weirdness factor is just too high. Even if someone were personally convinced, what happens when they try to tell their friends?
Mainly I found it very cool to read about Ramachandran and the table. It's especially interesting in the context of embodied cognition. If our mental lives are determined and made meaningful by the fact that we have physical bodies that we live in and have to make do stuff, how do we reconcile this with the notion that "There is a sense in which one’s body image is itself a ‘phantom’: one that the brain constructs for utility and convenience." ?
↑ comment by Gram_Stone · 2014-12-30T22:52:17.457Z · LW(p) · GW(p)
I'm skeptical that experiments involving rubber hands are an effective way to gain social status.
Maybe that was a joke, but just in case it wasn't: It's not something I can prove here, but I said that because I assumed that the demonstration would impress someone in the same way that an optical illusion or a magic trick does. I'm not so impressed by magic tricks these days (unless they're absolutely nuts of course), but I can imagine that happening to a lot of people. And I think that it can be leveraged further than your average magician leverages a magic trick because the people at CFAR wouldn't just stop at 'magic tricks'; they would have other interesting exercises to show you: "Now that you've seen how your perceptions can be affected by heuristics, I'm going to show you how your thinking can be affected by them."
You have some decent arguments (though ChristianKl's critiques show where they need work), but I think the weirdness factor is just too high. Even if someone were personally convinced, what happens when they try to tell their friends?
Where do my arguments need work? If you're talking about ChristianKl suggesting that this could be empathy rather than the anticipation of pain, I don't think that that's the case here because, as I mentioned in my conversation with him below, some subjects reported mistaking the rubber hand for their 'real' hand:
Some subjects reported that the illusion was so convincing that they found themselves wondering why their hand was so white or how they had bruised their hand (there was a small ink smudge on the fake hand).
Some subjects also withdrew their 'real' hand from the experimenter as if it were at risk of injury:
[D]uring pilot work many subjects behaved as if they anticipated pain when the rubber finger was bent back: they laughed nervously, widely opened their eyes, flinched, and even pulled their real hand away from the experimenter (sufficient instruction prevented subject noise and movement during the experiments reported here).
I was really imagining this in the context of a CFAR workshop. I'm not sure how it would go for people trying to show/tell their friends about it either. I'm willing to bet that the success rate would be positively correlated with the amount and quality of the rationality training that the experimenter had received. What exactly do you mean by the 'weirdness factor?' Like: "Hey man; why are you coming towards me with that rubber hand?" I think that it would be pretty rare for people to just refuse to see the demonstration, because then they would look afraid or close-minded.
Mainly I found it very cool to read about Ramachandran and the table. It's especially interesting in the context of embodied cognition. If our mental lives are determined and made meaningful by the fact that we have physical bodies that we live in and have to make do stuff, how do we reconcile this with the notion that "There is a sense in which one’s body image is itself a ‘phantom’: one that the brain constructs for utility and convenience." ?
Slightly related to this and pretty cool in my opinion: I was thinking about this as I was falling asleep, and I looked at my body, and for a few seconds it looked like it was part of the environment instead of 'me.' It was pretty amazing.
comment by TheAncientGeek · 2014-12-29T19:03:44.003Z · LW(p) · GW(p)
You may be able to condition people out of naive realism, but that doesn't mean you will be able to condition them into Lesswrongian rationality.