Remind Physicalists They're Physicalists

post by lukeprog · 2011-08-15T04:36:29.438Z · LW · GW · Legacy · 76 comments

Weisberg et al. (2008) presented subjects with two explanations for psychological phenomena (e.g. attentional blink). Some subjects got the regular explanation, and others got the 'with neuroscience' explanation, which added purposely irrelevant verbiage saying that "brain scans indicate" that some part of the brain already known to be involved in the psychological process caused it to occur.

And yet, Yale cognitive science students rated the 'with neuroscience' explanations as more satisfying than the regular explanations.

Why? The purposely irrelevant neuroscience verbiage could only be important to the explanation if somebody thought that perhaps it isn't the brain that produces those psychological phenomena. But these are Yale cognitive science students. Somehow I suspect people who chose to study cognition as information processing are less likely than average to believe the mind runs on magic. But then, why would they be additionally persuaded by information suggesting only that the brain causes psychological phenomena?

In another study, McCabe & Castel (2008) showed subjects fictional articles summarizing scientific results and including either no image, a brain scan image, or a bar graph. Subjects were asked to rate the soundness of scientific reasoning in the article, and they gave the highest ratings when the article included a brain scan image. But why should this be?

I remember talking to a friend about free will. She was a long-time physicalist who liked reading about physics and neuroscience for fun, but she didn't read Less Wrong and she thought she had contra-causal (libertarian) free will.

"Okay," I said. "So the brain is made of atoms, and atoms move according to deterministic physical law, right?"

"Right," she said.

"Okay. Now, think about the physical state of the entire universe one moment before you decided to say "Right" instead of something else, or instead of just nodding your head. If all those atoms, including the atoms in your brain, have to move to their next spot according to physical law, then could you have said anything else than what you did say in the next moment?" (Neither of us understood many-worlds yet, so you can assume we're talking about a single Everett branch.)

She paused. "Huh. I'll have to think about that."

"Also, have you heard about those studies where brain scans told researchers what the subjects were going to do before the subjects consciously decided what they were going to do?"

"No! Are you serious?"

"Yup. Sometimes they could predict the subject's choice 10 seconds before the subject consciously 'made' the choice."

"10 seconds? Wow. I didn't know that."

I think that maybe the 'with neuroscience' explanations and brain scan images are more satisfying partly because they remind us we're physicalists. They remind us that reductionism marches on, that psychology is produced by physical neurons we can take pictures of.

Just like most people, physicalists walk around all day with the subjective experience of a 'unity of consciousness' and contra-causal free will and so on. If a physicalist isn't a researcher who studies all the latest successful reductions in neuroscience or biology or physics all week long, and doesn't read Less Wrong every day, then it's possible to get lost in the feel of everyday experience and thus be surprised by a headline like 'Brain Scanners Can See Your Decisions Before You Make Them.'

Sometimes even physicalists need to be reminded — with concrete reductionistic details — that they are physicalists. Otherwise their normal human anti-reductionistic intuitions may creep back in of their own accord. That's one reason it helps to study many sciences, so you have many successful reductions in your head, and see (at some resolution) the entire picture, from psychology to atoms. As Eliezer wrote:

Study many sciences and absorb their power as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole.

To her credit, my friend no longer believes in contra-causal free will.

Comments sorted by top scores.

comment by Vladimir_Nesov · 2011-08-15T11:38:47.319Z · LW(p) · GW(p)

I think that 'with neuroscience' explanations, and brain scan images in particular, are more satisfying because they remind us we're physicalists.

I don't see how you're justified in thinking that. It's too detailed a hypothesis to locate using that data.

Replies from: Tyrrell_McAllister, lukeprog
comment by Tyrrell_McAllister · 2011-08-15T12:58:28.299Z · LW(p) · GW(p)

An accusation of privileging a hypothesis will be more persuasive if you also point out other families of hypotheses that together still deserve the majority of the probability mass.

comment by lukeprog · 2011-08-15T18:47:32.208Z · LW(p) · GW(p)

But of course. It's just my guess, given these data and personal experience, kinda like when Eliezer made a guess about procrastination. It's the same guess that McCabe & Castel made.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-08-15T21:52:29.975Z · LW(p) · GW(p)

The wording you used doesn't reflect the extremely low probability. The hypothesis could be the best specific guess (which is still no good, just the best we have), and work as raw material for hypotheses that have more chance of actually capturing the situation (constructed by similarity to the first guess), but that can also be expressed by something like "my best guess is that something roughly like X might be happening", instead of "I think X is happening". If my best guess X is no good, I don't think that X is happening.

Also, there probably should be a new standard fallacy on LW, "appeal to Eliezer".

Replies from: lukeprog
comment by lukeprog · 2011-08-15T22:05:29.097Z · LW(p) · GW(p)

I updated my wording after your original comment on this topic. And I don't agree that its probability is 'extremely low'. I don't think it's the only explanation, merely that it's often part of the explanation. It seems you're taking me to be making a stronger claim than I'm intending to make.

My link to Eliezer's post wasn't meant to justify my practice, only to put it in context.

comment by Richard_Kennaway · 2011-08-15T08:00:50.439Z · LW(p) · GW(p)

I think you're reading too much speculative detail into this. Is it any different from persuading people to buy your drugs by showing men in white coats and saying "studies have shown"?

Replies from: lukeprog, mytyde
comment by lukeprog · 2011-08-15T09:25:41.633Z · LW(p) · GW(p)

The McCabe & Castel study found that brain scan images were more persuasive than bar graphs. So it's not just 'studies have shown', but brain scan images in particular. The same goes for the Weisberg et al. experiments. The descriptions already said 'Studies show..." But the 'with neuroscience' descriptions that mentioned brain scans in particular were more persuasive.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-08-15T11:43:27.004Z · LW(p) · GW(p)

The McCabe & Castel study found that brain scan images were more persuasive than bar graphs. So it's not just 'studies have shown', but brain scan images in particular.

It's not "brain scan images in particular", it's "brain scan images are more persuasive than bar graphs". Do you know the effect of images of cute kittens or people in lab coats? You can't draw a hypothesis this detailed around one data point.

Replies from: lukeprog
comment by lukeprog · 2011-08-15T18:41:12.622Z · LW(p) · GW(p)

Sure, yes. Brain scan images in particular are more persuasive than bar graphs and no images. I shall fight the urge to feel as though you nit-pick everything I say to death and instead genuinely thank you for your correction. :)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-08-15T19:31:19.216Z · LW(p) · GW(p)

Upvotes indicate that this is a natural nitpick to make, mostly independent of Vladimir's attitude.

Replies from: lukeprog
comment by lukeprog · 2011-08-15T20:16:52.257Z · LW(p) · GW(p)

Vladimir, I really do appreciate corrections. As you've seen, I update posts in response to them.

It's just that if you say 100 negative things to me in a row without saying a single positive thing, I start to get the impression that you think everything I write is bad, and I should stop writing. (If you doubt my impression, scroll through your last 100 comments that were replies to me.)

That's why I hope to gain an accurate impression of people's reaction to my work - so I can decide whether to keep writing.

If I get nothing but negative feedback from people or from a particular person, then I have to take guesses as to whether this is because (1) the vocalized feedback presents an accurate picture of their assessment of my work, or whether it's because (2) their vocalized feedback does not present an accurate impression of their assessment of my work (that is, they generally appreciate my writing), but they decide to only vocalize negative comments and never (or rarely) vocalize positive comments.

Does that make sense?

Replies from: Vladimir_Nesov, Wei_Dai, Vladimir_Nesov, shminux
comment by Vladimir_Nesov · 2011-08-15T22:36:17.313Z · LW(p) · GW(p)

(I do in general tend to have more pessimistic beliefs than average, which at least on average lands on the right side of the bias. I also don't hesitate to suspect that people don't know what they are thinking or doing and why, even if they explicitly describe what they think they think. And I'm more willing than usual to risk offending people to their face, where I believe I can get away with it. So I'll point out when I think something is wrong, where many people would prefer to change the topic or agree politely, reasoning that the flaw might be small and that the impoliteness outweighs the improvement. This could account for much of the difference in impression between my comments and others' comments.)

comment by Wei Dai (Wei_Dai) · 2011-08-17T09:13:42.573Z · LW(p) · GW(p)

It's just that if you [Nesov] say 100 negative things to me in a row without saying a single positive thing, I start to get the impression that you think everything I write is bad, and I should stop writing.

Would someone write 100 pieces of constructive criticism if they wanted you to stop writing? More likely they would just silently vote you down, or say "please stop writing".

Besides what Nesov already said, I think a major cause of the frequent nit-picking and misinterpretations is that you're following the (unfortunate, in my view) LW tradition of writing sequences that hide the overall point/conclusions until the end. I've made this complaint before (to someone else, but it's the same complaint).

In addition to what I said last time about telegraphing conclusions helping to avoid ambiguities, if I don't know what your overall conclusions are, then I can't tell which errors in a given post are relevant to your conclusions and therefore should be pointed out, and which can be safely ignored. And given how important this topic is, Nesov might think that it's safer to err on the side of too much rather than too little nit-picking. Also it's sometimes unclear which of your posts are meant to be part of your FAI-relevant meta-ethics sequence (as opposed to intended to help LWers improve their human rationality or are just of general interest to LW readers), so Nesov might unnecessarily hold them all to the same high standard intended for FAI-relevant discussion. For example, is your latest post "Are Deontological Moral Judgments Rationalizations?" supposed to be part of that sequence?

Replies from: lukeprog
comment by lukeprog · 2011-08-17T17:50:54.152Z · LW(p) · GW(p)

Fair enough. I'll try to make things clearer in some upcoming posts.

'Are Deontological Moral Judgments Rationalizations' is of course relevant to ethics but it's not technically part of my metaethics sequence.

comment by Vladimir_Nesov · 2011-08-15T22:12:15.791Z · LW(p) · GW(p)

Since you appear to either agree with particular items of my feedback, or alternatively I recognize my own confusion that led to disagreement, how does that make a bad impression of your work, or argue that you should stop writing? I think I just don't have anything substantial to say on the topics you write about (as often turns out only in retrospect), so I only react to what I read, and where the reaction is positive, it's usually not useful to express it. As I said recently, I think your contributions are good LW material.

You just don't cover the topics I care about, and various reasons conspire to make me misinterpret some of your writings as saying something I believe to be wrong, but every time you point out that they shouldn't be interpreted in the way that leads to the disagreement. The disagreement gets dissolved by stipulating more accurate definitions. This makes me a bit suspicious (that the reinterpretations are fake explanations of the absence of some of the errors I point out, ways to protect the argument), but I mostly concede, and wait for the connection to normativity you hint at, which should make your hidden position (and its relation to preceding material) clearer.

Replies from: lukeprog
comment by lukeprog · 2011-08-15T22:18:18.057Z · LW(p) · GW(p)

Thanks. This is helpful, and I believe it to be accurate. I do disagree with this part, though:

where the reaction is positive, it's usually not useful to express it

When I only get negative feedback, and yet my posts are upvoted, I don't know which parts are connecting with people. I only know which parts of my posts are upsetting to people, and which parts are wrong and need to be fixed.

Replies from: Vladimir_Nesov, Douglas_Knight
comment by Vladimir_Nesov · 2011-08-15T22:22:23.407Z · LW(p) · GW(p)

What kind of protocol do you envision? Detailed review is way too much work in most cases, a single perceived flaw is easy to point out, and parts that seem correct usually both cover most of the essay and are expected to be seen as correct by most readers.

(More detailed feedback could be gathered using a new software tool, I suspect, like voting on sections of the text, and then summarizing the votes over the text with e.g. its color. It would be more realistic than asking for a different social custom for the same reason normal voting works and asking for feedback about overall impression doesn't.)
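(For concreteness, here is a minimal sketch, in Python, of that section-voting idea: tally up/down votes per paragraph and map each paragraph's net score to a highlight color. Everything here, the names, the scoring rule, and the color mapping, is invented purely for illustration.)

```python
from dataclasses import dataclass

@dataclass
class Paragraph:
    text: str
    up: int = 0
    down: int = 0

    @property
    def score(self) -> float:
        """Net approval in [-1, 1]; 0 when there are no votes yet."""
        total = self.up + self.down
        return 0.0 if total == 0 else (self.up - self.down) / total

def highlight_color(score: float) -> str:
    """Map a score in [-1, 1] to a red-to-green hex color."""
    red = int(255 * (1 - score) / 2)
    green = int(255 * (1 + score) / 2)
    return f"#{red:02x}{green:02x}00"

# Hypothetical vote tallies for two paragraphs of an essay.
paragraphs = [
    Paragraph("An uncontroversial opening...", up=12, down=3),
    Paragraph("The contested claim...", up=4, down=9),
]
for p in paragraphs:
    print(highlight_color(p.score), p.text)
```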

Replies from: lukeprog
comment by lukeprog · 2011-08-15T22:52:02.613Z · LW(p) · GW(p)

One possible format is:

"I like X and Y. More like that, please. But I think B isn't quite right, because Z."

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-08-15T23:00:20.108Z · LW(p) · GW(p)

This could actually work... Fighting abundance of choice with sampling. I would modify it this way:

  • When making a correction or complaint as a top-level comment, choose one positive thing about the post, if any, and point it out first.

So this is a more informative form of "IAWYC, but..."

Replies from: lukeprog
comment by lukeprog · 2011-08-15T23:28:53.190Z · LW(p) · GW(p)

Exactly!

comment by Douglas_Knight · 2011-08-19T06:44:48.481Z · LW(p) · GW(p)

I think you are rationalizing. I think you simply want attention and praise and don't care so much about specific feedback. But I disagree with Vladimir: explicit personal attention and praise, while uninformative, are useful; they are better motivators than karma points.

I am also skeptical of people's ability to tell you useful things about what they liked in an article. No one is going to tell you that they were convinced by the irrelevant picture of a brain.

comment by Shmi (shminux) · 2011-08-15T20:35:41.444Z · LW(p) · GW(p)

You may need to check your priors.

Do Mr. Nesov's comments seem negative in general, not just to you? It is worth checking. How does he normally reply to the type of topics you cover? Maybe (apparently) being from the well-known land of pessimism affects his style, and you interpret a comment as negative when it is not meant to be so?

FWIW, I always click on your posts, though I rarely have anything to contribute.

Replies from: lukeprog
comment by lukeprog · 2011-08-15T20:57:11.174Z · LW(p) · GW(p)

Do Mr. Nesov's comments seem negative in general, not just to you?

Yes, though not always.

...at which point I should restate that I have a great deal of respect for Mr. Nesov's intelligence, rationality practice, and contributions to this site.

comment by mytyde · 2012-11-13T22:19:17.922Z · LW(p) · GW(p)

This is the crux. You can't take a small amount of empirical data, skip sociology, postulate a hypothesis which you don't intend to test, and then generalize from it. I'm not gonna downvote this thread because I don't think stating this hypothesis is bad; I just think its presentation is sloppy. Lukeprog, please don't take this too harshly; I make similar mistakes all the time.

comment by JenniferRM · 2011-08-15T05:11:11.079Z · LW(p) · GW(p)

One angle here (which you seem to implicitly advocate?) is that including pictures of brains and talking about brain components causes people to change their minds about cognitive/philosophical matters in specific directions. If the results of that exposure are positive, then it seems like it might be a good PR strategy, with interesting pedagogical applications if you were trying to teach certain lessons from psychology in a vivid and convincing way.

On the other hand, it also seems that the effects could be explained by certain kinds of priming mixed with the representativeness heuristic, rather than by detailed evidence in support of the precise claims being made. That is to say: it's not clear to me whether this phenomenon is a good way to explain things or just a "physical brain fallacy" (roughly: just because the brain is physical doesn't mean a particular claim about cognitive processes is true).

Imagine a control group who get a sales pitch (for a bad product) telling them that they should rationally calculate that it would fulfill their desires and make them happier to buy and use. The experimental group could get roughly the same pitch (for the same bad product), except their emotional reactions would be additionally described with reference to dopamine receptors and modulation of the amygdala and so on. If talking about the brain causes people to think the pitch is better and to buy a bad product, it would be a bad thing rather than a good thing.

You've read the linked articles and know more about their details than I do: does the inclusion of "brain talk" seem to function more like a fallacious trick or more like evidence? Does it look like the rhetorical technique can be used generically, or only to increase people's belief in things that are probably true, related to neurological facts, and that they were already refusing to accept due to some philosophical/emotional confusion having to do with the physicality of the brain?

Replies from: lukeprog
comment by lukeprog · 2011-08-15T05:15:26.393Z · LW(p) · GW(p)

Brain talk acts more like the 'physical brain fallacy' trick. See the last sentence of Weisberg et al.'s abstract:

Explanations of psychological phenomena seem to generate more public interest when they contain neuroscientific information. Even irrelevant neuroscience information in an explanation of a psychological phenomenon may interfere with people’s abilities to critically consider the underlying logic of this explanation. We tested this hypothesis by giving naïve adults, students in a neuroscience course, and neuroscience experts brief descriptions of psychological phenomena followed by one of four types of explanation, according to a 2 (good explanation vs. bad explanation) × 2 (without neuroscience vs. with neuroscience) design. Crucially, the neuroscience information was irrelevant to the logic of the explanation, as confirmed by the expert subjects. Subjects in all three groups judged good explanations as more satisfying than bad ones. But subjects in the two nonexpert groups additionally judged that explanations with logically irrelevant neuroscience information were more satisfying than explanations without. The neuroscience information had a particularly striking effect on nonexperts’ judgments of bad explanations, masking otherwise salient problems in these explanations.

Knowledge of what persuades others can be used for good or evil. I, of course, am hoping that examples of reductionism and so on will be used to persuade people of things that are probably true.

Replies from: christina
comment by christina · 2011-08-15T07:30:35.271Z · LW(p) · GW(p)

But if it can be used to explain both true and false hypotheses, it will be used to do both. If you are trying to convince people to be more rational, you should probably first convince them that 'explanations' that don't explain anything are not to be trusted.

Replies from: lessdazed, lukeprog
comment by lessdazed · 2011-08-15T17:55:51.587Z · LW(p) · GW(p)

But if it can be used to explain both true and false hypotheses, it will be used to do both.

And yet, I don't think that excluding brain talk is pure and neutral. Using non-brain talk undermines one's argument for both true and false hypotheses.

The message needs a messenger.

And yet! Some things are true, and others false. Using non-brain talk might be the generally less biasing thing, but it isn't a categorically unbiasing thing.

Replies from: christina
comment by christina · 2011-08-16T07:12:04.474Z · LW(p) · GW(p)

It was not my intention to emphasize the fact that these explanations mention the brain, but rather their lack of explanatory power for the premise being described or proved. It doesn't matter whether the statement that explains nothing is "neurological differences in brain structure are the underlying causes of schizophrenia" or "it is just common sense that the sky is blue". Neither statement appeals to me, because the writer is not using these perfectly good words to explain anything. If I read an explanation, I want it to explain something. If you want to use 7-syllable words, fine. If you want to use only words an average five-year-old knows, that is also fine. If you want every phrase to be achingly brilliant poetry, I have no problem with that. But if the words convey nothing, I will not be amused.

What the article above does is convey information about the strategy of using certain words to not convey information (and also to convey information that doesn't necessarily support the main argument, but sounds like it does). I find it useful in the sense that it helps me realize that certain ways of not conveying information can exploit common blind spots in myself and others. I hope this realization helps me to notice these things more often in others' writing, so that I can decrease the credibility I give such statements. And I hope it will help me to notice it more often in my writing, so I can remove such statements.

Replies from: lessdazed
comment by lessdazed · 2011-08-16T07:35:21.930Z · LW(p) · GW(p)

If there is a valid explanation involving the brain, the brain is more likely to be cited than if there isn't a valid explanation involving the brain.

So the absence of the word "brain" in a given explanation weakly implies that the true explanation does not feature the brain! This implication holds regardless of whether or not it is true.

So I think the only sentence of yours I disagree with is:

And I hope it will help me to notice it more often in my writing, so I can remove such statements.

I claim we're doomed, and can only choose from among biasing statements distorting along different vectors in the idea space in which the ideas of the person being persuaded are a point, and one's own ideas are a point.

Replies from: christina
comment by christina · 2011-08-17T05:12:44.090Z · LW(p) · GW(p)

I am unsure of the intent of the first three sentences you post above. I cannot figure out what relation they have to my post, although perhaps they are not intended as a response to it. I also am unsure what they are intended to illustrate. They seem to all be saying the same thing, and I cannot extract an explanation, description, or argument of any sort from them. If there was a point you wished to make with them that you would like me to understand, you will have to clarify.

Would you care to state the reason you think the existence of bias dooms us (I am assuming you mean humanity as a whole, here)? People can learn of the existence of bias. They will always have bias in some direction, but that doesn't mean that they can't learn to reduce their biases and better understand how the world works. Like an asymptote, one can get closer and closer to the truth, even if they cannot reach it. Do you feel the lack of perfection negates progress?

Replies from: lessdazed
comment by lessdazed · 2011-08-17T17:55:08.690Z · LW(p) · GW(p)

I have changed my mind, so I won't try and explain it, if that's OK?

I now hold to a more moderate but similar view; I will try and explain from scratch.

And I hope it will help me to notice it more often in my writing, so I can remove such statements.

So some words are biasing, and it may happen that for some concept, all relevant words are biasing. "Remove all biasing statements from my writing" is therefore a bad heuristic where there are no unbiasing statements; "remove a biasing statement from my writing when there is a less biasing statement I can use instead" is better.

Replies from: christina
comment by christina · 2011-08-18T04:20:57.444Z · LW(p) · GW(p)

Sure, that's up to you. If you prefer to explain only your new viewpoint, that's fine with me. But does your first statement cover the first three sentences of your previous post only, your initial response on bias only, or all of it? As I mentioned, I wasn't clear on the first three sentences at all. Still, feel free to explain or not explain however you wish.

I was at first confused by your inclusion (again) of my statement about removing non-informative concepts from my writing. Since I was not talking about removing all biasing statements from my writing, I wasn't sure why you interpreted it as such. Then I realized that lukeprog's article was talking about non-informative statements that also happen to be good at biasing readers in a certain direction. However, removing all non-informative statements of this sort is different from removing all biasing statements, which is how I read your interpretation of it. The set of all biasing statements is distinct from the set of all non-informative statements. For example, "Policy A causes the unnecessary deaths of 400 people every year" is a highly biasing statement, but also contains information (which may or may not be true, but that is an entirely different concern). On the other hand, "neurological reasoning occurs using the left side of the brain" could be used as a biasing non-informative statement in the context of convincing someone about a certain brain function (as discussed in lukeprog's article). Thus, the sets overlap but are not equal. I can see why you would think that removing all biasing statements is impossible. However, I think removing all non-informative statements (especially ones that happen to be strongly biasing) is not impossible, though perhaps difficult depending on the situation.

So it seems we essentially agree about biasing statements: they can be reduced, but not entirely eliminated. I am not sure what your position is on non-informative statements, however, as I don't think you have addressed that. Thanks for the explanation of your views, and upvoted for clarifying your position. I think I might understand some of what you are trying to say now. But feel free to let me know if you disagree.

comment by lukeprog · 2011-08-15T07:46:02.321Z · LW(p) · GW(p)

Agreed. I try to teach people about technical explanation, too.

Replies from: christina, Normal_Anomaly
comment by christina · 2011-08-15T08:23:55.489Z · LW(p) · GW(p)

Incidentally, the link to the spray-on clothing video was really cool. I want spray-on clothing; then I wouldn't have to go through the tiresome chore of searching for clothes in a store. And they would always fit. And I could make them look however I wanted. I wish I could get some to experiment with now, to see if I could generate some clothing that would be thick enough for my tastes (I think that would be my only concern).

comment by Normal_Anomaly · 2011-08-21T16:24:13.593Z · LW(p) · GW(p)

That video is cool, but I don't see how it relates to the conversation you're having. Are you saying that the guy's explanation of how the spray-on clothing works is a good one, or a bad one, or is your point something else entirely?

Replies from: lukeprog
comment by lukeprog · 2011-08-21T17:18:45.666Z · LW(p) · GW(p)

HAHAHAHA. That was a copy/paste fail. I've updated the link to go where I meant it to go now; the spray-on clothing video has nothing to do with technical explanation. :)

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-08-21T23:24:10.818Z · LW(p) · GW(p)

Okay, that makes more sense. That's where I thought the link was going before somebody mentioned spray-on clothing in the reply.

For people who want to see the spray-clothing video, it's here.

comment by utilitymonster · 2011-08-16T14:57:26.492Z · LW(p) · GW(p)

A simple explanation is that using phrases like "brain scans indicate" and including brain scan images signals scientific eliteness, and the halo effect (or ordinary reasoning) causes subjects to increase their estimate of the quality of the reasoning they see.

comment by christina · 2011-08-22T05:53:15.439Z · LW(p) · GW(p)

I've decided to upvote your article for presenting a concise summary of this topic and also, very importantly, including links to the full PDFs of the academic articles Weisberg et al. (2008) and McCabe & Castel (2008) which you used to support your argument. This is not always possible, but I think it is always to be commended when it is.

However, having now read both of these articles in their entirety, I would like to point out what I believe to be an important omission in your summary of the Weisberg et al. (2008) article.

First, you state:

And yet, Yale cognitive science students rated the 'with neuroscience' explanations as more satisfying than the regular explanations.

You also state:

Somehow I suspect people who chose to study cognition as information processing are less likely than average to believe the mind runs on magic.

You conclude with:

Sometimes even physicalists need to be reminded — with concrete reductionistic details — that they are physicalists.

However, I obtain an entirely different conclusion from this article. First, I shall summarize what I believe to be the salient points of Weisberg et al. (2008):

  • Novices (i.e., those who had not chosen to study neuroscience) found good explanations satisfying and bad explanations unimpressive. Add useless neuroscience and what happens? Good explanations remained pretty much just as good, but bad explanations suddenly looked a whole lot better. As stated in the article:

Post hoc tests revealed that although the ratings for good explanations were not different without neuroscience (M = 0.86, SE = 0.11) than with neuroscience (M = 0.90, SE = 0.16), ratings for bad explanations were significantly lower for explanations without neuroscience (M = −0.73, SE = 0.14) than explanations with neuroscience (M = 0.16, SE = 0.16)

  • Sadly, students who chose to study neuroscience judged both good and bad explanations more satisfying when they contained neuroscience, despite the fact that these additional frills were worthless:

Unlike the novices, the students judged that both good explanations and bad explanations were significantly more satisfying when they contained neuroscience, but the bad explanations were judged to have improved more dramatically, based on a comparison of the differences in ratings between explanations with and without neuroscience [t(21) = 2.98, p < .01]

  • Thankfully, neuroscience experts were not fooled. Bad explanations were still rated as bad. Also, the good explanations were rated as worse when the neuroscience information was included:

Good explanations with neuroscience (M = −0.22, SE = 0.21) were rated as significantly less satisfying than good explanations without neuroscience [M = 0.41, SE = 0.13; F(1, 46) = 8.5, p < .01]. There was no change in ratings for the bad explanations (without neuroscience M = −1.07, SE = 0.19; with neuroscience M = −0.87, SE = 0.21).

I think the problem is not that the neuroscience students needed to be reminded that the mind does not run on magic. I would instead say that they give too much value to the explanatory power of neuroscience, even when superior explanations exist. I do not think this is surprising, given that these are people who (a) are fairly untrained and (b) have chosen to study neuroscience, and therefore are more likely than the general population to have a highly favorable impression of it.

The conclusion I would draw from this article is that irrelevant neuroscience information should generally not be included unless you feel your argument is shoddy and your main intent is to convince novices that you are right. Only neuroscience students are more impressed with the addition of useless neuroscience in any explanation, and they are presumably not a large portion of the general population. I am certainly not confident that this result can be generalized to anyone with any passing interest in neuroscience, or anyone who wants to be a rationalist. Also, neuroscience experts will really not be impressed with the addition of such irrelevant frills. This says to me that the best way to promote the proper understanding of the way our minds work is to never include neuroscience information unless it logically adds value to an argument and to train students in specialized fields like neuroscience to value logic above the appearance of something being about neuroscience. Apparently, this happens already, since experts seem to hate the useless neuroscience information so badly that it taints even the good explanations in their eyes.
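To make the novice result concrete, here is a minimal sketch of the "difference of differences" arithmetic, using only the novice cell means quoted above from Weisberg et al. (2008). This is illustrative arithmetic on the reported means, not a reanalysis of the raw data, and the variable names are my own:

```python
# Novice mean satisfaction ratings, as quoted above (Weisberg et al. 2008).
means = {
    ("good", "without_neuroscience"): 0.86,
    ("good", "with_neuroscience"): 0.90,
    ("bad", "without_neuroscience"): -0.73,
    ("bad", "with_neuroscience"): 0.16,
}

# How much does adding irrelevant neuroscience shift each rating?
good_shift = means[("good", "with_neuroscience")] - means[("good", "without_neuroscience")]
bad_shift = means[("bad", "with_neuroscience")] - means[("bad", "without_neuroscience")]

print(f"shift for good explanations: {good_shift:+.2f}")  # +0.04 (negligible)
print(f"shift for bad explanations:  {bad_shift:+.2f}")   # +0.89 (large)

# The interaction: irrelevant neuroscience mostly rescues *bad* explanations.
print(f"difference of differences:   {bad_shift - good_shift:+.2f}")  # +0.85
```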

In regards to McCabe & Castel (2008), I wonder if people simply feel the concrete brain image is more comprehensible to them than the abstract graphs (both the simple and complex ones). It may even be that it takes them less time to understand the intent of the brain scan images because of this, and therefore they like them better. I wonder if other images that also concretely demonstrate an effect would be rated higher than ones that don't. Also, perhaps the brain information is preferred because it conveys more information than the bar graph but is also easier to grasp than the complex graph.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-08-16T00:18:50.628Z · LW(p) · GW(p)

But these are Yale cognitive science students. Surely they don't think the mind runs on magic, right?

Wow. I had no idea Yale cognitive science students had reached the astronomical level of competence where we ought to be surprised when they make simple mistakes. I assume that none of them are religious, either?

Replies from: lukeprog
comment by lukeprog · 2011-08-16T00:56:47.599Z · LW(p) · GW(p)

You're right, I said that incorrectly. I should have said that I suspect people are more likely to choose to study cognition as information processing if they think the mind doesn't run on magic. I'll try to fix my wording without being too verbose.

comment by Leon · 2011-08-16T08:07:41.866Z · LW(p) · GW(p)

I have another possible explanation, which I think deserves a far greater "probability mass": images make scientific articles seem more plausible for (some of) the same reasons they make advertising or magazine articles seem more plausible -- i.e., pre-cognitive reasons which may have little to do with the articles' content being scientific. McCabe and Castel don't control for this, but it is somewhat supported by their comparison of their study with Weisberg's:

The simple addition of cognitive neuroscience explanations may affect people’s conscious deliberation about the quality of scientific explanations, whereas the brain images may influence a less consciously controlled aspect of ratings in the current experiments.

"-Scientific content, -scientific images" includes most advertising, which is pretty obviously made more convincing through images. For an example of "+scientific content, -scientific images", think of the many articles in (say) New Scientist that are made more pleasant (and quite possibly more convincing) by more-or-less purely aesthetic graphics.

I can also think of some "less consciously controlled" reasons that are science-specific. Images of brain scans lend a kind of "hard science" sheen to the articles' claims -- in much the same way that CGI molecules spinning around hair follicles add to shampoo advertising's claims of sheen ("-scientific content, +scientific images"). McCabe & Castel again:

This sort of visual evidence of physical systems at work, which is typical of ‘‘harder’’ sciences like physics and chemistry, is not typically apparent in studies of cognition, where the evidence for cognitive processes is indirect, by nature. Indeed, it is important to note that while brain images give the appearance of direct measurement of the physical substrate of cognitive processes, techniques like fMRI measure changes in relative oxygenation of blood in regions of the brain, which is also indirect. Of course, it is unlikely that this subtlety is appreciated by lay readers.

In other words, images of brain scans create the impression that underlying physical mechanisms are better understood than they actually are. This is also an issue in pop science reporting:

[...] many cognitive neuroscientists have expressed frustration at what they see as the oversimplification of their data, and have suggested that efforts be made to influence media coverage of brain imaging research to include discussion of the limitations of fMRI, in order to reduce the misrepresentation of these data.

So how does this study pertain to physicalism? As I see it, this study underscores the ease with which intelligent people -- including physicalists -- can be fooled into thinking that scientific studies explain more than they do by the use of overly-concrete, hard-science-flavored imagery (and language). It shows how easy it is to jump from an image of a presumed physical substrate for some phenomenon to the belief that we better-understand that phenomenon. In other words, it shows how the impression of reductionism can function as a curiosity-stopper.

As I understand it, that is a common criticism of reductionism in practice.

Also, this is why I'm uncomfortable with the overuse of overly-precise terms from maths and science -- like referring to one's own "probability mass" on Less Wrong, or the Churchlands bemoaning their "serotonin levels" rather than saying they feel horrible (see here, p. 69). Sometimes an unwarranted science-y aesthetic can mislead.

comment by orthonormal · 2011-08-15T17:19:32.380Z · LW(p) · GW(p)

Is it a fair restatement to note that people (physicalists included) get quite different priming effects from 'mind' and 'brain'? The first makes us think of our subjective experience, the second makes us think of a physical object.

I've certainly noticed that reductionist arguments are more convincing to others when I use 'brain' in place of 'mind'.

Replies from: fubarobfusco
comment by fubarobfusco · 2011-08-16T05:12:43.200Z · LW(p) · GW(p)

Plenty of people who are ostensibly physicalists still seem to alieve that there is something spooky going on in the mind. They seem comfortable with the idea that physical-chemical-biological processes underlie the mind, without being ready to deal with the consequence that these processes constitute the mind.

comment by malthrin · 2011-08-24T14:51:36.467Z · LW(p) · GW(p)

Here's a piece of supporting evidence for your theory: http://www.economist.com/node/21526321

In particular, the second study. There were four statements of a patient's condition after a traumatic injury: 1) David is healthy and fully recovered. 2) David passed away. 3) David died, was embalmed at the morgue, and is now in the cemetery, in a coffin, underground. 4) David is in a persistent vegetative state.

Group A rated the cognitive function on options 1, 2, and 4. Group B rated the cognitive function on options 1, 3, and 4. Non-religious folks - i.e., materialists - did not rate 2 and 3 equally relative to 4. It seems reasonable to conclude that the more detailed phrasing of option 3 "reminded them" to look at the situation from a materialist perspective.

comment by Peterdjones · 2011-08-16T17:24:49.671Z · LW(p) · GW(p)

Of course not all libertarianism is "contra causal", and of course complete physical determinism isn't a fact.

Replies from: gjm
comment by gjm · 2011-08-22T16:06:54.703Z · LW(p) · GW(p)

Your second link gives me an error: "The specified request cannot be executed from current Application Pool".

The first link doesn't appear to me to justify the statement that "of course not all libertarianism is 'contra causal'". The Wikipedia article makes reference to a class of libertarian theories that don't involve a non-physical mind overriding causality, but the only example of such a theory it says anything about is Kane's, and it's far from clear to me that Kane's notion of free will is really libertarian (for the reason given in the article and ascribed there to Randolph Clarke).

If you define "libertarianism" as meaning only that free will and strict determinism are incompatible, then I agree and probably Luke does too: "of course" libertarianism needn't be contra-causal. (Well ... I suppose it depends on exactly how you define "contra-causal". I'd have thought that with that definition of "libertarianism", it would be natural to define "contra-causal" as "not deterministic", and then libertarian free will --> contra-causal free will after all.) But someone who believes, e.g., that free will = determinism + chance is "libertarian" in that sense, and that's surely neither what Luke had in mind nor what most other people have in mind when they talk about (metaphysical) libertarianism.

(I don't much like the term "contra-causal", though. After all, libertarians commonly don't say that free choices are uncaused but that they (and not, e.g., any merely physical process) caused those choices. "Contra-physical" would get nearer to the heart of the matter.)

Replies from: Peterdjones
comment by Peterdjones · 2011-08-27T20:51:06.918Z · LW(p) · GW(p)

Your second link gives me an error:

Now amended.

The first link doesn't appear to me to justify the statement that "of course not all libertarianism is 'contra causal'". The Wikipedia article makes reference to a class of libertarian theories that don't involve a non-physical mind overriding causality, but the only example of such a theory it says anything about is Kane's, and it's far from clear to me that Kane's notion of free will is really libertarian (for the reason given in the article and ascribed there to Randolph Clarke).

It's far from clear to me that the objection sticks for the reasons also given in the article. But just about everything is disputable in philosophy. So there is no clear cut fact that libertarianism is "contra causal".

I don't much like the term "contra-causal", though

Neither do I.

comment by christina · 2011-08-15T07:58:08.484Z · LW(p) · GW(p)

On the subject of acausal free will, I wonder if it is simply a misinterpretation of the benefits that we can gain through our choices. While we don't have free will in the sense of our choices being uncaused, we do have more choices available to us, than say, a cow or a spider. So we have the freedom to choose better, even if not the freedom to choose whatever. So here I am hypothesizing that people interpret greater intelligence as greater acausal free will.

comment by lukeprog · 2012-10-06T05:27:50.243Z · LW(p) · GW(p)

Cognitive scientists share a number of basic assumptions... the most fundamental driving assumption of cognitive science is that minds are information processors... Almost all cognitive scientists are convinced that in some fundamental sense the mind just is the brain.... Few, if any, cognitive scientists are dualists, who think that the mind and the brain are two separate and distinct things.

Bermudez, Cognitive Science, page 6.

comment by Raemon · 2011-08-16T15:37:51.196Z · LW(p) · GW(p)

Thank you for the button-pressing report. I've been looking for something like that for a while. (Well, by "looking" I probably mean "sort of wishing I'd accidentally stumble upon it.")

comment by PhilGoetz · 2011-08-22T13:27:31.994Z · LW(p) · GW(p)

Nice, modulo Christina's comment below. But I wouldn't place any stock in time-delay button-pressing experiments. They are only surprising if you both a) believe in free will, AND b) believe that the freely-willed action, and the conscious experience of making that decision, must be simultaneous.

There is no reason to expect this, and many reasons to expect it not to be the case, even if you believe in free will. I don't know if it's even meaningful to ask "when" a perception occurred - your brain may present you with a percept, and backdate it or forward-date it.

comment by Mitchell_Porter · 2011-08-15T07:14:40.770Z · LW(p) · GW(p)

It's just that physicalists take the controlled and repeated findings of physics and neuroscience as being stronger evidence than their own subjective experience is.

This is not a straightforward matter. There are people who deny the existence of colors, of time, of any sort of will, or any sort of subjective experience, on the basis of "physicalism" or "science". Physics and neuroscience actually contain no such thing as "subjective experience". Do you therefore conclude that there is no such thing at all? No, you believe it's there and it has a semi-obscure relationship to the physical facts, and you use what you hear from science to adjust your beliefs about the nature of that relationship, and also to adjust your attitude towards the naive beliefs that arise naturally from your experience.

What's actually going on in the minds of people who make private attitude adjustments in order to be good little physicalists is itself best described as a subjective process. The complex of beliefs peculiar to this site, as recently summarized in the Gospel According to Zed, is a great case study in such self-shaping of subjectivity. It's subjectively experienced, it's subjectively willed, and it's done in response to a subjectively imagined idea of what Science is saying. This is true both for the more spectacular ontological commitments that people make and break, and for the epistemological adjustments performed in response to the psychological demonstration of the fallibility of judgment. And I'd say it usually has about as much basis as the belief of someone who thinks there is a gay gene because they remember hearing about it on TV. Science can be wrong, the popular description of what science says can be wrong, and the private implications that a person draws from this can be wrong.

Replies from: lessdazed, Richard_Kennaway, lukeprog
comment by lessdazed · 2011-08-15T08:54:36.530Z · LW(p) · GW(p)

There are people who deny the existence of colors, of time, of any sort of will, or any sort of subjective experience

Do they deny the existence of rainbows?

comment by Richard_Kennaway · 2011-08-15T08:25:03.296Z · LW(p) · GW(p)

No, you believe [subjective experience is] there and it has a semi-obscure relationship to the physical facts

I think you understate the problem. The relationship is totally obscure.

On the one hand, there is the "it's all made of atoms" tradition that got started two centuries ago, which, together with the Baconian idea that you have to look at nature to discover anything about it, has proved enormously successful everywhere it has been applied, and continues to be so.

And on the other hand, there is subjective experience, the problem with which is that no-one has any idea at all of how such a thing could possibly exist in a world made of atoms. Not only do we not know how it arises, we cannot see any way it could possibly arise from atoms. The two things appear absolutely, utterly incompatible.

But hardly anyone looks squarely at that conflict and acknowledges it. Instead, they confabulate to fill the gap. Some invent ontologically fundamental mental entities, which amounts to no more than plastering labels like "soul" or "the divine" over their ignorance. Some deny the existence of subjective experience. Some give explanations only of how we come to talk about subjective experience, but leave the thing itself untouched -- p-zombie theories. Some come up with explanations that amount to finding a correlation with some observable physical phenomenon and identifying it with that -- as if, in cruder terms, one were to explain the mind by saying it's the brain, or, to take such fake explanation to the point of absurdity, to explain the mind by saying it's made of atoms.

This is more than just not knowing how it works: nobody knows how it could possibly work.

Replies from: lessdazed
comment by lessdazed · 2011-08-15T09:23:07.998Z · LW(p) · GW(p)

If one of the "confabulations" were true, how would you know?

Likewise, if no one knew how it worked but thought they knew how it could possibly work, how would you know whether they were right, short of having a full explanation?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-08-15T10:59:21.506Z · LW(p) · GW(p)

If one of the "confabulations" were true, how would you know?

Which one? Ontologically fundamental mental entities? Show me one that isn't an empty label. The other three -- denying the existence of subjective experience, p-zombie explanations, and interpreting correlation with a physical phenomenon as identity -- all miss the mark. They are not things that even could be explanations. That's probably not an exhaustive list -- it can't be, if there really is an explanation -- but vague hypotheticals don't help. Show me a purported explanation of the existence of subjective experience that isn't an example of one of these four fallacies, and then there will be something to talk about.

Likewise for if no one knew how it worked, but thought they knew how it could possibly work; how would you know if they were right aside from having a full explanation?

Well, how would you know if someone was right about the mechanism of high-temperature superconductivity? You would look at whatever they did -- theoretical modelling, experiments, whatever -- and judge whether the reasoning and the experimental setup were sound. You would compare it with other work in the field. You might do theoretical and experimental investigations of your own.

This is intended to be a simple answer to a simple question. The same sort of processes are how you would judge any explanation of the existence of subjective experience.

Here, for example, is an imaginary explanation of consciousness: control systems are conscious! Firstly, even accepting that all living organisms are chock full of control systems, this is an example of fallacy no. 4: finding a physical phenomenon apparently causally linked with consciousness and saying the two are the same. But leaving that aside, one can very easily find control systems in the human brain that are inaccessible to consciousness: motor control. When you move an arm you are not aware of the individual muscles you are operating. Even when you learn a complex motor skill like juggling, the processes by which the cerebellum learns the task are completely inaccessible to you. So there's a large and complex collection of control systems sitting right next to and intimately connected with what appears to be the physical substrate of consciousness, and is made of very similar stuff, yet itself is devoid of the property. This refutes the proposed explanation.

Easy, yes?

Replies from: lessdazed
comment by lessdazed · 2011-08-15T15:37:24.895Z · LW(p) · GW(p)

They are not things that even could be explanations

I don't understand what single thing, if any, disqualifies them. Tell me if I'm wrong, but I think you would agree they have unique issues, just as "being an empty label" is something that won't be wrong with, say, denying subjective experience.

You made a good point about the inexhaustibility of wrong explanations, which I suppose is true for everything. So I certainly don't ask for anything like a complete list of bad explanations and their problems! But of the other three you mentioned, do they share a problem, or what are their unique problems, or is it too complicated to explain in a comment? Can you explain why the other three are hopeless as well as you did for the first?

This is a thing it might be hard to do well. Were I called upon to support my claim that "'being an empty label" is something that won't be wrong with, say, denying subjective experience, I might not last long against an honest skeptic before resorting to profanity and threats of violence if they disagreed. "Because they say there is nothing so they are not saying that there is something where the "something" is literally no more than the thing. Because there is no thing. %@*!" But please try.

Well, how would you know if someone was right about the mechanism of high-temperature superconductivity? You would look at whatever they did -- theoretical modelling, experiments, whatever -- and judge whether the reasoning and the experimental setup were sound.

I'm trying to get at the difference between knowing that no one has a perfect model of something and knowing that no one has the correct framework for thinking about how to build a working model. Going by "This is more than just not knowing how it works: nobody knows how it could possibly work," building a working model shows one knows how something works, and the absence of one is evidence that someone does not know how something works.

But how does one distinguish the various ways to not have a perfect model? What evidence is there about whether people are working on something correctly, aside from a complete and finished explanation?

To put it another way, what stops one from being able to point at an unsolved problem, say one universally admitted to be unsolved, and declaring no one has any idea how to think about it, or that no one knows how it could possibly work, or similar?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-08-16T09:03:02.897Z · LW(p) · GW(p)

I don't understand what single thing, if any, disqualifies them. Tell me if I'm wrong, but I think you would agree they have unique issues, just as "being an empty label" is something that won't be wrong with, say, denying subjective experience.

You made a good point about the inexhaustibility of wrong explanations, which I suppose is true for everything. So I certainly don't ask for anything like a complete list of bad explanations and their problems! But of the other three you mentioned, do they share a problem, or what are their unique problems, or is it too complicated to explain in a comment? Can you explain why the other three are hopeless as well as you did for the first?

I feel a bit like I'm Eliezer explaining the instant failure modes of most AGI research (but not as smart), and that there could be a whole sequence of postings on the instant failure modes of explanations of consciousness.

Well, I don't think I can write those postings, or at least, devote the many days it would take me. Just some brief notes here amplifying the fallacies with examples.

What evidence is there about whether people are working on something correctly, aside from a complete and finished explanation?

A partial and unfinished explanation. But it must go some distance: it must suggest practical experiments and predict their results. (Thought experiments do not count.) Consider the four different fallacies I described by this standard:

  1. Empty labels: saying consciousness is "the soul", "a spark of the divine within us", "self-awareness", etc. fails to constrain expectations.

  2. Denying the existence of subjective experience: well, we do have it. At least, I do, and I've no reason to suppose I'm exceptional in this. (Those who seriously deny it might be exceptions in the other direction.) So this one has the virtue of constraining expectations but is instantly refuted by observation. It amounts to sticking one's fingers in one's ears and going "la la la can't hear you!" Arguments against the existence of subjective experience (consciousness, qualia, etc.) generally take the form of arguing against other people's arguments in favour. Since no-one has a good account of what it is, it is not difficult to demolish their bad accounts. This is like refuting the phlogiston theory to prove that fire does not exist.

  3. In the p-zombie category is Minsky's "society of mind", which gives a hypothetical account of how a system might come to talk to itself about itself in the ways that we do. But how we talk about ourselves and how we feel about ourselves are two different things, and the latter is left unaddressed. Besides, there are plenty of computer systems that talk to themselves about themselves, and we see no reason to attribute consciousness to them (a toy example follows below). In the form in which Greg Egan expressed the theory in his story "Mr. Volition", consciousness is the piece of brain that does consciousness, just as the cerebellum is the piece of brain that does motor control. This is no better than any other homunculus theory: it is either passing the buck or asserting that we are all philosophical zombies, beings that talk about consciousness without having it.

  4. Physical correlates: Neuroscience is always finding more and more physical correlates of mental phenomena, from the fact that gross lesions to various brain locations produce predictable patterns of cognitive impairment, to the results of live brain imaging during task performance. This is compelling evidence that the brain must be either the physical substrate of consciousness or an interface with something else. Neither alternative goes very far. We still don't know how the brain or anything else made of atoms could be a physical substrate for consciousness, however compelling the evidence that it is. Contrast this with the fact that we do know how ever-so-slightly impure silicon can be a substrate for computation. And the brain as an interface to something else fares even worse, as we have no idea what that something else could be. The soul? See (1).

So those are four basic ways in which attempts to explain consciousness can go wrong. I have yet to see an attempt that doesn't fail one or more.
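To make the point in (3) about self-describing systems concrete, here is a minimal sketch in Python (the class and its names are invented for illustration, not taken from Minsky or anyone else): a program that generates running reports about its own internal state.

```python
import random

class SelfReporter:
    """A toy system that monitors its own state and talks about itself."""

    def __init__(self):
        self.energy = 1.0
        self.log = []  # the system's running self-description

    def act(self):
        action = random.choice(["explore", "rest"])
        self.energy += 0.1 if action == "rest" else -0.2
        # Generate a report about the system's own state and behaviour.
        self.log.append(f"I chose to {action}; my energy is now {self.energy:.1f}.")

    def report(self):
        return " ".join(self.log)

agent = SelfReporter()
for _ in range(3):
    agent.act()
print(agent.report())  # e.g. "I chose to rest; my energy is now 1.1. ..."
```

Self-description here is nothing but string formatting over directly accessible state, which is why the mere existence of such systems settles nothing about experience.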

Replies from: BobTheBob, soreff
comment by BobTheBob · 2011-08-18T14:00:52.285Z · LW(p) · GW(p)

The comments of yours I've read are always clear and insightful, and usually I agree with what you say. I have to disagree with you here, though, about your supposed second fallacy.

Arguments against the existence of subjective experience (consciousness, qualia, etc.) generally take the form of arguing against other people's arguments in favour. Since no-one has a good account of what it is, it is not difficult to demolish their bad accounts. This is like refuting the phlogiston theory to prove that fire does not exist.

I disagree. Arguments against qualia typically challenge the very coherence of anything which could play the desired role. It's not like trying to prove fire doesn't exist, it's like trying to prove there is no such thing as elan vital or chakras.

I deny the existence of UFOs. It's pretty clear what UFOs are - spaceships built and flown to Earth by creatures who evolved on distant planets - and I can give fairly straight-forward probabilistic reasons of the kind amenable to rational disagreement, for my stance.

I (mostly) deny the existence of God. Apologies for the bluntness if you're a theist, but I don't think it's at all clear what God is or could be. Every explication of God I've ever encountered either involves properties which permit the deduction of contradictions (immovable rocks/unstoppable forces and what-not), or is so anodyne or diffuse as to be trivial ('God is love' - hence the 'mostly'). There is enough talk in our culture about God, however, to give meaning to denials of His existence - roughly, 'All (rather, most of) this talk which takes place in houses of worship and political chambers involving the word 'God' and its ilk involves a mistaken ontological commitment'.

Do I deny the existence of consciousness, or subjective experience? If my wife and I go to a hockey game or a play, we in some sense experience the same thing - there is a common 'objective' experience. But equally we surely have in some sense different experiences - she may be interested or bored by different parts than I am, and will see slightly different parts of the action than I. So clearly there is such a thing as subjective experience, in some sense. This, however, is not what is at issue. Roughly, what we are concerned about is a supposed ineffable aspect of experience, a 'what it is like'. I deny the existence of this in the sense in which I deny the existence of God. That is, I have yet even to see a clear and coherent articulation of what's at issue. You imply the burden of argument is with the deniers; I (following Dennett and many others) suggest the burden is with defenders to say what it is they defend.

Are qualia causally efficacious, or not? If they are, then they are in principle objectively detectable/observable, and hence not worthy of the controversy they generate (if they have a causally efficacious 'aspect' and a non-efficacious one, then just factor out the causally efficacious aspect, as it plays no role in the controversy). On the flip side, of course, if qualia are not causally efficacious, then they aren't responsible for our talk of them - they aren't what we're presently talking about, paradoxically.

It seems to me the best case for exponents of consciousness is to force a dilemma - an argument pushing us on the one hand to accept the existence of something which on the other appears to be incoherent (as per just above). But I have yet to see this argument. Appeals to what's 'obvious' or to introspection just don't do it - the force of the sort of argument above, and of the several others adduced by Dennett et al., clearly wins out over thumping one's sternum and saying 'this!', simply because the latter isn't an argument. The typical candidates for serious arguments in this vein are inverted-spectrum or black-and-white-Mary-type arguments, but it seems to me they always just amount to the chest thumping in fancy dress. I would be interested to hear of good candidate arguments for qualia, though, and to hear any objections if you think the foregoing is unfair.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-08-27T14:05:13.716Z · LW(p) · GW(p)

Arguments against qualia typically challenge the very coherence of anything which could play the desired role. It's not like trying to prove fire doesn't exist, it's like trying to prove there is no such thing as elan vital or chakras.

I think there's some hindsight bias there, in the case of chakras. It is by no means obvious that these supposed centres of something-or-other distributed along the spine and in the head don't exist. One might be sceptical purely on account of the sources of the concept being mystical or religious, but the same is true of meditation, which has been favourably spoken of by rationalists. It's only by actually looking for structures in the places where the chakras are supposed to be and not finding anything that could correspond to them that the idea can be discarded. There is also (I think) the fact that different traditions assert different sets of chakras.

"Élan vital" was always a fake explanation for a phenomenon -- life -- that no-one understood. It's like a doctor listening to a patient's symptoms and solemnly making a diagnosis by repeating the symptoms back to the patient in medical Latin. No-one talks about élan vital now because the subject matter succumbed to investigation based on "stuff is made of atoms".

But consciousness is different -- we experience it. We have no explanation for it, just the experience -- the fact that there is such a thing as experience. "Consciousness", "sensation", "experience", "qualia", and so on are not explanations, just names for the phenomenon.

So clearly there is such a thing as subjective experience, in some sense. This, however, is not what is at issue.

To me, this is exactly what is at issue. We have subjective experience, yet we have no idea how there can possibly be such a thing. All discussions of this, it seems to me, immediately veer off into people on one side putting up explanations of what it is, and people on the other knocking them down. The fact of experience remains, ignored by the warring parties.

It seems to me the best case for exponents of consciousness is to force a dilemma - an argument pushing us on the one hand to accept the existence of something which on the other appears to be incoherent (as per just above).

There is no case to be made. Either you have this experience or you do not. I have it and I think that most people do. What people -- at least, those who do have subjective experience -- need to do first is recognise that there is a problem:

  1. I have subjective experience.

  2. It is impossible for there to be any such thing as subjective experience.

All of the argument is about proposed solutions to this problem. But refuting every solution to a problem does not solve the problem.

comment by soreff · 2011-08-16T14:21:16.565Z · LW(p) · GW(p)

In the p-zombie category is Minsky's "society of mind", which gives a hypothetical account of how a system might come to talk to itself about itself in the ways that we do. But how we talk about ourselves and how we feel about ourselves are two different things, and the latter is left unaddressed.

I find this class of explanations plausible, myself. I find it at least imaginable that my "feeling" of consciousness basically is the stream of potential reports about myself that I could voice, if there were an interested listener to voice them to. To put it another way: Are you quite sure that the way we feel about ourselves isn't the same as the way we talk about ourselves (except for the inhibition of actual vocalization)? How would one show that the stream of potentially vocalized self-reports isn't consciousness? What would distinguish them?
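A toy rendering of that suggestion, for concreteness (a sketch of my own; the class and names are invented): the self-report is generated unconditionally, and a listener gates only whether it is actually voiced.

```python
from typing import Optional

class ReportStream:
    """Toy model of the suggestion above: the 'feeling' is identified with
    the stream of potential self-reports; vocalization is a separate,
    gated step."""

    def __init__(self, listener_present: bool):
        self.listener_present = listener_present
        self.stream = []  # potential reports, voiced or not

    def perceive(self, stimulus: str) -> Optional[str]:
        report = f"I am now seeing {stimulus}."
        self.stream.append(report)  # generated whether or not anyone listens
        return report if self.listener_present else None  # voiced only if heard

alone = ReportStream(listener_present=False)
alone.perceive("a sunset")  # the report exists in the stream, never voiced
print(alone.stream)         # ['I am now seeing a sunset.']
```

On this model, the question is whether anything about experience is left over once the report stream and its gating are accounted for.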

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-08-17T00:39:09.913Z · LW(p) · GW(p)

Are you quite sure that the way we feel about ourselves isn't the same as the way we talk about ourselves (except for the inhibition of actual vocalization)? How would one show that the stream of potentially vocalized self-reports isn't consciousness? What would distinguish them?

I look around, and have visual experiences. These, it seems to me, are obviously different from any words I might say, or think but not say, about those experiences.

Replies from: soreff
comment by soreff · 2011-08-17T02:05:52.257Z · LW(p) · GW(p)

Good point! I might sketch a visual experience, but I don't ordinarily consider my visual experience to be a sequence of sketches, analogous to an ongoing interior monologue...

comment by lukeprog · 2011-08-15T07:47:36.132Z · LW(p) · GW(p)

No, it's not straightforward. I didn't want to get into all that, so I've removed that sentence from my post now.

comment by timtyler · 2011-08-15T07:37:53.832Z · LW(p) · GW(p)

"Okay. Now, think about the physical state of the entire universe one moment before you decided to say "Right" instead of something else, or instead of just nodding your head. If all those atoms, including the atoms in your brain, have to move to their next spot according to physical law, then could you have said anything else than what you did say in the next moment?"

The world can branch in a moment, so the answer should be "yes".

Replies from: lukeprog
comment by lukeprog · 2011-08-15T07:48:44.140Z · LW(p) · GW(p)

I don't think either of us understood many-worlds back then, so you can just interpret us as talking about a single branch.
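For concreteness, the single-branch premise can be caricatured in a few lines of Python (the update rule and names are invented for illustration): with a deterministic law, rerunning the identical prior state can never yield a different utterance.

```python
def physical_law(state: int) -> int:
    """Stand-in for deterministic physics: the next state is a fixed
    function of the current state (a simple linear congruential step)."""
    return (1103515245 * state + 12345) % (2**31)

def utterance(state: int) -> str:
    """What the toy agent 'says', read off deterministically from the state."""
    return "Right" if state % 2 == 0 else "(nods)"

before = 42  # the complete state of the toy 'universe' one moment before
assert utterance(physical_law(before)) == utterance(physical_law(before))
# Same prior state + same law -> no other next moment was possible.
```

Within the toy model, "could you have said anything else?" has a definite answer: no, not given that exact prior state.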

Replies from: Vladimir_Nesov, timtyler
comment by Vladimir_Nesov · 2011-08-15T13:21:37.635Z · LW(p) · GW(p)

Free will still only makes sense when there's uncertainty about what's going on (uncertainty that is resolved by your decisions). That is mostly interchangeable with there being many possible worlds (in your model); but if there are many (actual) worlds and no uncertainty (or only uncertainty independent of your decisions), free will doesn't happen.

In other words, many-worlds don't change anything about free will, in either direction. And to support free will, even a single branch must appear as a collection of possibilities that can't be ruled out.

Replies from: lukeprog, Peterdjones
comment by lukeprog · 2011-08-15T18:42:36.856Z · LW(p) · GW(p)

Many-worlds doesn't change anything about free will, but it does (under some interpretations) change the answer to the question "could you have said anything else than what you did say in the next moment?"

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-08-15T21:47:39.189Z · LW(p) · GW(p)

This sense of "could" seems mostly unrelated to the decision-theoretic "could", so the answer to the question changes only to the extent that there's equivocation between the two senses of the word.

comment by Peterdjones · 2011-08-16T17:35:41.747Z · LW(p) · GW(p)

OTOH, physical indeterminism does change something about free will.

Replies from: nshepperd, Vladimir_Nesov
comment by nshepperd · 2011-08-17T01:24:14.087Z · LW(p) · GW(p)

Yeah, it makes your actions random rather than predictable. A massive improvement!

comment by Vladimir_Nesov · 2011-08-16T18:13:30.565Z · LW(p) · GW(p)

Nope.

comment by timtyler · 2011-08-15T08:31:37.694Z · LW(p) · GW(p)

Well, OK. I hope you can see how this might not come across to the reader - and how it looks more as though you are trying to talk your lady friend around with some rather dubious facts.

Replies from: lukeprog
comment by lukeprog · 2011-08-15T08:55:07.799Z · LW(p) · GW(p)

You're right. I've added a parenthetical. Hopefully that will help.