Be careful with thought experiments

post by lukeprog · 2012-05-18T09:54:54.341Z · LW · GW · Legacy · 98 comments

Thagard (2012) contains a nicely compact passage on thought experiments:

Grisdale’s (2010) discussion of modern conceptions of water refutes a highly influential thought experiment that the meaning of water is largely a matter of reference to the world rather than mental representation. Putnam (1975) invited people to consider a planet, Twin Earth, that is a near duplicate of our own. The only difference is that on Twin Earth water is a more complicated substance XYZ rather than H2O. Water on Twin Earth is imagined to be indistinguishable from H2O, so people have the same mental representation of it. Nevertheless, according to Putnam, the meaning of the concept water on Twin Earth is different because it refers to XYZ rather than H2O. Putnam’s famous conclusion is that “meaning just ain’t in the head.”

The apparent conceivability of Twin Earth as identical to Earth except for the different constitution of water depends on ignorance of chemistry. As Grisdale (2010) documents, even a slight change in the chemical constitution of water produces dramatic changes in its effects. If normal hydrogen is replaced by different isotopes, deuterium or tritium, the water molecule markedly changes its chemical properties. Life would be impossible if H2O were replaced by heavy water, D2O or T2O; and compounds made of elements different from hydrogen and oxygen would be even more different in their properties. Hence Putnam’s thought experiment is scientifically incoherent: If water were not H2O, Twin Earth would not be at all like Earth. [See also Universal Fire. --Luke]

This incoherence should serve as a warning to philosophers who try to base theories on thought experiments, a practice I have criticized in relation to concepts of mind (Thagard, 2010a, ch. 2). Some philosophers have thought that the nonmaterial nature of consciousness is shown by their ability to imagine beings (zombies) who are physically just like people but who lack consciousness. It is entirely likely, however, that once the brain mechanisms that produce consciousness are better understood, it will become clear that zombies are as fanciful as Putnam’s XYZ. Just as imagining that water is XYZ is a sign only of ignorance of chemistry, imagining that consciousness is nonbiological may well turn out to reveal ignorance rather than some profound conceptual truth about the nature of mind. Of course, the hypothesis that consciousness is a brain process is not part of most people’s everyday concept of consciousness, but psychological concepts can progress just like ones in physics and chemistry. [See also the Zombies Sequence. --Luke]

98 comments


comment by bryjnar · 2012-05-18T10:33:26.978Z · LW(p) · GW(p)

I think this radically misunderstands what thought experiments are for. As I see it, the job of philosophy is to clear up our own conceptual confusions; that's not the sort of thing that ever could conflict with science!

(EDIT: I mean that it shouldn't conflict with science; if you do your philosophy wrong then you might end up conflicting.)

Besides, Putnam's thought experiment can be easily tweaked to get around that problem: suppose that on Twin Earth cats are in fact very sophisticated cat-imitating robots. Then a similar conclusion follows about the meaning of "cat". The point is that if X had in fact been Y, where Y is the same as X in all the respects which we use to pick out X, then words which currently refer to X would refer to Y in that situation. I think Putnam even specifies that we are to imagine that XYZ behaves chemically the same as H2O. Sure, that couldn't happen in our world; but the laws of physics might have turned out differently, and we ought to be able to conceptually deal with possibilities like this.

Replies from: lukeprog, Richard_Kennaway, JonathanLivengood, jsalvatier, Xachariah
comment by lukeprog · 2012-05-18T16:38:05.142Z · LW(p) · GW(p)

the job of philosophy is to clear up our own conceptual confusions; that's not the sort of thing that ever could conflict with science!

I think this is wrong, and one of the major mistakes of 20th century analytic philosophy.

Replies from: prase, Tyrrell_McAllister, bryjnar
comment by prase · 2012-05-18T18:30:16.811Z · LW(p) · GW(p)

What is wrong, that the job of philosophy is to clear up conceptual confusions, or that philosophy could not conflict with science?

comment by Tyrrell_McAllister · 2012-05-18T20:04:39.560Z · LW(p) · GW(p)

I think this is wrong, and one of the major mistakes of 20th century analytic philosophy.

It is still worthwhile to clear up conceptual confusions, even if the specific approach known as "conceptual analysis" is usually a mistake.

Replies from: lukeprog
comment by lukeprog · 2012-05-19T04:57:41.957Z · LW(p) · GW(p)

Right. It's very useful to clear up conceptual confusions. That's much of what The Sequences can teach people. What's wrong is the claim that attempts to clear up conceptual confusions couldn't conflict with science.

Replies from: bryjnar, Tyrrell_McAllister, Tyrrell_McAllister
comment by bryjnar · 2012-05-19T17:14:31.042Z · LW(p) · GW(p)

Hm. Perhaps you're right. Maybe I should have said that it shouldn't ever conflict with science. But I think that's because if you're coming into conflict with science you're doing your philosophy wrong, more than anything else.

Replies from: lukeprog
comment by lukeprog · 2012-05-19T18:21:47.815Z · LW(p) · GW(p)

Would you mind adding this clarification to your original comment above that was upvoted 22 times? :)

Replies from: bryjnar
comment by bryjnar · 2012-05-19T21:54:37.424Z · LW(p) · GW(p)

Sure; it is indeed ambiguous ;)

comment by Tyrrell_McAllister · 2012-05-19T06:19:58.555Z · LW(p) · GW(p)

Hmm. I guess I agree with that. That is, dominant scientific theories can be conceptually confused and need correction.

But would 20th century analytic philosophy have denied that? The opposite seems to me to be true. Analytic philosophers would justify their intrusions into the sciences by arguing that they were applying their philosophical acumen to identify conceptual confusions that the scientists hadn't noticed. (I'm thinking of Jerry Fodor's recent critique of the explanatory power of Darwinian natural selection, for example -- though that's from our own century.)

Replies from: lukeprog
comment by lukeprog · 2012-05-19T18:03:40.489Z · LW(p) · GW(p)

No, I don't think the better half of 20th century analytic philosophers would have denied that.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2012-05-19T18:36:49.025Z · LW(p) · GW(p)

Just to be clear, I think that analytic philosophers often should have been more humble when they barged in and started telling scientist how confused they were. Fodor's critique of NS would again be my go-to example of that.

Dennett states this point in typically strong terms in his review of Fodor's argument:

I cannot forebear noting, on a rather more serious note, that such ostentatiously unresearched ridicule as Fodor heaps on Darwinians here is both very rude and very risky to one’s reputation. (Remember Mary Midgley’s notoriously ignorant and arrogant review of The Selfish Gene? Fodor is vying to supplant her as World Champion in the Philosophers’ Self-inflicted Wound Competition.) Before other philosophers countenance it they might want to bear in mind that the reaction of most biologists to this sort of performance is apt to be–at best: “Well, we needn’t bother paying any attention to him. He’s just one of those philosophers playing games with words.” It may be fun, but it contributes to the disrespect that many non-philosophers have for our so-called discipline.

comment by bryjnar · 2012-05-18T20:41:32.787Z · LW(p) · GW(p)

I don't think I'm committed to the view of concepts that you're attacking. Concepts don't have to be some kind of neat things you can specify with necessary and sufficient conditions or anything. And TBH, I prefer to talk about languages. I don't think philosophy can get us out of any holes we didn't get ourselves into!

(FWIW, I do also reject the Quinean thesis that everything is continuous with science, which might be another part of your objection)

comment by Richard_Kennaway · 2012-05-18T12:47:33.143Z · LW(p) · GW(p)

As I see it, the job of philosophy is to clear up our own conceptual confusions; that's not the sort of thing that ever could conflict with science!

It certainly can, if the job is done badly.

Agreed that Grisdale's argument isn't very good; I have a hard time taking Putnam's argument seriously, or even the whole context in which he presented his thought experiment. Like a lot of philosophy, it reminds me of a bunch of maths noobs arguing long and futilely in a not-even-wrong manner over whether 0.999...=1.

We on Earth use "water" to refer to a certain substance; those on Twin Earth use "water" to refer to a different substance with many of the same properties; our scientists and theirs meet with samples of the respective substances, discover their constitutions are actually different, and henceforth change their terminology to make it clear, when it needs to be, which of the two substances is being referred to in any particular case.

There is no problem here to solve.

Replies from: bryjnar
comment by bryjnar · 2012-05-18T12:55:29.778Z · LW(p) · GW(p)

Well, sure, you can do philosophy wrong!

It sounds to me like you're expecting something from Putnam's argument that he isn't trying to give you. He's trying to clarify what's going on when we talk about words having "meaning". His conclusion is that the "meaning", insofar as it involves "referring" to something, depends on stuff outside the mind of the speaker. That may seem obvious in retrospect, but it's pretty tempting to think otherwise: as competent users of a language, we tend to feel like we know all there is to know about the meanings of our own words! That's the sort of position that Putnam is attacking: a position about that mysterious word "meaning".

EDIT: to clarify, I'm not necessarily in total agreement with Putnam, I just don't think that this is the way to refute him!

Replies from: Richard_Kennaway, shminux, gRR
comment by Richard_Kennaway · 2012-05-18T14:41:38.249Z · LW(p) · GW(p)

It still looks to me like arguing about a wrong question. We use words to communicate with each other, which requires that by and large we learn to use the same words in similar ways. There are interesting questions to ask about how we do this, but questions of a sort that require doing real work to discover answers. To philosophically ask, "Ah, but what sort of thing is a meaning? What are meanings? What is the process of referring?" is nugatory.

It is as if one were to look at the shapes that crystals grow into and ask not, "What mechanisms produce these shapes?" (a question answered in the laboratory, not the armchair, by discovering that atoms bind to each other in ways that form orderly lattices), but "What is a shape?"

Replies from: Wei_Dai, bryjnar
comment by Wei Dai (Wei_Dai) · 2012-05-18T19:20:45.361Z · LW(p) · GW(p)

It is as if one were to look at the shapes that crystals grow into and ask not, "What mechanisms produce these shapes?" (a question answered in the laboratory, not the armchair, by discovering that atoms bind to each other in ways that form orderly lattices), but "What is a shape?"

Why aren't both questions valuable to ask? The latter one must have contributed to the eventual formation of the mathematical field of geometry.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-05-18T20:24:40.414Z · LW(p) · GW(p)

I find it difficult to see any trace of the idea in Euclid. Circles and straight lines, yes, but any abstract idea of shape in general, if it can be read into geometry at all, would only be in the modern axiomatisation. And done by mathematicians finding actual theorems, not by philosophers assuming there is an actual thing behind our use of the word, that it is their task to discover.

Replies from: Wei_Dai, Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-05-19T16:36:35.464Z · LW(p) · GW(p)

And done by mathematicians finding actual theorems, not by philosophers assuming there is an actual thing behind our use of the word, that it is their task to discover.

I don't mean to pick just on you, but I think philosophy is often unfairly criticized for being less productive than other fields, when the problem is just that philosophy is damned hard, and whenever we do discover, via philosophy, some good method for solving a particular class of problems, then people no longer consider that class of problems to belong to the realm of philosophy, and forget that philosophy is what allowed us to get started in the first place. For example, without philosophy, how would one have known that proving theorems using logic might be a good way to understand things like circles, lines, and shapes (or even came up with the idea of "logic")?

(Which isn't to say that there might not be wrong ways to do philosophy. I just think we should cut philosophers some slack for doing things that turn out to be unproductive in retrospect, and appreciate more the genuine progress they have made.)

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-05-20T09:19:52.600Z · LW(p) · GW(p)

For example, without philosophy, how would one have known that proving theorems using logic might be a good way to understand things like circles, lines, and shapes (or even came up with the idea of "logic")?

How people like Euclid came up with the methods they did is, I suppose, lost in the mists of history. Were Euclid and his predecessors doing "philosophy"? That's just a definitional question.

The problem is that there is no such thing as philosophy. You cannot go and "do philosophy", in the way that you can "do mathematics" or "do skiing". There are only people thinking, some well and some badly. The less they get out of their armchairs, the more their activity is likely to be called philosophy, and in general, the less useful their activity is likely to be. Mathematics is the only exception, and only superficially, because mathematical objects are clearly outside your head, just as much as physical ones are. You bang up against them, in a way that never happens in philosophy.

When philosophy works, it isn't philosophy any more, so the study of philosophy is the study of what didn't work. It's a subject defined by negation, like the biology of non-elephants. It's like a small town in which you cannot achieve anything of substance except by leaving it. Philosophers are the ones who stay there all their lives.

I realise that I'm doing whatever the opposite is of cutting them some slack. Maybe trussing them up and dumping them in the trash.

I just think we should cut philosophers some slack for doing things that turn out to be unproductive in retrospect, and appreciate more the genuine progress they have made.

What has philosophy ever done for us? :-) I just googled that exact phrase, and the same without the "ever", but none of the hits gave a satisfactory defence. In fact, I turned up this quote from the philosopher Austin, characterising philosophy much as I did above:

"It's the dumping ground for all the leftovers from other sciences, where everything turns up which we don't know quite how to take. As soon as someone discovers a reputable and reliable method of handling some portion of these residual problems, a new science is set up, which tends to break away from philosophy."

Responding to the sibling comment here as it's one train of thought:

How might one know, a priori, that "What is a circle?" is a valid question to ask, but not "What is a shape?"

By knowing this without knowing why. That's all that a priori knowledge is: stuff you know without knowing why. Or to make the levels of abstraction more explicit, a priori beliefs are beliefs you have without knowing why. Once you start thinking about them, asking why you believe something and finding reasons to accept or reject it, it's no longer a priori. The way to discover whether either of those questions is sensible is to try answering them and see where that leads you.

That activity is called "philosophy", but only until the process gets traction and goes somewhere. Then it's something else.

Replies from: Vladimir_Nesov, Emile
comment by Vladimir_Nesov · 2012-05-20T10:09:27.741Z · LW(p) · GW(p)

That's all that a priori knowledge is: stuff you know without knowing why. Or to make the levels of abstraction more explicit, a priori beliefs are beliefs you have without knowing why. Once you start thinking about them, asking why you believe something and finding reasons to accept or reject it, it's no longer a priori.

This is a nice concise statement of the idea that didn't easily get across through the posts A Priori and How to Convince Me That 2 + 2 = 3.

comment by Emile · 2012-05-21T12:11:27.772Z · LW(p) · GW(p)

The problem is that there is no such thing as philosophy. You cannot go and "do philosophy", in the way that you can "do mathematics" or "do skiing". There are only people thinking, some well and some badly. The less they get out of their armchairs, the more their activity is likely to be called philosophy, and in general, the less useful their activity is likely to be. [...]

When philosophy works, it isn't philosophy any more, so the study of philosophy is the study of what didn't work. It's a subject defined by negation, like the biology of non-elephants.

I think there are useful kinds of thought that are best categorized as "philosophy" (even if it's just "philosophy of the gaps", i.e. not clear enough to fall into an existing field); mostly around the area of how we should adapt our behavior or values in light of learning about game theory, evolutionary biology, neuroscience etc. - for example, "We are the product of evolution, therefore it's every man for himself" is the product of bad philosophy, and should be fixed with better philosophy rather than with arguments from evolutionary biology or sociology.

A lot of what we discuss here on LessWrong falls more easily under the heading of "philosophy" than that of any other specific field.

(Note that whether most academic philosophers are producing any valuable intellectual contributions is a different question, I'm only arguing "some valuable contributions are philosophy")

comment by Wei Dai (Wei_Dai) · 2012-05-18T23:08:42.295Z · LW(p) · GW(p)

Circles and straight lines, yes, but any abstract idea of shape in general

How might one know, a priori, that "What is a circle?" is a valid question to ask, but not "What is a shape?"

comment by bryjnar · 2012-05-18T15:18:16.254Z · LW(p) · GW(p)

Well, we seem to have this word, "meaning", that pops up a lot and that lots of people seem to think is pretty interesting, and questions of whether people "mean" the same thing as other people do turn up quite often. That said, it's often a pretty confusing topic. So it seems worthwhile to try and think about what's going on with the word "meaning" when people use it, and if possible, clarify it. If you're just totally uninterested in that, fine. Or you can just ditch the concept of "meaning" altogether, but good luck talking to anyone else about interesting stuff in that case!

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-05-18T15:22:11.131Z · LW(p) · GW(p)

Well, I did just post my thinking about that, and I feel like I'm the only person pointing out that Putnam and the rest are arguing the acoustics of unheard falling trees. To me, the issue is dissolved so thoroughly that there isn't a question left, other than the real questions of what's going on in our brains when we talk.

Replies from: bryjnar, prase
comment by bryjnar · 2012-05-18T20:51:35.075Z · LW(p) · GW(p)

Okay, I was kind of interpreting you as just not being interested in these kinds of question. I agree that some questions about "meaning" don't go anywhere and need to be dissolved, but I don't think that all such questions can be dissolved. If you don't think that any such questions are legitimate, then obviously this will look like a total waste of time to you.

comment by prase · 2012-05-18T18:22:45.689Z · LW(p) · GW(p)

I feel like I'm the only person pointing out that Putnam and the rest are arguing the acoustics of unheard falling trees.

One person pointing it out suffices. (I tend to agree with your position.)

comment by Shmi (shminux) · 2012-05-18T14:47:54.015Z · LW(p) · GW(p)

His conclusion is that the "meaning", insofar as it involves "referring" to something, depends on stuff outside the mind of the speaker.

EY discussed this in depth in "The Quotation is not the Referent".

comment by gRR · 2012-05-18T15:00:50.480Z · LW(p) · GW(p)

the "meaning", insofar as it involves "referring" to something, depends on stuff outside the mind of the speaker. That may seem obvious in retrospect, but it's pretty tempting to think otherwise

The idea produces non-obvious results if you apply it to, for example, mathematical concepts. They certainly refer to something, which is therefore outside the mind. Conclusion: Hylaean Theoric World.

Replies from: bryjnar, Nornagest
comment by bryjnar · 2012-05-18T15:12:48.613Z · LW(p) · GW(p)

Being convinced by Putnam on this front doesn't mean that you have to think that everything refers! There are plenty of accounts of what's going on with mathematics that don't have mathematical terms referring to floaty mathematical entities. Besides, Putnam's point isn't that the referent of a term is going to be outside your head; that's pretty uncontroversial, as long as you think we're talking about something outside your head. What he argues is that this means that the meaning of a term depends on stuff outside your head, which is a bit different.

Replies from: gRR
comment by gRR · 2012-05-18T15:38:36.975Z · LW(p) · GW(p)

There are plenty of accounts of what's going on with mathematics that don't have mathematical terms referring to floaty mathematical entities

Could you list the one(s) that you find convincing? (even if this is somewhat off-topic in this thread...)

What he argues is that this means that the meaning of a term depends on stuff outside your head, which is a bit different

That is, IIUC, the "meaning" of a concept is not completely defined by its place within the mind's conceptual structure. This seems correct, as the "meaning" is supposed to be about the correspondence between the map and the territory, and not about some topological property of the map.

Replies from: bryjnar
comment by bryjnar · 2012-05-18T20:48:44.724Z · LW(p) · GW(p)

Have a look here for a reasonable overview of philosophy of maths. Any kind of formalism or nominalism won't have floaty mathematical entities - in the former case you're talking about concrete symbols, and in the latter case about the physical world in some way (these are broad categories, so I'm being vague).

Personally, I think a kind of logical modal structuralism is on the right track. That would claim that when you make a mathematical statement, you're really saying: "It is a necessary logical truth that any system which satisfied my axioms would also satisfy this conclusion."

So if you say "2+2 = 4", you're actually saying that if there were a system that behaved like the natural numbers (which is logically possible, so long as the axioms are consistent), then in that system two plus two would equal four.
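[The modal-structuralist paraphrase sketched here can be written out schematically. The following is an editorial illustration of the general shape of Hellman's translation, not his exact notation; PA stands for the (second-order) Peano axioms and the box for logical necessity. --Ed.]

```latex
\Box\,\forall S\;\bigl(S \models \mathrm{PA} \;\rightarrow\; S \models (2 + 2 = 4)\bigr)
```

That is: necessarily, any structure $S$ satisfying the axioms also satisfies the conclusion, with no commitment to any actually existing system of numbers.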

See Hellman's "Mathematics Without Numbers" for the classic defense of this kind of position.

Replies from: gRR
comment by gRR · 2012-05-19T01:37:42.910Z · LW(p) · GW(p)

Thanks for the answer! But I am still confused regarding the ontological status of "2" under many of the philosophical positions. Or, better yet, the ontological status of the real numbers field R. Formalism and platonism are easy: under formalism, R is a symbol that has no referent. Under platonism, R exists in the HTW. If I understand your preferred position correctly, it says: "any system that satisfies axioms of R also satisfies the various theorems about it". But, assuming the universe is finite or discrete, there is no physical system that satisfies axioms of R. Does it mean your position reduces to formalism then?

Replies from: bryjnar
comment by bryjnar · 2012-05-19T17:22:26.009Z · LW(p) · GW(p)

There's no actual system that satisfies the axioms of the reals, but there (logically) could be. If you like, you could say that there is a "possible system" that satisfies those axioms (as long as they're not contradictory!).

The real answer is that talk of numbers as entities can be thought of as syntactic sugar for saying that certain logical implications hold. It's somewhat revisionary, in that that's not what people think that they are doing, and people talked about numbers long before they knew of any axiomatizations for them, but if you think about it it's pretty clear why those ways of talking would have worked, even if people hadn't quite figured out the right way to think about it yet.

If you like, you can think of it as saying: "Numbers don't exist as floaty entities, so strictly speaking normal number talk is all wrong. However, [facts about logical implications] are true, and there's a pretty clear truth-preserving mapping between the two, so perhaps this is what people were trying to get at."

comment by Nornagest · 2012-05-18T17:01:50.580Z · LW(p) · GW(p)

Seems to me that you can dodge the Platonic implications (that Anathem was riffing on). You can talk about relations between objects, which depend on objects outside the mind of the speaker but have no independent physical existence in themselves; you need not only a shared referent but also some shared inference, but that's still quite achievable without needing to invoke some Form of, say, mathematical associativeness.

comment by JonathanLivengood · 2012-05-19T05:47:08.270Z · LW(p) · GW(p)

The robot-cat example is, in fact, one of Putnam's examples. See page 162.

Replies from: bryjnar
comment by bryjnar · 2012-05-19T16:55:56.265Z · LW(p) · GW(p)

Indeed, that's where I stole it from ;)

comment by jsalvatier · 2012-05-18T17:30:58.257Z · LW(p) · GW(p)

"that's not the sort of thing that ever could conflict with science!" do you mean to include psychology in 'science' if so, why would you care about it then?

Replies from: bryjnar
comment by bryjnar · 2012-05-18T20:56:14.809Z · LW(p) · GW(p)

Psychology could (and often does!) show that the way we think about our own minds is just unhelpful in some way: actually, we work differently. I think the job of philosophy is to clarify what we're actually doing when we talk about our minds, say, regardless of whether that turns out to be a sensible way to talk about them. Psychology might then counsel that we ditch that way of talking! Sometimes we might get to that conclusion from within philosophy; e.g. Parfit's conclusion that our notion of personal identity is just pretty incoherent.

Replies from: jsalvatier
comment by jsalvatier · 2012-05-18T22:23:46.673Z · LW(p) · GW(p)

I meant to suggest that any philosophy which could never conflict with science is immediately suspicious unless you mean something relatively narrow by 'science' (for example, by excluding psychology). If you claim that something could never be disproven by science, that's pretty close to saying 'it won't ever affect your decisions', in which case, why care?

Replies from: bryjnar
comment by bryjnar · 2012-05-19T17:11:17.849Z · LW(p) · GW(p)

I think of philosophy as more like trying to fix the software that your brain runs on. Which includes, for example, how you categorize the outside world, and also your own model of yourself. That sounds like it ought to be the stamping ground of cognitive science, but we actually have a nice, high-level access to this kind of thing that doesn't involve thinking about neurons at all: language. So we can work at that level, instead (or as well).

A lot of the stuff in the Sequences, for example, falls under this: it's an investigation into what the hell is going on with our mindware, (mostly) done at the high level of language.

(Disclaimer: Philosophers differ a lot about what they think philosophy does/should do. Some of them definitely do think that it can tell you stuff about the world that science can't, or that it can overrule it, or any number of crazy things!)

comment by Xachariah · 2012-05-18T20:08:25.823Z · LW(p) · GW(p)

Putnam's thought experiment can be easily tweaked to get around that problem: suppose that on Twin Earth cats are in fact very sophisticated cat-imitating robots.

That would be an even weirder version of Earth. Well, less weird because it wouldn't be a barren, waterless hellscape, but easier for my mind to paint.

A universe where cats were replaced with cat-imitating robots would be amazing for humans. Instead of the bronze age, we would hunt cats for their strong skeletons to use as tools and weapons. Should the skeletons be made instead of brittle epoxy of some kind, we would be able to study cat factories and bootstrap our mechanical knowledge. Should cats be self-replicating with nano-machines, we would employ them as guard animals for crops, bootstrapping agriculture; an artificial animal which cannot be eaten would have caused other animals to evolve not to mess with them. Should cats, somehow, manage to turn themselves edible after they die, we would still be able to look at their construction and know that they were not crafted by evolution; humanity would know that there was another race out there in the stars and that artificial life was possible. Twin-Eliezer could point to cats and say, "see, we can do this," and all of humanity would be able to agree and put huge sums of money into AI research.

And if they are cat-robots who are indeed made of bone instead of metal, who reproduce just like cats do, who have exactly the same chemical composition as cats, and evolved here on earth in the exact same way cats do... then they're just cats. The concept of identical-robot-cats is no different than the worthless concept of philosophical zombies. That's the whole point of the quote.

Replies from: bryjnar, DuncanS
comment by bryjnar · 2012-05-18T20:36:41.364Z · LW(p) · GW(p)

I feel like you're fighting the hypothetical a bit here.

Perhaps "cat" was a bad idea: we know too much about cats. Pick something where there are some properties that we don't know about yet; then consider the situation where they are as the actually are, and where they're different. The two would be indistinguishable to us, but that doesn't mean that no experiment could ever tell them apart. See also asparisi's comment.

Replies from: Xachariah
comment by Xachariah · 2012-05-18T22:35:40.412Z · LW(p) · GW(p)

I am most assuredly fighting the hypothetical (I'm familiar with and disagree with that link). As far as I can tell, that's what Thagard is doing too.

I'm reminded of a rebuttal to that post, about how hypotheticals are used as a trap. Putnam intentionally chose to create a scientifically incoherent world. He could have chosen a jar of acid instead of an incoherent twin-earth, but he didn't. He wanted the sort of confusion that could only come from an incoherent universe (luke links that in his quote).

I think that's Thagard's point. As he notes: these types of thought experiments are only expressions of our ignorance, and not deep insights about the mind.

Replies from: pragmatist, bryjnar
comment by pragmatist · 2012-05-18T23:00:33.746Z · LW(p) · GW(p)

What mileage do you think Putnam is getting here from creating this confusion? Do you think the point he's trying to make hinges on the incoherence of the world he's constructed?

comment by bryjnar · 2012-05-19T17:02:28.126Z · LW(p) · GW(p)

I'm not quite sure why it matters that the world Putnam creates is "scientifically incoherent" - which I take to mean it conflicts with our current understanding of science?

As far as we know, the facts of science could have been different; hell, we could still be wrong about the ones we currently think we know. So our language ought to be able to cope with situations where the scientific facts are different than they actually are. It doesn't matter that Putnam's scenario can't happen in this world: it could have happened, and thinking about what we would want to say in that situation can be illuminating. That's all that's being claimed here.

I wonder if the problem is referring to these kinds of things as "thought experiments". They're not really experiments. Imagine a non-native speaker asking you about the usage of a word, who concocts an unlikely (or maybe even impossible scenario) and then asks you whether the word would apply in that situation. That's more like what's going on, and it doesn't bear a lot of resemblance to a scientific experiment!

comment by DuncanS · 2012-05-18T21:04:05.650Z · LW(p) · GW(p)

Well you could go for something much more subtle, like using sugar of the opposite handedness on the other 'Earth'. I don't think it really changes the argument much whether the distinction is subtle or not.

comment by komponisto · 2012-05-18T19:10:45.829Z · LW(p) · GW(p)

I've always thought this argument of Putnam's was dead wrong. It is about the most blatant and explicit instance of the Mind Projection Fallacy I know.

The real problem for Putnam is not his theory of chemistry; it is his theory of language. Like so many before and after him, Putnam thinks of meaning as being a kind of correspondence between words and either things or concepts; and in this paper he tries to show that the correspondence is to things rather than concepts. The error is in the assumption that words (and languages) have a sufficiently abstract existence to participate in such correspondences in the first place. (We can of course draw any correspondence we like, but it need not represent any objective fact about the territory.)

This is insufficiently reductionist. Language is nothing more than the human superpower of vibratory telepathy. If you say the word "chair", this physical action of yours causes a certain pattern of neurons to be stimulated in my brain, which bears a similarity relationship to a pattern of neurons in your brain. For philosophical purposes, there is no fact of the matter about whether the pattern of neurons being stimulated in my brain is "correct" or not; there are only greater and lesser degrees of similarity between the stimulation patterns occurring in my brain when I hear the word and those occurring in yours when you say it.

The point that Putnam was trying to make, I think, was this: our mental concepts are causally related to things in the real world. (He may also, ironically, have been trying to warn against the Mind Projection Fallacy.) Unfortunately, like so many 20th-century analytic philosophers, he confused matters by introducing language into the picture; evidently due to a mistaken Whorfian belief that language is so fundamental to human thought that any discussion of human concepts must be a discussion about language.

(Incidentally, one of the things that most impressed me about Eliezer's Sequences was that he seemed to have something close to the correct theory of language, which is exceedingly rare.)

The real danger of thought experiments, including this one of Putnam's, is that fundamental assumptions may be wrong.

Replies from: Tyrrell_McAllister, Wei_Dai, bryjnar
comment by Tyrrell_McAllister · 2012-05-18T20:18:04.676Z · LW(p) · GW(p)

(We can of course draw any correspondence we like, but it need not represent any objective fact about the territory.)

I'm not sure how you can appeal to map-territory talk if you do not allow language to refer to things. All the maps that we can share with one another are made of language. You apparently don't believe that the word "Chicago" on a literal map refers to the physical city with that name. How then do you understand the map-territory metaphor to work? And, without the conventional "referentialist" understanding of language (including literal and metaphorical maps and territories), how do you even state the problem of the Mind-Projection Fallacy?

If you say the word "chair", this physical action of yours causes a certain pattern of neurons to be stimulated in my brain, which bears a similarity relationship to a pattern of neurons in your brain.

It is hard for me to make sense of this paragraph when I gather that its writer doesn't believe that he is referring to any actual neurons when he tells this story about what "neurons" are doing.

For philosophical purposes, there is no fact of the matter about whether the pattern of neurons being stimulated in my brain is "correct" or not; there are only greater and lesser degrees of similarity between the stimulation patterns occurring in my brain when I hear the word and those occurring in yours when you say it.

Suppose that you attempt an arithmetic computation in your head, and you do not communicate this fact with anyone else. Is it at all meaningful to ask whether your arithmetic computation was correct?

(Incidentally, one of the things that most impressed me about Eliezer's Sequences was that he seemed to have something close to the correct theory of language, which is exceedingly rare.)

Eliezer cites Putnam's XYZ argument approvingly in Heat vs. Motion. A quote:

I should note, in fairness to philosophers, that there are philosophers who have said these things. For example, Hilary Putnam, writing on the "Twin Earth" thought experiment:

Once we have discovered that water (in the actual world) is H20, nothing counts as a possible world in which water isn't H20. In particular, if a "logically possible" statement is one that holds in some "logically possible world", it isn't logically possible that water isn't H20.

On the other hand, we can perfectly well imagine having experiences that would convince us (and that would make it rational to believe that) water isn't H20. In that sense, it is conceivable that water isn't H20. It is conceivable but it isn't logically possible! Conceivability is no proof of logical possibility.

See also Reductive Reference:

Hilary Putnam's "Twin Earth" thought experiment, where water is not H20 but some strange other substance denoted XYZ, otherwise behaving much like water, and the subsequent philosophical debate, helps to highlight this issue. "Snow" doesn't have a logical definition known to us—it's more like an empirically determined pointer to a logical definition. This is true even if you believe that snow is ice crystals is low-temperature tiled water molecules. The water molecules are made of quarks. What if quarks turn out to be made of something else? What is a snowflake, then? You don't know—but it's still a snowflake, not a fire hydrant.

ETA: The Heat vs. Motion post has a pretty explicit statement of Putnam's thesis in Eliezer's own words:

The words "heat" and "kinetic energy" can be said to "refer to" the same thing, even before we know how heat reduces to motion, in the sense that we don't know yet what the reference is, but the references are in fact the same. You might imagine an Idealized Omniscient Science Interpreter that would give the same output when we typed in "heat" and "kinetic energy" on the command line.

I talk about the Science Interpreter to emphasize that, to dereference the pointer, you've got to step outside cognition. The end result of the dereference is something out there in reality, not in anyone's mind. So you can say "real referent" or "actual referent", but you can't evaluate the words locally, from the inside of your own head.

(Bolding added.) Wouldn't this be an example of "think[ing] of meaning as being a kind of correspondence between words and either things or concepts"?

comment by Wei Dai (Wei_Dai) · 2012-05-18T20:09:32.783Z · LW(p) · GW(p)

You've probably thought more about this topic than I have, but it seems to me that words can at least be approximated as abstract referential entities, instead of just seen as a means of causing neuron stimulation in others. Using Putnam's proposed theory of meaning, I can build a robot that would bring me a biological-cat when I say "please bring me a cat", and bring the twin-Earth me a robot-cat when he says "please bring me a cat", without having to make the robot simulate a human's neural response to the acoustic vibration "cat". That seems enough to put Putnam outside the category of "dead wrong", as opposed to, perhaps, "claiming too much"?
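The robot described above can be sketched as a toy resolver. On an externalist theory, the referent of "cat" is fixed not by the speaker's internal state (identical on both planets) but by the speaker's environment. This is only a minimal illustration under assumed names: the environment labels, the lookup table, and the `fetch` helper are all hypothetical, not anything from Putnam.

```python
# Toy sketch of an externalist reference-resolver: the same utterance,
# produced by speakers with identical internal states, resolves to
# different referents depending on the speaker's environment.

# The word-to-referent map is indexed by environment, not by the
# speaker's mental state (which is the same on both planets).
REFERENCE = {
    "Earth":      {"cat": "biological-cat", "water": "H2O"},
    "Twin Earth": {"cat": "robot-cat",      "water": "XYZ"},
}

def fetch(utterance: str, environment: str) -> str:
    """Return what the robot should bring, given where the speaker lives."""
    word = utterance.removeprefix("please bring me a ").strip()
    return REFERENCE[environment][word]

print(fetch("please bring me a cat", "Earth"))       # biological-cat
print(fetch("please bring me a cat", "Twin Earth"))  # robot-cat
```

Note that nothing in `fetch` inspects the speaker's "neural response" to the word; the environment argument does all the work, which is the sense in which this approximates Putnam's proposal.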

Replies from: None
comment by [deleted] · 2012-05-19T02:14:20.837Z · LW(p) · GW(p)

I may be a bit in over my head here, but I also don't see a strong distinction between saying "Assume on Twin Earth that water is XYZ" and saying "Omega creates a world where..." Isn't the point of a thought experiment to run with the hypothetical and trace out its implications? Yes, care must be taken not to over-infer from the result to a real system that may not match it, but how is this news? I seem to recall some folks (m'self included) finding that squicky with regard to "Torture vs Dust Specks" -- if you stop resisting the question and just do the math, the answer is obvious enough, but that doesn't imply one believes that the scenario maps to a realizable condition.

I may just be confused here, but superficially it looks like a failure to apply the principle of "stop resisting the hypothetical" evenly.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-05-19T19:14:49.088Z · LW(p) · GW(p)

I do worry that thought experiments involving Omega can lead decision theory research down wrong paths (for example by giving people misleading intuitions), and try to make sure the ones I create or pay attention to are not just arbitrary thought experiments but illuminate some aspect of real world decision making problems that we (or an AI) might face. Unfortunately, this relies largely on intuition and it's hard to say what exactly is a valid or useful thought experiment and what isn't, except maybe in retrospect.

comment by bryjnar · 2012-05-18T21:13:16.912Z · LW(p) · GW(p)

That's an interesting solution to the problem of translation (how do I know if I've got the meanings of the words right?) you've got there: just measure what's going on in the respective participants' brains! ;)

There are two reasons why you might not want to work at this level. Firstly, thinking about translation again, if I were translating the language of an alien species, their brain-equivalent would probably be sufficiently different that looking for neurological similarities would be hopeless. Secondly, it's just easier to work at a higher level of abstraction, and it seems like we've got at least part of a system for doing that already: you can see it in action when people actually do talk about meanings etc. Perhaps it's worth trying to make that work before we pronounce the whole affair worthless?

comment by asparisi · 2012-05-18T18:30:06.746Z · LW(p) · GW(p)

Putnam perhaps chose poor examples, but his thought-experiment works under any situation where we have limited knowledge.

Instead of Twin Earth, say that I have a jar of clear liquid on my desk. Working off of just that information (and the information that much of the clear liquid that humans keep around is water), people start calling the thing on my desk a "Jar of Water." That is, until someone knocks it over and it starts to eat through the material on my desk: obviously, that wasn't water.

Putnam doesn't think that XYZ will look like water in every circumstance: his thought-experiment includes the idea that we can distinguish between XYZ and water with, say, an electron microscope. So obviously there are some properties of XYZ that are not the same as water, or else they really would look the same under every possible circumstance.

The difference (which is where some philosophers make the mistake) is when you assume that the "thought-experiment" stuff looks like the "real" stuff in every possible circumstance. If Putnam had said that the difference between H2O and XYZ was purely epiphenomenal or something like that, he'd be obviously wrong. For instance, if we looked at XYZ and it "fooled" us into thinking it was H2O (say, if we broke apart XYZ and got a 2:1 ratio of hydrogen to oxygen and no other parts) then Putnam's argument wouldn't hold. (This is where p-zombies fail: it is stipulated that there is no experiment that can tell the difference.)

Putnam's main point was that we can be mistaken about what a thing is. Moreover, that when we can have two things (call them A and B) that we think are of the same type that we can not only be mistaken that A and B are of the same type, but that A could fit the type and B might not.

If this seems incredibly basic... it is. People make a big deal about it because prior to Putnam (and sometimes afterward) philosophers were saying crazy things like "the meanings in our heads don't have to refer to anything in the world," which essentially translates to "I can make a word mean anything I want!"

I agree with this to the extent that we shouldn't make the mistake of thinking that just because we have a model of something in our head, our model corresponds to the real world. It's even stickier, because when a model doesn't conform we often keep the words around because they can be useful descriptions of the new thing we've found. That can create confusion, especially during a period of transition. (Imagine someone saying that "Water cannot be H2O, because it is necessarily an Aristotelian element.") But thought experiments are very, very useful, since all a "thought experiment" really is, is using the information already in your head and asking, "Given what I already know, what do I think would happen in this circumstance?"

Replies from: bryjnar
comment by bryjnar · 2012-05-18T21:03:49.114Z · LW(p) · GW(p)

Putnam's main point was that we can be mistaken about what a thing is. Moreover, that when we can have two things (call them A and B) that we think are of the same type that we can not only be mistaken that A and B are of the same type, but that A could fit the type and B might not.

I think he's making a slightly different point. His point is that the reference of a term, which determines whether, say, the sentence "Water is H2O" is true or not, depends on the environment in which that term came to be used. And this could be true even for speakers who were otherwise molecule-for-molecule identical. So just looking inside your head doesn't tell me enough to figure out whether your utterances of "Water is H2O" are true or not: I need to find out what kind of stuff was watery around you when you learnt that term! Which is the surprising bit.

Replies from: pragmatist
comment by pragmatist · 2012-05-18T22:51:53.370Z · LW(p) · GW(p)

Yeah, this is basically right. Putnam was defending externalism about mental content, the idea that the content of our mental representations isn't fully determined by intrinsic facts about our brains. The twin earth thought experiment was meant to be an illustration of how two people could be in identical brain states yet be representing different things. In order to fully determine the content of my mental states, you need to take account of my environment and the way in which I'm related to it.

Another crazy thought experiment meant to illustrate semantic externalism: Suppose a flash of lightning strikes a distant swamp and by coincidence leads to the creation of a swampman who is a molecule-for-molecule duplicate of me. By hypothesis, the swampman has the exact same brain states as I do. But does the swampman have the same beliefs as I do? Semantic externalists would say no. I have a belief that my mother is an editor. Swampman cannot have this belief because there is no appropriate causal connection between himself and my mother. Sure he has the same brain state that instantiates this belief in my head. But what gives the belief in my head its content, what makes it a belief about my mother, is the causal history of this brain state, a causal history swampman doesn't share.

Putnam was not really arguing against the view that "the meanings in our heads don't have to refer to anything in the world". He was arguing against what he called "magic theories of reference", theories of reference according to which the content of a representation is intrinsic to that representation. For instance, a magic theory of reference would say that swampman does have a belief about my mother, since his brain state is identical to mine. Or if an ant just happens to walk around on a beach in such a manner that it produces a trail we would recognize as a likeness of Winston Churchill, then that is in fact a representation of Churchill, irrespective of the fact that the ant has never heard of Churchill and does not have the cognitive wherewithal to intentionally represent him even if it had heard of him.

comment by thomblake · 2012-05-18T14:53:54.956Z · LW(p) · GW(p)

I'm not sure that showing that XYZ can't make something water-like is any more helpful than just pointing out that there isn't actually a Twin Earth. Yes, it was supposed to be a counterfactual thought experiment. Oh noes, the counterfactual doesn't actually obtain!

And showing that particular chemical compounds don't make water, doesn't entail that there is no XYZ that makes water.

And as army1987 pointed out, it could have been "cat" instead of "water".

comment by JonathanLivengood · 2012-05-19T06:59:39.898Z · LW(p) · GW(p)

I'm going to agree with those saying that Thagard is missing the point of Putnam's thought experiment. Below, I will pick on Thagard's claim that Grisdale has refuted Putnam's thought experiment. For anyone interested, Putnam's 1975 article "The Meaning of "Meaning"", and Grisdale's thesis are both available as PDFs.

Thagard says that Grisdale has refuted Putnam's thought experiment. What would it mean to refute a thought experiment? I would have guessed that Thagard meant the conclusion or lesson drawn from the thought experiment is wrong. But Grisdale himself says that Putnam's conclusion was correct, even though it was "misleading."

Incidentally, I'm not sure how a thought experiment gets the right answer (or gets the thought-experimenter to the right answer) and yet counts as misleading ... but leave that for now.

Back to refuting a thought experiment. What does Grisdale actually do? As I read him, he lays out some reasons why Putnam's thought experiment cannot be realized in the actual world. Now, if the success of Putnam's thought experiment depended on its being realizable in the actual world, then Grisdale's thesis would be a heavy blow to that thought experiment. (Though not necessarily to Putnam's theory, as Grisdale concedes.)

But, the success of Putnam's thought experiment does not depend on its being realizable in the actual world. In this way, it is similar to Newcomb's problem and to various trolley problems. If I were able to push a fat man off of a bridge, that fat man would not be large enough to stop a runaway trolley capable of killing five other people; nonetheless, the thought experiment is useful and interesting. The thought experiment only serves to get you to imagine vividly the following abstract circumstance: there are two communities, A and B; there are two substances, a and b, that are superficially similar but have different micro-structures; the two communities use a and b for the same sorts of things; and when any member of community A refers to a, he or she has the same internal mental state that a member of B has when he or she refers to b. Now, we are asked whether the members of A and B mean the same thing by their respective words for a and b -- whatever that word or those words happen to be.

If a more realistic, concrete case helps, consider jade, which Putnam briefly talks about in his 1975 paper. (What Putnam says about jade strikes me as right but also strikes me as exactly the opposite of what he should have said on the basis of his own theory!) Jade is not a single kind of stuff. Rather, there are two different minerals, jadeite and nephrite, that go by the name "jade" because they have a bunch of very similar macroscopic properties, even though they have very different micro-structures. We could imagine two communities -- one that has only ever seen jadeite and one that has only ever seen nephrite. Now, suppose that Billy asks for some jadeite at his local market, and Suzy asks for some nephrite at her local market. And suppose that their concepts are identical. They are both thinking of a hard, waxy, precious stone with a greenish-white color. Do their words have the same meaning (for them)?

According to Putnam's theory, the answer should be NO. If Billy were to discover a piece of nephrite and say that it was jadeite, he would be wrong. Why? In part because the micro-structure of the stuff that his community baptized as "jadeite" is nothing like the micro-structure of nephrite.

Now, I will not deny that for some thought experiments, the hypothetical may be usefully resisted. The Chinese Room is one such thought experiment. And, I agree that one could refute a thought experiment by disputing its premisses. For example, by showing that if the premisses are as they must be, the conclusion or lesson of the thought experiment changes. But when that works, it is because the premisses that are being disputed make a difference for the conclusion reached. In the case of Putnam's H2O / XYZ thought experiment, the physical facts just don't matter.

comment by thelittledoctor · 2012-05-18T16:11:43.965Z · LW(p) · GW(p)

What a silly thought experiment. The fact that two people use one word to refer to two different things (which superficially appear similar) doesn't mean anything except that the language is imperfect.

Case in point: Uses of the word "love".

comment by A1987dM (army1987) · 2012-05-18T14:06:32.193Z · LW(p) · GW(p)

Pointing out that biochemistry couldn't be the same if water was different sounds like deliberately missing the point of Putnam's experiment. Suppose a planet like Earth, but where most people are left-handed, have their heart in the right-hand side of their body, wear wedding rings on their right hand, most live in the hemisphere where shadows move counterclockwise, most screws are left-handed, conservative political parties traditionally sit in the left-hand side of assemblies, etc., etc., and they speak a language identical to English except that left means ‘right’, right means ‘left’, clockwise means ‘counterclockwise’, etc. (Throughout this post, I use upright type for actual English and italic type for the language of that planet.) BTW, they are made of the same kind of matter as us, so that in their language lepton does mean ‘lepton’, antilepton means ‘antilepton’, etc. Now, when someone on that planet (who's not a particle physicist) says left he's thinking the same thoughts as someone on this planet saying ‘left’, but it doesn't follow that left in their language means the same as ‘left’ in our language (because the statement in their language only right-handed leptons and left-handed antileptons participate in the weak interaction is true, but the statement in our language “only right-handed leptons and left-handed antileptons participate in the weak interaction” is false)... or does it?

Bar pbhyq nethr gung vs fbzrbar vf abg n cnegvpyr culfvpvfg gurve ynathntr qbrfa'g npghnyyl unir n jbeq sbe yrcgba rgp. Ohg vs jr tb gbb sne qbja guvf ebnq, gur vzcyvpngvba vf gung vs V gryy fbzrbar gb ghea yrsg, gura vs gurl qba'g xabj nobhg cnevgl ivbyngvba va jrnx vagrenpgvbaf jung V zrna ol ‘yrsg’ vf abg jung gurl zrna ol ‘yrsg’, juvpu frrzf hafngvfsnpgbel gb zr orpnhfr nsgre nyy gurl qb raq hc gheavat jurer V jnagrq gurz gb ghea.

Replies from: prase
comment by prase · 2012-05-18T17:56:35.078Z · LW(p) · GW(p)

Well, 'left' means 'right' and 'right' means 'left', right? That their macroscopic world is a parity-inverted copy of ours (and that their word for 'left' sounds the same as our word for 'right') is an unfortunate confusing accident, but I don't see how it would justify translating 'left' as 'left'. The representation of 'left' in their brains is not the same as the representation of 'left' in our brains, as demonstrated by different reactions to the same sensory inputs. If you show the twin-earther your left hand they would say "it's your right hand". In the H2O-XYZ counterfactual the mental representations could be the same, thus Putnam's experiment is different from yours.

Replies from: army1987
comment by A1987dM (army1987) · 2012-05-18T19:10:23.356Z · LW(p) · GW(p)

Well, 'left' means 'right' and 'right' means 'left', right?

Yes, as far as the literal spatial meanings are concerned. (Everything else is as in English: right-wing parties are conservative, left-continuous functions means the same as in English as they traditionally draw the x axis the other way round, left dislocation in their grammatical terminology means you move a phrase to the beginning of the sentence -- because (even if I'm not showing that in the, er..., transliteration I'm using, they write left (right) to right (left)), etc.)

I don't see how it would justify translating 'left' as 'left'.

Well, if you asked someone living before parity violation was discovered who can't see you what they meant by “left”, they could have answered, say, “the side where most people have their hearts”, or “the side other than that where most people have the hand they usually use to write”, and those would be true of left on the other planet, too.

If you show the twin-earther your left hand they would say "it's your right hand".

And if you gave a Putnamian twin-earther nothing but H2O to drink for a day, they'd still be thirsty (and possibly even worse, depending on the details of Putnam's thought experiment).

Replies from: prase
comment by prase · 2012-05-18T22:01:02.496Z · LW(p) · GW(p)

"Right" has several meanings and can be analysed as several different words: "right.1" means "conservative" (identical to "right.1"), "right.2" means "at the end of a sentence" (identical to "right.2"), "right.3" means "correct" (identical to "right.3") while "right.4" means "left", i.e. opposite to "right.4". Historically they were the same word which acquired metaphorical meanings because certain contingent facts, but now practically we have distinct homonyms and would better specify which is the one we are talking about.

Well, if you asked someone living before parity violation was discovered who can't see you what they meant by “left”, they could have answered, say, “the side where most people have their hearts”, or “the side other than that where most people have the hand they usually use to write”, and those would be true of left on the other planet, too.

They can answer that after parity violation was discovered, even if they could see us, and it would still be true. Those are true sentences about "left" or "left", but not complete descriptions of their meaning. When I ask you what you mean by "bus", you can truthfully answer that it's "a vehicle used for mass transportation of people" and another person can say the same about "train", but that doesn't imply that your "bus" is synonymous to the other person's "train".

Also don't forget to translate (or italicise) other words. "Most people have hearts on the left" is true as well as "most people have hearts on the left", but "most people have hearts on the left" or "most people have hearts on the left" are false. (If "people" is used to denote the populations of both mirror worlds then all given sentences are false.)

And if you gave a Putnamian twin-earther nothing but H2O to drink for a day, they'd still be thirsty (and possibly even worse, depending on the details of Putnam's thought experiment).

Is it really the case? I am not much familiar with Putnam, but I had thought that XYZ was supposed to be indistinguishable from H2O by any accessible means.

Replies from: army1987, army1987
comment by A1987dM (army1987) · 2012-05-19T15:51:33.413Z · LW(p) · GW(p)

"Right" has several meanings and can be analysed as several different words: "right.1" means "conservative" (identical to "right.1"), "right.2" means "at the end of a sentence" (identical to "right.2"), "right.3" means "correct" (identical to "right.3") while "right.4" means "left", i.e. opposite to "right.4". Historically they were the same word which acquired metaphorical meanings because certain contingent facts, but now practically we have distinct homonyms and would better specify which is the one we are talking about.

This assumes connotations and denotations can be perfectly separated, whereas they are so entangled that connotations pop up even in contexts which aren't obviously related to language. An example I've read about is that The Great Wave off Kanagawa evokes in speakers of left-to-right languages (such as English) a different feeling than the Japanese-speaking painter originally intended, and watching it in a mirror would fix that. (Well, it does for me, at least.)

comment by A1987dM (army1987) · 2012-05-19T08:07:12.945Z · LW(p) · GW(p)

(In the following, by ‘person/people’ I mean the population of both planets -- or more generally any sapient beings, by ‘human’ I mean that of this planet, and by ‘human’ that of the other planet. And unfortunately I'll have to use boldface for emphasis because italics is already used for the other purpose.)

They can answer that after parity violation was discovered, even if they could see us, and it would still be true.

They could, but they wouldn't need to. After parity violation, they could give an actual definition by describing details of the weak interactions; and if they could see us, they could just stick out their left hand. But if someone didn't know about P-violation and couldn't see us, the only ‘definitions’ they could possibly give would be ones based on said contingent facts. Hence, for all beliefs of such a human about left there's a corresponding belief of such a human about left, and vice versa, and the only things that distinguish them are outside their heads (except that the hemisphere lateralizations are the other way round than each other, but an algorithm stays the same if you flip the computer, provided it doesn't use weak interactions.)

Is it really the case? I am not much familiar with Putnam, but I had thought that XYZ was supposed to be indistinguishable from H2O by any accessible means.

Well, if he actually specified that you couldn't possibly tell XYZ from H2O even carrying stuff from one planet to another, then the scenario is much more blue-tentacley than I had thought, and I take back the whole “deliberately missing the point of Putnam's experiment” thing this subthread is about. FWIW, I seem to recall that he said that there are different conditions on the two planets such that H2O would be unwaterlike on Twin Earth and XYZ would be unwaterlike on Earth, but I'm not sure this is a later interpretation by someone else.

Replies from: prase
comment by prase · 2012-05-19T09:21:16.744Z · LW(p) · GW(p)

But if someone didn't know about P-violation and couldn't see us, the only ‘definitions’ they could possibly give would be ones based on said contingent facts.

That's an unfortunate fact about impossibility to faithfully communicate the meaning of some terms in certain circumstances, not about the meaning itself.

comment by DuncanS · 2012-05-18T21:00:04.894Z · LW(p) · GW(p)

It depends on your thought experiment - mathematics can be categorised as a form of thought experimentation, and it's generally helpful.

Thought experiments show you the consequences of your starting axioms. If your axioms are vague, or slightly wrong in some way, you can end up with completely ridiculous conclusions. If you are in a position to recognise that the result is ridiculous, this can help. It can help you to understand what your ideas mean.

On the other hand, it sometimes still isn't that helpful. For example, one might argue that an object can't move whilst being in the place where it is. And an object can't move whilst being in a place where it is not. Therefore an object can't move at all. I can see the conclusion's a little suspect, but working out why isn't quite as easy. (The answer is infinitesimals / derivatives, we now know). But if the silly conclusion wasn't about a subject where I could readily observe the actual behaviour, I might well accept the conclusion mistakenly.
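The Zeno-style paradox above dissolves once "velocity at an instant" is defined as a limit of average velocities over shrinking intervals. A minimal numeric sketch (the trajectory x(t) = t² is an arbitrary illustrative choice, not anything from the discussion):

```python
# The arrow paradox: an object "at an instant" occupies one place, so how
# can it move? Resolution: instantaneous velocity is defined as the limit
# of average velocities over ever-shorter intervals, not as motion "within"
# a single instant.

def position(t):
    return t ** 2  # example trajectory; its true velocity at time t is 2*t

def average_velocity(t, dt):
    """Average velocity over [t, t + dt] -- well-defined for any dt > 0."""
    return (position(t + dt) - position(t)) / dt

# As dt shrinks, the average velocity converges to the instantaneous
# velocity at t = 1 (which is 2.0), even though no single instant
# "contains" any motion.
t = 1.0
for dt in (1.0, 0.1, 0.001, 1e-6):
    print(dt, average_velocity(t, dt))
```

The successive values (3.0, 2.1, 2.001, …) approach 2.0, the derivative of t² at t = 1, which is the "infinitesimals / derivatives" answer the comment alludes to.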

Logic can distill all the error in a subtle mistake in your assumptions into a completely outrageous error at the end. Sometimes that property can be helpful, sometimes not.

comment by john_ku · 2012-05-20T01:30:10.714Z · LW(p) · GW(p)

I think many of the other commenters have done an admirable job defending Putnam's usage of thought experiments, so I don't feel a need to address that.

However, there also seems to be some confusion about Putnam's conclusion that "meaning ain't in the head." It seems to me that this confusion can be resolved by disambiguating the meaning of 'meaning'. 'Meaning' can refer to either the extension (i.e. referent) of a concept or its intension (a function from the context and circumstance of a concept's usage to its extension). The extension clearly "ain't in the head" but the intension is.

The Stanford Encyclopedia of Philosophy article on Two-Dimensional Semantics has a good explanation of my usage of the terms 'intension' and 'extension'. Incidentally, as someone with a lot of background in academic philosophy, I think making two-dimensional semantics a part of LessWrong's common background knowledge would greatly improve the level of philosophical discussion here as well as reduce the inferential distance between LessWrong and academic philosophers.

Replies from: bryjnar
comment by bryjnar · 2012-05-20T11:18:41.471Z · LW(p) · GW(p)

I have to say, I think Chalmers' Two-Dimensional Semantics thing is pretty awesome! Possibly presented in an overly complicated fashion, but hey.

As for Putnam, I think his point is stronger than that! He's not just saying that the extension of a term can vary given the state of the world: no shit, there might have been fewer cats in the world, and then the extension of "cat" would be different. He's saying that the very function that picks out the extension might have been different (if the objects we originally ostended as "cats" had been different) in an externalist way. So he's actually being an externalist about intensions too!

Replies from: john_ku
comment by john_ku · 2012-05-21T06:31:20.644Z · LW(p) · GW(p)

You're right that Putnam's point is stronger than what I initially made it out to be, but I think my broader point still holds.

I was trying to avoid this complication but with two-dimensional semantics, we can disambiguate further and distinguish between the C-intension and the A-intension (again see the Stanford Encyclopedia of Philosophy article for explanation). What I should have said is that while it makes sense to be externalist about extensions and C-intensions, we can still be internalist about A-intensions.

comment by DanielLC · 2012-05-18T17:43:45.012Z · LW(p) · GW(p)

Twin Earth is impossible in this universe. A universe could exist just like ours, except that water is made of a compound of xenon, yttrium, and zinc (XeYZn). Furthermore, the laws of physics are such that this chemical acts like water does in ours, and everything else acts just like water in ours. The laws would have to be pretty bizarre, but they could exist.

Replies from: Alicorn, RolfAndreassen
comment by Alicorn · 2012-05-19T04:27:04.782Z · LW(p) · GW(p)

Is it not clear that the charitable reading of "XYZ" doesn't involve xenon, yttrium, or zinc in particular? I mean, as you point out, that involves two extra letters. I think XYZ was just a sequence of letters chosen to stand in for something that is not H2O.

Replies from: DanielLC
comment by DanielLC · 2012-05-19T04:44:00.570Z · LW(p) · GW(p)

I know. I was just using that as an example. At first I was going to go with something clearly impossible, like a compound of noble gasses.

My point was that even if it's nothing like water in our universe, if you were really willing to mess with the laws of physics, you could make it behave like water, but make everything else stay pretty much the same.

comment by RolfAndreassen · 2012-05-18T18:12:55.092Z · LW(p) · GW(p)

The compound XeYZn in our universe does not behave anything like water; in fact I rather suspect you can't get any such compound. How then is the other universe "just like ours"? You've just stated what the difference is!

Replies from: DanielLC
comment by DanielLC · 2012-05-19T02:38:03.446Z · LW(p) · GW(p)

It's not just like ours. It's just like ours, with one exception.

It has a major change in the laws of physics that increases their complexity by orders of magnitude, but one such that higher-scale things are pretty much the same.

Replies from: fubarobfusco, RolfAndreassen
comment by fubarobfusco · 2012-05-19T04:45:11.641Z · LW(p) · GW(p)

It's not just like ours. It's just like ours, with one exception.

How do I put this ... No.

Replies from: DanielLC
comment by DanielLC · 2012-05-19T06:26:05.592Z · LW(p) · GW(p)

It's not a small exception. The vast majority of the laws of physics would be devoted to specifying what XeYZn is, just so that it acts like water should. By Occam's razor, this is astronomically unlikely. It's still possible, though. To anyone looking at it on a larger scale, that universe would seem just like ours.

comment by RolfAndreassen · 2012-05-19T04:16:05.448Z · LW(p) · GW(p)

You say:

everything else acts just like [it does] in ours.

But this is clearly false: Demonstrably, xenon does not act as it does in our universe. In particular, it forms a compound with yttrium and zinc. Likewise, zinc is clearly different from our-universe zinc, which absolutely does not form compounds with xenon.

Never mind the water, or the XeYZn compound. In our universe, if you leave elemental xenon, yttrium, and zinc in a box together, they will not form a compound. That's not true in the other universe, or how does XeYZn form in the first place? And incidentally, what about other-universe hydrogen and oxygen, do they no longer bond to form water?

Replies from: DanielLC
comment by DanielLC · 2012-05-19T05:01:33.910Z · LW(p) · GW(p)

That's not true in the other universe, or how does XeYZn form in the first place? And incidentally, what about other-universe hydrogen and oxygen, do they no longer bond to form water?

When hydrogen and oxygen are combined, it causes a bizarre nuclear reaction that results in XeYZn.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2012-05-19T19:58:41.950Z · LW(p) · GW(p)

Well, that breaks conservation of energy right there, so now you've got a really bizarre set of laws. What happens when you run electricity through water? In our universe this results in hydrogen and oxygen.

I really don't think you can save this thought experiment; the laws of physics are too intimately interconnected.

Replies from: DanielLC
comment by DanielLC · 2012-05-19T20:21:06.748Z · LW(p) · GW(p)

so now you've got a really bizarre set of laws.

Yes. I thought I was pretty clear on that. Breaking conservation of energy barely touches how bizarre it is. It still acts like our universe, except where H2O and XeYZn are concerned.

What happens when you run electricity through water?

If you run electricity through XeYZn, it results in hydrogen and oxygen. If you ever have H2O, it will immediately turn into xenon, yttrium, and zinc.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2012-05-19T21:11:10.423Z · LW(p) · GW(p)

Ok. You are talking about Omega constantly intervening to make things behave as they do in our universe. But in that case, what is the sense in which XYZ is not, in fact, H2O? How do the twin-universe people know that it is in fact XeYZn? Indeed, how do we know that our H2O isn't, in fact, XeYZn? It looks to me like you've reinvented the invisible, non-breathing, permeable-to-flour dragon, and are asserting its reality. Is there a test which shows water to be XeYZn? Then in that respect it does not act like our water. Is there no such test? Then in what sense is it different from H2O?

Replies from: DanielLC
comment by DanielLC · 2012-05-20T00:24:17.769Z · LW(p) · GW(p)

In order for everything to work exactly the same, there would essentially have to be water, since the physics would have to compute what water would do. That being said, it could just be similar. If the physics models how XeYZn should approximately behave, subtracts that from how XeYZn actually behaves, adds how H2O should approximately behave, and includes some force to hold the XeYZn together, then you'd have to model XeYZn to predict the future.

Come to think of it, it would probably be more accurate to say that water is made of physics at this point, since it's really more about how the physics is acting crazy there than about the arrangement of protons, neutrons, and electrons. In any case, it's not H2O.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2012-05-20T02:55:37.830Z · LW(p) · GW(p)

You didn't answer the question. Does XYZ behave like water in every way, or not? If it does, what's the difference? If it doesn't, you can no longer say it replaces water.

Replies from: CuSithBell, DanielLC
comment by CuSithBell · 2012-05-20T04:53:59.623Z · LW(p) · GW(p)

Is the thrust of the thought experiment preserved if we assume that the two versions of water differ on a chemical level, but magically act identically on the macro scale, and in fact are identical except to certain tests that are, conveniently, beyond the technological knowledge of the time period? (Assuming we are allowed to set the thought experiment in the past.)

Surely it's not necessary that the two worlds be completely indistinguishable?

comment by DanielLC · 2012-05-20T04:49:10.437Z · LW(p) · GW(p)

It doesn't behave just like water. It behaves like a simpler model of water. If you look more closely, the difference isn't what you'd expect between a good model of water and a bad model of water. It's what you'd expect between a good model of XeYZn and a bad model of XeYZn.

In other words, it would act like water to a first approximation, but instead of adding the terms you'd expect to make it more accurate, you add the terms you'd use to make an approximation of XeYZn more accurate.
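DanielLC's "approximate-model patching" construction can be rendered as a toy calculation (every function here is a made-up numeric stand-in invented for illustration, not real physics): the patched dynamics agree with water to first order, but the higher-order correction is XeYZn's, not water's.

```python
# Toy stand-ins: a "crude model" shared by both substances to first order,
# plus different higher-order corrections for real water vs. real XeYZn.

def water_approx(x):   # crude first-order model of water
    return 2.0 * x

def water_exact(x):    # what "real water" does (small quadratic correction)
    return 2.0 * x + 0.01 * x**2

def xeyzn_approx(x):   # crude first-order model of XeYZn
    return 2.0 * x

def xeyzn_exact(x):    # what "real XeYZn" does (different correction)
    return 2.0 * x - 0.5 * x**2

def patched(x):
    # The imagined law: take XeYZn's actual behavior, cancel its own
    # approximation, and substitute water's approximation.
    return xeyzn_exact(x) - xeyzn_approx(x) + water_approx(x)

# For small x, patched(x) tracks water_exact(x) closely; but the residual
# correction term (-0.5 x^2) is XeYZn's, not water's (+0.01 x^2).
for x in (0.001, 1.0):
    print(x, patched(x), water_exact(x))
```

This is just the comment's construction made concrete: "act like water to a first approximation, but add the terms you'd use to make an approximation of XeYZn more accurate."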

comment by ChrisHallquist · 2012-05-20T09:01:27.488Z · LW(p) · GW(p)

This quote misunderstands the zombie thought experiment as used by Chalmers. Chalmers actually thinks zombies are impossible given the laws that actually govern the universe, and possible only in the sense that the universe could have had different laws (or so many people would claim).

I'm not as sure about Putnam's views, but I suspect he would make an analogous claim, that his thought experiment only requires Twin Earth to be possible in a very broad sense of possibility.

comment by casebash · 2015-11-06T23:27:53.239Z · LW(p) · GW(p)

Putnam's flaw is to try to prescribe how language works. Putnam is like, language works like X because it has to, ignoring that we create language and can choose how it works. I'd agree with the suggestion further up that the typical mind fallacy is at work here.

comment by hankx7787 · 2012-05-20T12:50:05.731Z · LW(p) · GW(p)

A similar point is that a lot of bad theories historically are the result of trying to explain something that should just be taken as an irreducible primary. Aristotle tried to explain "motion" by means of the "unmoved mover", Newton was treated skeptically because his theory didn't explain why things continued to move, and I think Lavoisier's oxygen theory was treated similarly by the proponents of phlogiston.

comment by roll · 2012-05-20T08:18:56.181Z · LW(p) · GW(p)

I think something is missing here. Suppose that water has some unknown property Y that may allow us to do Z. This very statement requires that 'water' somehow refer to an object in the real world, so that we would be interested in experimenting with the water in the real world instead of doing some introspection into our internal notion of 'water'. We want our internal model of water to match something that is only fully defined externally.

Another example: if water is the only liquid we know, we may have conflated the notions of 'liquid' and 'water', but as we explore the properties of 'liquid/water' we find it necessary to add more references to the external world: water, alcohol, salt water, liquid... Those are in our heads, but they did not pop into existence out of nothing (unless you are a solipsist).

comment by billswift · 2012-05-18T12:00:01.971Z · LW(p) · GW(p)

I don't think there is anything special about consciousness. "Consciousness" is what any intelligence feels from the inside, just as qualia are what sense perceptions feel like from the inside.

Replies from: Richard_Kennaway, ciphergoth
comment by Richard_Kennaway · 2012-05-18T12:40:20.318Z · LW(p) · GW(p)

For qualia, that is precisely the definition of the word, and therefore says nothing to explain their existence. For consciousness, it also comes down to a definition, given a reasonable guess at what is meant by "intelligence" in this context.

What is this "inside"?

comment by Paul Crowley (ciphergoth) · 2012-05-18T12:28:00.503Z · LW(p) · GW(p)

I am inclined to believe that what we call "consciousness" and even "sentience" may turn out to be ideas fully as human-specific as Eliezer's favourite example, "humour".

There's at least a possibility that "suffering" is almost as specific.

Replies from: army1987
comment by A1987dM (army1987) · 2012-05-18T14:15:14.238Z · LW(p) · GW(p)

There's at least a possibility that "suffering" is almost as specific.

Why? I'd expect that having a particular feeling when you're damaging yourself, and not liking that feeling, would be extremely widespread. (Unless by "suffering" you mean something other than ‘nociception’, in which case can you elaborate?)

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-05-18T14:19:58.084Z · LW(p) · GW(p)

I mean something morally meaningful. I don't think a chess computer suffers when it loses a game, no matter how sophisticated. I expect that self-driving cars are programmed to try to avoid accidents even when other drivers drive badly, but I don't think they suffer if you crash into them.

Replies from: army1987
comment by A1987dM (army1987) · 2012-05-19T08:38:08.675Z · LW(p) · GW(p)

Yeah, if by “suffering” you mean “nociception I care about”, it sure is human-specific.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-05-19T11:11:33.960Z · LW(p) · GW(p)

I'd find this more informative if you explicitly addressed my examples?

Replies from: army1987
comment by A1987dM (army1987) · 2012-05-19T15:20:04.396Z · LW(p) · GW(p)

Well, I wouldn't usually call the thing a chess computer or a self-driving car is minimizing “suffering” (though I could if I felt like using more anthropomorphizing language than usual). But I'm confused by this, because I have no problem using that word to refer to a sensation felt by a chimp, a dog, or even an insect, and I'm not sure what it is that an insect has and a chess computer lacks that causes this intuition of mine. Maybe the fact that we share a common ancestor, and our nociception capabilities are synapomorphic with each other... but then I think even non-evolutionists would agree a dog can suffer, so it must be something else.