Posts

'Is' and 'Ought' and Rationality 2011-07-05T03:53:05.786Z

Comments

Comment by BobTheBob on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-09T15:50:48.454Z · LW · GW

I was thinking about this as a problem of so-called social epistemology -specifically, of what a person ought to believe when her or his beliefs contradict someone else's. It seems obvious to me that -other things being equal- the rational approach to take when encountering someone who appears rational and well-informed and who disagrees with you, is to take seriously that person's thoughts. Since the abstract authors fit the description, it's obvious, I think, that what they say deserves at least some consideration -ie, what they say is not obviously worthless (ie., to be dismissed with a one liner).

Is this fair?

I realize the situation is more complicated here, as there's the question whether a whole discipline has gone off the rails, which I think the OP has convinced himself is the case with philosophy (so, maybe other things aren't equal). I've tried a few times without success to recommend some epistemic humility on this point.

Comment by BobTheBob on Mixed Reference: The Great Reductionist Project · 2012-12-08T21:20:50.917Z · LW · GW

This is an interesting post but, I have to say, kind of frustrating. I have tried to follow the discussions between Esar and RobbBB and your substantial elucidation as well as many other great comments, but I remain kind-of in the dark. Below are some questions which I had, as I read.

This question doesn't feel like it should be very hard.

What question? What exactly is the problem you are purporting to solve, here? If it is, "What is the truth condition of 'If we took the number of apples in each pile, and multiplied those numbers together, we'd get six.'", then doesn't Tarski's disquotation schema give us the answer?

Navigating to the six requires a mixture of physical and logical reference

Not sure why you obscure matters with idiosyncratic metaphors like 'navigating to the six', but never mind. Can we infer from the distinction between logical and physical reference that there is a distinction between logical and physical truth? It appears you countenance the Analytic/Synthetic distinction - precisely the distinction which is usually considered to have undone logical positivism. Do you have a preferred response to Quine's famous argument in 'Two Dogmas of Empiricism', or do you have a reason for thinking you are immune to it? I think you think you aren't doing philosophy, so it doesn't apply, but then I really don't know how to understand what you're saying. If your problems are just computational, then surely you're making matters much harder for yourself than they should be (not that computational problems aren't sometimes very hard).

Next we have to call the stuff on the table 'apples'. But how, oh how can we do this...?

How about by saying "Those are apples"? What exactly is the problem, here?

...when grinding the universe and running it through a sieve will reveal not a single particle of appleness?

Here's my best guess at what is exercising you. You reason that only those properties needed to account for the constitution and behaviour of the smallest parts of matter are real, that being an apple is not among them, and hence that being an apple is not a real property. Assuming this guess is right, what exactly is your reason for accepting the first premise? It is not immediately obvious, though I know there are traditionally different reasons. The reason will inform the adequacy of your answer.

Standard physics uses the same fundamental theory to describe the flight of a Boeing 747 airplane, and collisions in the Relativistic Heavy Ion Collider. Nuclei and airplanes alike, according to our understanding, are obeying special relativity, quantum mechanics, and chromodynamics.

So far so good...

We also use entirely different models to understand the aerodynamics of a 747 and a collision between gold nuclei in the RHIC. A computer modeling the aerodynamics of a 747 may not contain a single token, a single bit of RAM, that represents a quark. (Or a quantum field, really; but you get the idea.)

Nothing controversial here, but it of course has nothing to do with our understanding of the problem. If the understanding is correct, the problem exists regardless of whether anyone or thing ever imagines or represents or refers to apples or anything else. To introduce representations and models into the discussion is only to confuse matters, no?

So is the 747 made of something other than quarks?

Where does this question come from? If my guess about the problem is correct, it is irrelevant. It may be that the property of being a 747 (apple) is not identical to the property of being in any very complicated way composed of quarks, bosons and leptons, even though a given 747 (apple) is made only of these particles. The (philosophical) thesis about properties is different from the scientific thesis about the constitution of physical objects.

No, we're just modeling the 747 with representational elements that do not have a one-to-one correspondence with individual quarks. Similarly with apples.

Please clarify - what precisely does the relation between a computer model of a 747 and a 747 have to do with the metaphysics of properties?

To compare a mental image of high-level apple-objects to physical reality,

Can you say what you mean by this? For myself, this is something I only ever do very rarely -conjure a mental image, then see how it agrees or differs from what I'm looking at. To be sure, sets of neurons in my brain are being activated all the time by patterns of light hitting my retinas, but there's a lot of explanatory distance to cover to show these (the story about neural events and the story about images) are the same thing. In any case, this seems entirely irrelevant to the present concerns.

for it to be true

Are mental images the sorts of things which can be true (in the sense in which a sentence or proposition can be, as opposed to merely accurate)? Suppose I have a mental picture of a certain cat on a certain table and that the cat is indeed on the table. Is my mental image true? Even if the cat in my image is the wrong colour? Or is sitting when the cat is standing? As far as I can see this isn't just nit-picking. You have some kind of AI model which involves mental images and which you seem to think needs a semantic theory, and it's just not clear how it all fits together.

...doesn't require that apples be fundamental in physical law.

If my guess is correct, your answer to the problem as far as I can see is something like "The problem is not a problem".

A single discrete element of fundamental physics is not the only thing that a statement can ever be compared-to. We just need truth conditions that categorize the low-level states of the universe, so that different low-level physical states are inside or outside the mental image of "some apples on the table" or alternatively "a kitten on the table".

Can you give an example of a low-level state being 'inside a mental image' of "some apples on the table"? I really don't know what this means.

Having gone through this once, here's a second pass at a gloss. You accept, reasonably, that "That is an apple" is true in English iff that (pointing to a certain apple) is an apple. The referent of the "that" we can take to be a certain object. The question arises, however, as to what the referent or other semantic value is of "is an apple". Plausibly, it is the property of being an apple. But, we may reasonably ask, what sort of thing is being an apple? I understand your answer is as follows:

Just as an individual apple is nothing more than a quite large number of quarks and leptons and bosons interestingly assembled, being an apple is nothing more than being a quite large number of quarks and leptons and bosons assembled in a certain interesting way.

Is this roughly a fair understanding? If so, please consider:

1) You will need to augment your story to include so-called etiology. The property of being a 10-dollar bill is not equivalent to the property of being in a certain way composed of matter - causal origin/history matters, too (perfect counterfeits).

2) The problem of vagueness often seems like a paradigm of philosophical futility but it is a real problem. Suppose you could cross-breed apples and pears, and have a spectrum of individuals ranging from unproblematic apple to unproblematic pear (= non-apple). What will the truth-condition be of the statement 'That is an apple', pointing to the piece of fruit in the middle? Do you give up on bivalence, or do you say that the statement is determinately true or false, but there are deep epistemological problems? Neither answer seems satisfactory, and where you come down may affect your theory.

3) If this story is correct, it will presumably apply to the whole very large hierarchy of properties, ranging from being a quark through being a proton and being a carbon atom up to being an apple and beyond. And the high-level properties will have at a minimum to be disjunctions of lower properties, even to accommodate such mundane facts as the existence of both green and red apples. And you may find ultimately that what is in question is more like a family-resemblance relation among the cases which constitute being an apple (if not apples, then tables and 747s, very likely). And then aren't you in danger simply of laboriously re-capitulating the history of 20th c. philosophical thought on the subject?

This is all philosophy, which you've repeatedly said you aren't interested in doing. But that's what you're doing! If you're just doing AI, you really shouldn't be wasting your time on these questions, surely. Research into neural nets is already making great progress on the question of how we make the discriminations we do. Why isn't that enough for your purposes?

A last thought: there's something of a debate on this site about the value of traditional philosophy. I think it has value, a big part of which is that it encourages people to think carefully and to express themselves precisely. I don't claim always to be as careful or precise as I should be, but these are values. Doing analytic philosophy is some of the best rationality training you can get.

Comment by BobTheBob on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-08T20:52:15.312Z · LW · GW

I think the comments of fortyeridania, JonathanLivengood, Peterdjones and others have pretty much nailed matters, but here's my take:

This post is actually self-undermining. Roughly, it is an argument that a person's having a background in Pearl and Kahneman will lead to that person's being able to reason better than if s/he lacked the background, which is in fact (sorry to be blunt, but I think the balance of the comments support this) a quite poor specimen of an argument made by someone who has the background. There's no evidence that you would have done even worse without the background. So the post is itself some evidence for the falsity of what it claims.

What is the rational value of the abstracts and your one-liners? I understand the point in each case is that the paper is obviously worthless. But this is false: they are indeed obviously not obviously worthless, insofar as they are made by people who are likely almost as smart and well-read as you, and very likely aware of the kinds of criticisms you make.

You seem to be conflating questions of philosophical pedagogy with questions of professional practice/methodology.

  • Concerning the former: can you give an example of a philosophical paper which is mistaken as a result of biases which reading Kahneman as an undergraduate might have prevented, and indicate the mistake? Can you give an example of a philosophical paper which is mistaken as a result of a knowledge gap which reading Pearl might have avoided (written since Pearl published)? Your claim would be strengthened, of course, if the latter example is from the considerable majority of philosophy not specifically about the problem of causation (otherwise you're getting everyone to read Pearl despite its being apparently relevant only to a small minority). In other words, can you give any empirical evidence at all for your view? (Please don't say simply that people who understand Kahneman won't rely on 'philosophical intuitions', as that's plainly false and misrepresents the nature of what dispute there is over intuitions in philosophy)

  • Concerning the latter: your link is to a paper recommending formal methods in epistemology. Sounds terrific! Does the point extend to other areas of philosophy? As CEO of a philosophy/math/compsci research institute, maybe you'd be willing to set the example by going first. Would be great to see a formal statement of your intended argument here, and even better, formal re-statements of your past posts on philosophical topics.

Comment by BobTheBob on Intuitions Aren't Shared That Way · 2012-12-02T17:43:10.171Z · LW · GW

Following the sequence link at the top, I found this similar post, which has an impressive list of references. You include there this paper by Timothy Williamson. It seems to me an oversight you don't mention the paper's argument at all, as it's a sustained critique of the position you're representing.

The basic idea is that the kind of doubts about intuitions you raise are relevantly similar to more familiar forms of philosophical scepticism (scepticism about the external world, etc). I understand Williamson sees a dilemma: either they are mistaken for the same reasons familiar scepticism is mistaken (Williamson's position, to which most of the paper is dedicated), or the doubts undermine way more than its proponents think they do.

It'd be great to hear your summary of the argument there, and what you consider to be its flaw(s).

If you like Williamson, check out also this excellent bit on naturalism.

Comment by BobTheBob on The Useful Idea of Truth · 2012-11-17T15:34:20.674Z · LW · GW

The issue is whether a sentence's meaning is just its truth conditions, or whether it expresses some kind of independent thought or proposition, and this abstract object has truth conditions. These are two quite different approaches to doing semantics.

Why should you care? Personally, I don't see that this problem has anything to do with the problem of figuring out how a brain acquires the patterns of connections needed to create the movements and sounds it does given the stimuli it receives. To me it's an interesting but independent problem, and the idea of 'neurally embodied beliefs' is worthless. Some people (with whom I disagree but whom I nevertheless respect) think the problems are related, in which case there's an extra reason to care, and what exactly a neurally embodied belief is, will vary. If you don't care, that's your business.

Comment by BobTheBob on The Useful Idea of Truth · 2012-11-15T14:01:04.698Z · LW · GW

This seems to me a reasonable question (at least partly - see below). To be clear, I said that reading the work of experts is more likely to produce a good understanding than merely writing-up one's own thoughts. My answer:

For any given field, reading the thoughts of experts -ie, smart people who have devoted substantial time and effort to thinking and collaborating in the field- is more likely to result in a good understanding of the field's issues than furrowing one's brow and typing away in relative isolation. I take this to be common sense, but please say if you need some substantiation. The conclusion about philosophy follows by universal instantiation.

"Ah", I hear you say, "but philosophy does not fit this pattern, because the people who do it aren't smart. They're all at best of mediocre intelligence." (is there another explanation of the poor understanding you refer to?). From what I've seen on LW, this position will be inferred from a bad experience or two with philosophy profs, or perhaps on the grounds that no smart person would elect to study such a diseased subject.

Two rejoinders:

i) Suppose it were true that only second-rate thinkers do philosophy. It would still be the case that with a large number of people discussing the issues over many years, there'd be a good chance something worth knowing -if there's anything to know- would emerge. It wouldn't be obvious that the rational course is to ignore it, if interested in the issues.

ii) It's obviously false (hence the 'partly' above). Just try reading the work of Timothy Williamson or David Lewis or Crispin Wright or W.V.O. Quine or Hilary Putnam or Donald Davidson or George Boolos or any of a huge number of other writers, and then making a rational case that the leading thinkers of philosophy are second-rate intellects. I think this is sufficiently obvious that the failure to see it suggests not merely oversight but bias.

Philosophical progress may tend to take the form just of increasingly nuanced understandings of its problems' parameters rather than clear resolutions of them, and so may not seem worth doing, to some. I don't know whether I'd argue with someone who thinks this, but I would suggest if one thinks it, one shouldn't be claiming it even while expounding a philosophical theory.

Comment by BobTheBob on The Useful Idea of Truth · 2012-11-15T13:54:55.738Z · LW · GW

As far as I can see, your point is something like:

"Your reasoning implies I should read some specific thing; there is no such thing; therefore your reasoning is mistaken." (or, "unless you can produce such a thing...")

Is this right? In any case, I don't see that the conditional is correct. I can only give examples of works which would help. Here are three more. Your second part seeks (as I understand it) a theory of meaning which would imply that your 'Elaine is a post-utopian' is meaningless, but that 'The photon continues to exist...' is both meaningful and true. I get the impression you think that an adequate answer could be articulated in a few paragraphs. To get a sense of some of the challenges you might face -ie, of what the project of contriving a theory of meaning entails- consider looking at Stephen Schiffer's excellent Remnants of Meaning and The Things we Mean or Scott Soames's What is Meaning?.

Comment by BobTheBob on The Useful Idea of Truth · 2012-11-14T03:46:37.400Z · LW · GW

1) I don't see that this really engages the criticism. I take it you reject that the subjects of truth and reference are important to you. On this, two thoughts:

a) This doesn't affect the point about the reliability of blogging versus research. The significance of the irrationality maybe, but the point remains. You may hold that the value to you of the creative process of explicating your own thoughts is sufficiently high that it trumps the value of coming to optimally informed beliefs - that the cost-benefit analysis favours blogging. I am sceptical of this, but would be interested to hear the case.

b) It seems just false that you don't care about these subjects. You've written repeatedly on them, and seem to be aiming for an internally coherent epistemology and semantics.

2) My claim was that your lack of references is evidence that you don't accord importance to experts on truth and meaning, not that there are specific things you should be referencing. That said, if your claim is ultimately just the observation that truth is useful as a device for so-called semantic ascent, you might mention Quine (see the relevant section of Word and Object or the discussion in Pursuit of Truth) or the opening pages of Paul Horwich's book Truth, to give just two examples.

3) My own view is that AI should have nothing to do with truth, meaning, belief or rationality - that AI theory should be elaborated entirely in terms of pattern matching and generation, and that philosophy (and likewise decision theory) should be close to irrelevant to it. You seem to think you need to do some philosophy (else why these posts?), but not too much (you don't have to decide whether the sorts of things properly called 'true' are sentences, abstract propositions or neural states, or all or none of the above). Where the line lies and why is not clear to me.

Comment by BobTheBob on The Useful Idea of Truth · 2012-11-13T04:30:51.922Z · LW · GW

A criticism - somewhat harsh but hopefully constructive.

As you know, lots of people have written on the subjects of truth and meaning (aside from Tarski). It seems, however, that you don't accord them much importance (no references, failure to consider alternate points of view, apparent lack of awareness of the significance of the matter of what the bearer of truth (sentence, proposition, 'neurally embodied belief') properly is, etc.). I put it to you this is a manifestation of irrationality: you have a known means at your disposal to learn reliably about a subject which is plainly important to you, but you apparently reject it in favour of the more personally satisfying but much less reliable alternative of blogging your own ideas -you willingly choose an inferior path to belief formation. If you want to get a good understanding of such things as truth, reference and mathematical proof, I submit that the rational starting point is to read at least a survey of what experts in the fields have written, and to develop your own thoughts, at least initially, in the context they provide.

Comment by BobTheBob on Remind Physicalists They're Physicalists · 2011-08-18T14:00:52.285Z · LW · GW

The comments of yours I've read are always clear and insightful, and usually I agree with what you say. I have to disagree with you here, though, about your supposed second fallacy.

Arguments against the existence of subjective experience (consciousness, qualia, etc.) generally take the form of arguing against other people's arguments in favour. Since no-one has a good account of what it is, it is not difficult to demolish their bad accounts. This is like refuting the phlogiston theory to prove that fire does not exist.

I disagree. Arguments against qualia typically challenge the very coherence of anything which could play the desired role. It's not like trying to prove fire doesn't exist, it's like trying to prove there is no such thing as elan vital or chakras.

I deny the existence of UFOs. It's pretty clear what UFOs are - spaceships built and flown to Earth by creatures who evolved on distant planets - and I can give fairly straight-forward probabilistic reasons of the kind amenable to rational disagreement, for my stance.

I (mostly) deny the existence of God. Apologies if you're a theist for the bluntness, but I don't think it's at all clear what God is or could be. Every explication I've ever encountered of God either involves properties which permit the deduction of contradictions (immovable rocks/unstoppable forces and what-not), or are so anodyne or diffuse as to be trivial ('God is love' -hence the 'mostly'). There is enough talk in our culture about God, however, to give meaning to denials of His existence - roughly, 'All (rather, most of) this talk which takes place in houses of worship and political chambers involving the word 'God' and its ilk, involves a mistaken ontological commitment'.

Do I deny the existence of consciousness, or subjective experience? If my wife and I go to a hockey game or a play, we in some sense experience the same thing -there is a common 'objective' experience. But equally we surely have in some sense different experiences - she may be interested or bored by different parts than I am, and will see slightly different parts of the action than I. So clearly there is such a thing as subjective experience, in some sense. This, however, is not what is at issue. Roughly, what we are concerned about is a supposed ineffable aspect of experience, a 'what it is like'. I deny the existence of this in the sense in which I deny the existence of God. That is, I have yet even to see a clear and coherent articulation of what's at issue. You imply the burden of argument is with the deniers; I (following Dennett and many others) suggest the burden is with defenders to say what it is they defend.

Are qualia causally efficacious, or not? If they are, then they are in principle objectively detectable/observable, and hence not worthy of the controversy they generate (if they have a causally efficacious 'aspect' and a non-efficacious one, then just factor out the causally efficacious aspect as it plays no role in the controversy). On the flip side, of course, if qualia are not causally efficacious, then they aren't responsible for our talk of them - they aren't what we're presently talking about, paradoxically.

It seems to me the best case for exponents of consciousness is to force a dilemma - an argument pushing us on the one hand to accept the existence of something which on the other appears to be incoherent (as per just above). But I have yet to see this argument. Appeals to what's 'obvious' or to introspection just don't do it - the force of the sort of argument above, and the several others adduced by Dennett et al., clearly wins out over thumping one's sternum and saying 'this!', simply because the latter isn't an argument. The typical candidates for serious arguments in this vein are inverted spectrum or Black and White Mary type-arguments, but it seems to me they always just amount to the chest thumping in fancy dress. Would be interested to hear of good candidate arguments for qualia, though, and to hear any objections if you think the foregoing is unfair.

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-25T02:38:36.902Z · LW · GW

Thanks for the link to the paper by Timothy Bays. It looks like a worthwhile -if rather challenging- read.

I have to acknowledge there's lots to be said in response to Quine and Putnam. I could try to take on the task of defending them, but I suspect your ability to come up with objections would well outpace my ability to come up with responses. People get fed up with philosophers' extravagant thought experiments, I know. I guess Quine's implicit challenge with his "undetached rabbit parts" and so on is to come up with a clear (and, of course, naturalistic) criterion which would show the translation to be wrong. Simplicity considerations, as you suggest, may do it, but I'm not so sure.

Comment by BobTheBob on Secrets of the eliminati · 2011-07-21T21:32:06.300Z · LW · GW

This may owe to a confusion on my part. I understood from the title of the post and some of its parts (incl the last par.) that the OP was advocating elimination over reduction (ie, contrasting these two options and picking elimination). I agree that if reduction is an option, then it's still ok to use them in explanation, as per your dollar example.

Comment by BobTheBob on Secrets of the eliminati · 2011-07-21T20:31:01.150Z · LW · GW

Thanks for this great sequence of posts on behaviourism and related issues.

Anyone who does not believe mental states are ontologically fundamental - ie anyone who denies the reality of something like a soul - has two choices about where to go next. They can try reducing mental states to smaller components, or they can stop talking about them entirely.

Here's what I take it you're committed to:

  • by 'mental states' we mean things like beliefs and desires.
  • an eliminativist has both to stop talking about them and also using them in explanations.
  • whither go beliefs and desires also goes rationality. You can't have a rational agent without what amount to beliefs and desires.
  • you are advocating eliminativism.

Can you say a bit about the implications of eliminating rationality? How do we square doing so with all the posts on this site about what is and isn't rational? Are these claims all meaningless or false? Do you want to maintain that they all can be reformulated in terms of tendencies or the like?

Alternately, if you want to avoid this implication, can you say where you dig in your heels? My prejudices lead me to suspect that the devil lurks in the details of those 'higher level abstractions' you refer to, but am interested to hear how that suggestion gets cashed-out. Apols if you have answered this and I have missed it.

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-19T02:11:50.456Z · LW · GW

The idea that any given hypothesized set of beliefs and desires is compatible with all possible facts, is not very plausible on its face.

I didn't mean to say this, if I did. The thesis is that there are indefinitely many sets of beliefs and desires compatible with all possible behavioural and other physical facts. And I do admit it seems a tall order. But then again, so does straight-forward scientific underdetermination, it seems to me. Just to be clear, my personal preoccupation is the prescriptive or normative nature of oughts and hence wants and beliefs, which I think is a different problem than the underdetermination problem.

The canonical statement comes in Chapter 2 of W.V.O. Quine's Word and Object. Quine focusses on linguistic behaviour, and on the conclusion that there is no unique correct translation manual for interpreting one person's utterances in the idiolect of another (even if they both speak, say, English). The claims about beliefs are a corollary. Donald Davidson takes up these ideas and relates them specifically to agents' beliefs in a number of places, notably his papers 'Radical Interpretation', 'Belief and the Basis of Meaning', and 'Thought and Talk', all reprinted in his Inquiries into Truth and Interpretation. Hilary Putnam, in his paper 'Models and Reality' (reprinted in his Realism and Reason), tried to give heft to what (I understand) comes down to Quine's idea by arguing it to be a consequence of the Löwenheim-Skolem theorem of mathematical logic.

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-17T15:15:24.059Z · LW · GW

As I tried to explain in my July 08 post, there is a difference.

Straight-forward scientific underdetermination:

  • One observer/scientist

  • One, unproblematic set of facts (a curved white streak on a film plate exposed in a bubble chamber)

  • Any number of mutually incompatible scientific theories, each of which adequately explains this and all other facts. All theories completely adequate to all observations. The only puzzle is that there can be more than one theory. (Tempting to imagine two of these might be, say, established particle theory, and Wolfram's New Kind of Science conception of nature. Though presumably they would ultimately make divergent predictions, meaning this is a misleading thought).

Underdetermination of psychological facts by naturalistic facts:

  • One observer/scientist

  • One, unproblematic set of facts (behaviour and brain states. eg, a person picking an apple, and all associated neurological events)

  • Any number of problematic sets of supposed facts (complete but mutually incompatible assignments of beliefs and desires to the person consistent with her behaviour and brain states)

  • No (naturalistic) theory which justifies choosing one of the latter sets of facts -that is, justifies an assignment of beliefs and desires to the person.

The latter problem is not just an instance of the former. The problem for physics comparable to psychological underdetermination might look like this (ignoring Reality for the moment to make the point):

  • Scientist observes trace on film plate from cloud chamber experiment.

  • Scientist's theory is consistent with two different possible explanations (in one explanation it's an electron, in another it's a muon).

  • No further facts can nail down which explanation is correct, and all facts can anyway be explained, if more pedantically, without appeal to either electrons or muons. That is, both explanations can be reconciled equally well with all possible facts, and neither explanation anyway is ultimately needed. The suggestion is that the posits in question -electrons and muons (read different beliefs)- would be otiose for physics (read naturalistic psych).

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-14T17:19:05.011Z · LW · GW

I'm confused by your use of "cognitive states" in your 08 July comment above

You are quite right -sorry about the confusion. I meant to say behaviour and computational states -the thought being that we are trying to correlate having a belief or desire to being in some combination of these.

The question is, are these distinctions robust enough to support your argument?

I understand you're referring here to the claim -for which I can't take credit- that facts about behaviour underdetermine facts about beliefs and desires. Because the issue -or so I want to argue- is of underdetermination of one set of potential facts (namely, about beliefs and desires) by another (uninterpreted behaviour), rather than of underdetermination of scientific theory by fact or observation, I'm not seeing that the issue of the theory-ladenness of observation ultimately presents a problem.

The underdetermination is pretty easy to show, at least on a superficial level. Suppose you observe

  • a person, X, pluck an apple from a tree and eat it (facts about behaviour).

You infer:

  • X desires that s/he eat an apple, and X believes that if s/he plucks and bites this fruit, s/he will eat an apple.

But couldn't one also infer,

  • X desires that s/he eat a pear, and X believes (mistakenly) that if s/he plucks and bites this fruit, s/he will eat a pear.

or

  • X desires that s/he be healthy, and X believes that if s/he plucks and bites this fruit (whatever the heck it is), s/he will be healthy.

You may think that if you observe enough behaviour, you can constrain these possibilities. There are arguments (which I acknowledge I have not given), which show (or so a lot of people think) that this is not the case - the underdetermination keeps pace.
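The superficial version of the point can even be put mechanically. Here is a toy sketch (the names and the crude "practical syllogism" rule are mine, purely illustrative): one observed behaviour, several distinct belief/desire pairs, all of which predict it.

```python
# Toy interpretation problem: which (desire, belief) pairs predict the
# observed behaviour "pluck and bite this fruit"?
OBSERVED = "pluck and bite this fruit"

candidate_interpretations = [
    ("eat an apple", "plucking and biting this fruit yields an apple"),
    ("eat a pear",   "plucking and biting this fruit yields a pear"),   # false belief
    ("be healthy",   "plucking and biting this fruit promotes health"),
]

def predicted_action(desire, belief):
    # Crude practical syllogism: if the belief links this action to the
    # desire, the agent performs the action.
    return OBSERVED if "plucking and biting this fruit" in belief else None

consistent = [(d, b) for d, b in candidate_interpretations
              if predicted_action(d, b) == OBSERVED]
# All three interpretations survive: the behaviour alone picks none of them out.
```

Of course a real interpreter has far more evidence than one action, but the claim above is that enriching the evidence just enriches the surviving interpretations in step.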

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-14T04:25:13.411Z · LW · GW

I think the formal similarities of some aspects of arguments about qualia on the one hand and rationality on the other, are the extent of the similarities. I haven't followed all the recent discussions on qualia, so I'm not sure where you stand, but personally, I cannot make sense of the concept of qualia. Rationality-involving concepts (among them beliefs and desires), though, are absolutely indispensable. So I don't think the rationality issue resolves into one about qualia.

I appreciated your first July 07 comment about the details as to how norms can be naturalized and started to respond, then noticed the sound of a broken record. Going round one more time, to me it boils down to what Hume took to be obvious:

  • What you ought to do is distinct from what you will do.

  • Natural science can tell you at best what you will do.

  • Natural science can't tell you what you ought to do.

It is surprising to me there is so much resistance (I mean, from many people, not just yourself) to this train of thought. When you say in that earlier comment 'You have a set of goals...', you have already, in my view, crossed out of natural science. What natural science sees is just what it is your propensity to do, and that is not the same thing as a goal.

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-12T13:26:57.686Z · LW · GW

As I understand it, the problem of scientific underdetermination can only be formulated if we make some kind of fact/theory distinction - observation/theory would be better, is that ok with you?

I'm not actually seeing how the temperature example is an instance of underdetermination, and I'm a little fuzzy on where bridge laws fit in, but would be open to clarification on these things.

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-11T15:53:36.737Z · LW · GW

I think your question again gets right to the nub of the matter. I have no snappy answer to the challenge -here is my long-winded response.

The zombie analogy is a good one. I understand it's meant just as an analogy -the intent is not to fall into the qualia quagmire. The thought is that from a purely naturalistic perspective, people can only properly be seen as, as you put it, preference- or rationality-zombies.

The issue here is the validity of identity claims of the form,

  • Wanting that P = being in brain state ABC

My answer is to compare them to the fate of identity claims relating to sensations (qualia again), such as

  • Having sensation S (eg, being in pain) = being in brain state DEF

Suppose being in pain is found empirically always to correlate to being in brain state DEF, and the identity is proposed. Qualiaphiles will object, saying that this identity misses what's crucial to pain, viz, how it feels. The qualiaphile's thought can be defended by considering the logic of identity claims generally (this adapted from Saul Kripke's Naming and Necessity).

Scientific identity claims are necessary - if water = H2O in this world, then water = H2O in all possible worlds. That is, because water is a natural kind, whatever it is, it couldn't have been anything else. It is possible for water to present itself to us in a different phenomenal aspect ('ice9'!), but this is OK because what's essential to water is its underlying structure, not its phenomenal properties. The situation is different for pain - what's essential to pain is its phenomenal properties. Because pain essentially feels like this (so the story goes), its correlation with being in brain state DEF can only be contingent. Since identities of this kind, if true, are by their natures necessary, the identity is false.

There is a further step (lots of steps, I admit) to rationality. The thought is that our access to people's rationality is 'direct' in the way our access to pain is. The unmediated judgement of rationality would, if push were to come to shove, trump the scientifically informed, indirect inference from brain states. Defending this proposition would take some doing, but the idea is that we need to understand each other as rational agents before we can get as far as dissecting ourselves to understand ourselves as mere objects.

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-08T15:14:21.477Z · LW · GW

I do accept that 'wants' imply 'oughts'. It's an oversimplification, but the thought is that statements such as

  • X's wanting that X eat an apple implies (many other things being equal) that X ought to eat an apple.

are intuitively plausible. If wanting carries no implications for what one ought to do, I don't see how motivation can get off the ground.

Now, if we have

1) wanting that P implies one ought to do Q,

and

2) being in physical state ABC implies wanting that P

then, by transitivity of implication, we get

3) being in physical state ABC implies one ought to do Q

And this is just the kind of implication I'm trying to show is problematic.

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-08T04:31:38.720Z · LW · GW

This is a nice way of putting things. As long as we're clear that what makes it a prescription is the fact that it is an ideal for the non-ideal agent.

Do you think this helps the cause of naturalism?

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-08T04:06:36.721Z · LW · GW

I agree that underdetermination problems are distinct from problems about norms -from the is/ought problem. Apologies if I introduced confusion in mentioning them. They are relevant because they arise (roughly speaking) at the interface between decision theory and empirical science, ie, where you try to map mere behaviours onto desires and beliefs.

My understanding is that in philosophy of science, an underdetermination problem arises when all evidence is consistent with more than one theory or explanation. You have a scientist, a set of facts, and more than one theory which the scientist can fit to the facts. In answer to your initial challenge, the problem is different for human psychology because the underdetermination is not of the scientist's theory but supposedly of one set of facts (facts about beliefs and desires) by another (behaviour and all cognitive states of the agent). That is, in contrast to the basic case, here you have a scientist, one set of facts -about a person's behaviour and cognitive states- a second set of supposed facts -about the person's beliefs and desires- and the problem is that the former set underdetermine the latter.

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-08T03:14:28.122Z · LW · GW

This seems reasonable, but I have to ask about "correctly describes". The statement

"want to eat an apple" implies being in brain state ABC or computational state DEF (or something of that nature)

is plausible to me. I think the reverse implication, though raises a problem:

being in brain state ABC or computational state DEF (or something of that nature) implies "want to eat an apple"

But maybe neither of these is what you have in mind?

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-07T17:54:05.896Z · LW · GW

You use a fair bit of normative, teleological vocabulary, here: 'purpose', 'goal', 'success', 'optimisation', 'trying', being 'good' at 'steering' the future. I understand your point is that these terms can all be cashed out in unproblematic, teleonomic terms, and that this is more or less an end of the matter. Nothing dubious going on here. Is it fair to say, though, that this does not really engage my point, which is that such teleonomic substitutes are insufficient to make sense of rationality?

To make sense of rationality, we need claims such as,

  • One ought to rank probabilities of events in accordance with the dictates of probability theory (or some more elegant statement to the effect).

If you translate this statement, substituting for 'ought' the details of the teleonomic 'ersatz' correlate, you get a very complicated statement about what one likely will do in different circumstances, and possibly about one's ancestors' behaviours and their relation to those ancestors' survival chances (all with no norms).

This latter complicated statement will not mean what the first statement means, and won't do the job required in discussing rationality of the first statement. The latter statement will be an elaborate description; what's needed is a prescription.

Probably none of this should matter to someone doing biology, or for that matter decision theory. But if you want to go beyond and commit to a doctrine like naturalism or physical reductionism, then I submit this does become relevant.
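One standard way of cashing out the normative claim above is the Dutch book observation (my gloss, not anything in this thread; the numbers are invented): credences that violate the probability axioms license a package of bets that loses money however the world turns out. A minimal sketch:

```python
# Agent's credences in an event E and in its negation sum to more
# than 1 -- a violation of the probability axioms.
cred_E, cred_not_E = 0.6, 0.6

# A bookie sells each bet at the agent's own price: the agent pays
# (credence * stake) up front and receives the stake if the bet wins.
stake = 1.0
cost = (cred_E + cred_not_E) * stake   # 1.2 paid up front
payout = stake                         # exactly one of E, not-E occurs

guaranteed_loss = cost - payout        # 0.2 lost, whatever happens
```

Note, though, that this only sharpens my point: the Dutch book argument says you *ought* not to hold such credences on pain of sure loss, which is itself a normative claim, not a piece of natural science.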

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-07T03:56:17.473Z · LW · GW

I would taboo and translate that use to yield something like "To make sense of rationality in an agent, one needs to accept/assume/stipulate that the agent sometimes acts with a purpose in mind. We need to understand 'purpose', in that sense, to understand rationality."

Thanks, yes. This is very clear. I can buy this.

But I think I understand this kind of purpose, identifying it as the cognitive version of something like "being instrumental to survival and reproduction". That is, it is possible for an outside observer to point to behaviors or features of a virus that are instrumental to viral survival and reproduction.

Sorry if I'm slow to be getting it, but my understanding of your view is that the sort of purpose that a bacterium has, on the one hand, and the purpose required to be a candidate for rationality, on the other, are, so to speak, different in degree but not in kind. They're the same thing, just orders of magnitude more sophisticated in the latter case (involving cognitive systems). This is the idea I want to oppose. I have tried to suggest that bacterial purposes are 'merely' teleonomic -to borrow the useful term suggested by timtyler- but that human purposes must be of a different order.

Here's one more crack at trying to motivate this, using very evidently non-scientific terms. On the one hand, I submit that you cannot make sense of a thing (human, animal, AI, whatever) as rational unless there is something that it cares about. Unless that is, there is something which matters or is important to it (this something can be as simple as survival or reproduction). You may not like to see a respectable concept like rationality consorting with such waffly notions, but there you have it. Please object to this if you think it's false.

On the other hand, nothing in nature implies that anything matters (etc) to a thing. You can show me all of the behavioural/cognitive correlates of X's mattering to a thing, or of a thing's caring about X, and provide me detailed evolutionary explanations of the behavioural correlates' presence, but these correlates simply do not add up to the thing's actually caring about X. X's being important to a thing, X's mattering, is more than a question of mere behaviour or computation. Again, if this seems false, please say.

If both hands seem false, I'd be interested to hear that, too.

At the level of a bacterium, there are second-messenger chemicals that symbolize or represent situations that are instrumental to survival and reproduction. At the level of the nematode, there are neuron firings serving as symbols. At the level of a human the symbols can be vocalizations: "I'm horny; how about you?". I don't see anything transcendently new at any stage in this progression, nor in the developmental progression that I offered as a substitute.

As soon as we start to talk about symbols and representation, I'm concerned that a whole new set of very thorny issues get introduced. I will shy away from these.

Let me try putting that in different words: "Norms are in the eye of the beholder. Natural science tries to be objective - to avoid observer effects. But that is not possible when studying rationality. It requires a different, non-reductionist and observer dependent way of looking at the subject matter." If that is what you are saying, I may come close to agreeing with you. But somehow, I don't think that is what you are saying.

"It requires a different, non-reductionist ... way of looking at the subject matter." -I can agree with you completely on this. (I do want however to resist the subjective, "observer dependent" part )

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-06T16:45:42.007Z · LW · GW

Thanks for this - I hadn't encountered this concept. Looks very useful.

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-06T16:43:51.518Z · LW · GW

I'm also saying that non-natural "facts" are as easy to work with as natural facts. The issue that they're not part of natural science doesn't impact our ability to discuss them productively.

I agree entirely with this. This exercise isn't meant in any way to be an attack on decision theory or the likes. The target is so-called naturalism - the view that all facts are natural facts.

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-06T16:30:01.928Z · LW · GW

I appreciate this clarification. The point is meant to be about, as you say, the empirical sciences. I agree that there can be arbitrarily sophisticated scientific theories concerning rational behaviour - just that these theories aren't straight-forwardly continuous with the theories of natural science.

In case you haven't encountered it and might be interested, the underdetermination problem associated with inferring to beliefs and desires from mere behaviour has been considered in some depth by Donald Davidson, eg in his Essays on Actions and Events.

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-06T03:18:25.557Z · LW · GW

I appreciate your efforts to spell things out. I have to say I'm getting confused, though

You started by making an argument that listed a series of stages (virus, bacterium, nematode, man) and claimed that at no stage along the way (before the last) were any kind of normative concepts applicable.

I meant to say that at no stage -including the last!- does the addition of merely naturalistic properties turn a thing into something subject to norms -something of which it is right to say it ought, for its own sake, to do this or that.

I also said that the sense of right and wrong and of purpose which biology provides is merely metaphorical. When you talk about "the illusion of teleology in nature", that's exactly what I was getting at (or so it seems to me). That is, teleology in nature is merely illusory, but the kind of teleology needed to make sense of rationality is not - it's real. Can you live with this? I think a lot of people are apt to think that illusory teleology sort of fades into the real thing with increasing physical complexity. I see the pull of this idea, but I think it's mistaken, and I hope I've at least suggested that adherents of the view have some burden to try to defend it.

Do you believe it is possible to tell a teenager what she "ought" to do?

Now that is a whole other can of worms...

At what stage in development do normative judgements become applicable.

This is a fair and a difficult question. Roughly, another individual becomes suitable for normative appraisal when and to the extent that s/he becomes a recognizably rational agent -ie, capable of thinking and acting for her/himself and contributing to society (again, very roughly). All kinds of interesting moral issues lurk here, but I don't think we have to jump to any conclusions about them.

In case I'm giving the wrong impression, I don't mean to be implying that people are bound by norms in virtue of possessing some special aura or other spookiness. I'm not giving a theory of the nature of norms - that's just too hard. All I'm saying for the moment is that if you stick to purely natural science, you won't find a place for them.

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-06T01:48:58.006Z · LW · GW

Rational-ought beliefs and actions are the ones optimal for achieving your goals. Goals and optimality can be explained in scientific language. Rational-ought is not moral-ought. Moral-ought is harder to explain because it is about the goals an agent should have, not the ones they happen to have.

I'm sincerely not sure about the rational-ought/moral-ought distinction - I haven't thought enough about it. But anyway, I think moral-ought is a red herring, here. As far as I can see, the claims made in the post apply to rational-oughts. That was certainly the intention. In other posts on LW about fallacies and the details of rational thinking, it's a commonplace to use quite normative language in connection with rationality. Indeed, a primary goal is to help people to think and behave more rationally, because this is seen for each of us to be a good. 'One ought not to procrastinate', 'One ought to compensate for one's biases', etc..

Try somehow to shoehorn normative facts into a naturalistic world-view, at the possible peril of the coherence of that world-view.

Easily done with non-moral norms such as rationality.

Would love to see the details... :-)

Not if you think of purpose as a metaphysical fundamental. Easily, if a purpose is just a particular idea in the mind. If I intend to buy a lawnmower, and I write "buy lawnmower" on a piece of paper, there is nothing mysterious about the note, or about the state of mind that preceded it.

I'm not sure I get this. The intention behind drawing the initial distinction between is/ought problems was to make clear the focus is not on, as it were, the mind of the beholder. The question is a less specific variant of the question as to how any mere physical being comes to have intentions (e.g., to buy a lawnmower) in the first place.

That you want to do something does not mean you ought to do it in the categorical, unconditional sense of moral-ought.

I agree, but I think it does mean you ought to in a qualified sense. Your merely being in a physical or computational state, however, by itself doesn't, or so the thought goes.

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-05T22:32:38.831Z · LW · GW

If I bet higher than 1/6th on a fair die's rolling 6 because in the last ten rolls 6 hasn't come up -meaning it's now 'due'- I make a mistake. I commit an error of reasoning; I do something wrong; I act in a manner I ought not to.
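The mistake is even checkable empirically. A minimal simulation sketch (the function name and parameters are mine): estimate the frequency of a 6 immediately after ten 6-less rolls, and see that it stays at 1/6 - a 6 is never 'due'.

```python
import random

def freq_of_six_after_drought(trials=200_000, drought=10, seed=0):
    """Estimate P(next roll is 6 | the previous `drought` rolls had no 6)."""
    rng = random.Random(seed)
    hits = total = 0
    run = 0  # length of the current run of rolls without a 6
    for _ in range(trials):
        roll = rng.randint(1, 6)
        if run >= drought:       # this roll follows a 6-less drought
            total += 1
            hits += (roll == 6)
        run = 0 if roll == 6 else run + 1
    return hits / total

# The estimate stays near 1/6 (about 0.167), drought or no drought.
```

The simulation, of course, only describes what a fair die does; the claim that I therefore *ought* not to bet higher than 1/6 is exactly the normative step at issue.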

What about the virus particle which, in the course of sloshing about in an appropriate medium, participates in the coming into existence of a particle composed of RNA which, as it happens, is mostly identical to the original but differs from it in a few places. Are you saying that this particle makes a mistake in the same sense of 'mistake' as I do in making my bet?

Option (1): The sense is precisely the same (and it is unproblematically naturalistic). In this case I have to ask what the principles are by which one infers to conclusions about a virus's mistakes from facts about replication. What are the physical laws, how are their consequences (the consequences, again, being claims about what a virus ought to do) measured or verified, and so on?

Option (2): The senses are different. This was the point of calling the RNA mistake metaphorical. It was to convey that the sense is importantly different than it is in the betting case. The idea is that the sense, if any, in which a virus makes a 'mistake' in giving rise to a non-exact replica of itself is not enough to sustain the kind of norms required for rationality. It is not enough to sustain the conclusions about my betting behaviour. Is this fair?

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-05T22:14:58.852Z · LW · GW

Suppose I hear Bob say "I want to eat an apple." Am I justified in assigning a higher probability to "Bob wants to eat an apple" after I hear this than before (assuming I don't have some other evidence to the contrary, like someone is holding a gun to Bob's head)?

I think this question hits the nail on the head. You are justified in assigning a higher probability to "Bob wants to eat an apple" just in case you are already justified in taking Bob to be a rational agent (other things being equal...). If Bob isn't at least minimally rational, you can't even get so far as construing his words as English, let alone to trust that his intent in uttering them is to convey that he wants to eat an apple (think about assessing a wannabe AI chatbot, here). But in taking Bob to be rational, you are already taking him to have preferences and beliefs, and for there to be things which he ought or ought not to do. In other words, you have already crossed beyond what mere natural science provides for. This, anyway, is what I'm trying to argue.

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-05T22:10:58.172Z · LW · GW

You're right that I didn't mean this necessarily to be about a specifically moral sense of 'ought'. As for the suggested inference about baby-punching, I would push that onto the 'other things being equal' clause, which covers a multitude of sins. No acceptable theory can entail that one ought to be punching babies, I agree.

The picture I want to suggest should be taken seriously is that on one side of the fence are naturalistic properties, on the other are properties such as rationality, having wants/beliefs, being bound by norms as to right and wrong (where this can be as meagre a sense of right and wrong as "it is right to predicate 'cat' of this animal and wrong to predicate 'dog' of it" -or, "it is right to eat this round, red thing if you desire an apple, it is wrong to eat it if you desire a pear"), oughts, goals, values, and so on. And that the fence isn't as trivial to break down as one might think.

I'm understanding a utility function is something like a mapping of states of affairs (possible worlds?) onto, say, real numbers. In this context, the question would be giving naturalistic sense to the notion of value -that is, of something's being better or worse for the agent in question- which the numbers here are meant to correlate to. It's the notion of some states of affairs being more or less optimal for the agent -which I think is part of the concept of utility function- which I want to argue is outside the scope of natural science. Please correct me on utility functions if I've got the wrong end of the stick.
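For concreteness, the sense of 'utility function' I have in mind can be written down directly (a sketch with invented states and numbers): a bare mapping from states of affairs to reals. Nothing in the mapping itself says that higher numbers are *better for* the agent - that is the extra, normative ingredient at issue.

```python
# A utility function as a bare mapping from states of affairs to reals.
utility = {
    "X eats an apple": 1.0,
    "X eats a pear":  -0.5,
    "X eats nothing":  0.0,
}

def optimal_state(u):
    """Return the state the numbers rank highest. The mapping alone is
    just arithmetic; calling its maximum 'optimal for the agent' is an
    interpretation laid on top of it."""
    return max(u, key=u.get)

# optimal_state(utility) == "X eats an apple"
```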

To be clear - the intent isn't to attack the idea that there can be interesting and fruitful theories involving utility functions, rationality and related notions. It's just that these aren't consistent with a certain view of science and facthood.

Comment by BobTheBob on 'Is' and 'Ought' and Rationality · 2011-07-05T22:08:21.112Z · LW · GW

It may not be great, but I did give an argument. Roughly, again,

a) wants do entail oughts (plausible)

b) wanting = being in unproblematically naturalistic state ABC (from assumption of naturalism)

c) from a and b, there is some true statement of the form 'being in naturalistic state ABC entails an ought'

d) but no claim of the form 'being in naturalistic state ABC entails an ought' is plausible

From the contradiction between c and d, I infer the falsity of b. If you could formulate your dissatisfaction as a criticism of a premise or of the reasoning, I'd be happy to listen. In particular, if you can come up with a plausible counter-example to (d), I would like to hear it.

Comment by BobTheBob on Pluralistic Moral Reductionism · 2011-06-07T02:57:18.832Z · LW · GW

But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him. In fact, it would be inconsistent for us to give him a pass.

I understand your point is that we can tell the killer that he has acted wrongly according to our standard (that one ought not randomly to kill people). But if people in general are bound only by their own standards, why should that matter to him? It seems to me I cannot provide him compelling grounds as to why he ought not to have done what he did, and that to punish him would be arbitrary. Sorry if I'm not getting it.

I'm not sure I'm following the argument here. I'm saying that all normativity is hypothetical. It sounds like you're arguing there is a categorical 'ought' for believing mathematical truths because it would be very strange to say we only 'ought' to believe 2 + 2 = 4 in reference to some goal. So if there are some categorical 'oughts,' there might be others.

Is it something like that?

This states the thought very clearly -thanks.

If so, then I would offer the goal of "in order to be logically consistent."

I acknowledge the business about the nature of the compulsion behind mathematical judgement is pretty opaque. What I had in mind is illustrated by this dialogue. As it shows, the problem gets right back to the compulsion to be logically consistent. It's possible this doesn't really engage your thoughts, though.

There are some who think moral oughts reduce to logical consistency, so we ought act in a certain way in order to be logically consistent. I don't have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical inconsistency is going to rein in people with desires that run counter to it any better than relativism can.

If the view is correct, then you can at least convince rational people that it is not rational to kill people. Isn't that an important result?

Comment by BobTheBob on Pluralistic Moral Reductionism · 2011-06-06T03:23:36.940Z · LW · GW

Facts in the disappointingly deflationary sense that

It is a fact that P if and only if P (and that's all there is to say about facthood).

This position is a little underwhelming to any who seek a metaphysically substantive account of what makes things true, but it is a realist stance all the same (no?). If you have strong arguments against this or for an alternative, I'm interested to hear.

Comment by BobTheBob on Pluralistic Moral Reductionism · 2011-06-05T19:59:34.328Z · LW · GW

For example, if you and I both consider a practice morally right and we do so because it measures up that way against the standard of improving 'the well-being of conscious creatures,' then there's a bit more going on than it just being right for you and me.

OK, but what I want to know is how you react to some person -whose belief system is internally consistent- who has just, say, committed a gratuitous murder. Are you committed to saying that there are no objective grounds to sanction him - there is no sense in which he ought not to have done what he did (assuming his belief system doesn't inveigh against him offending yours)?

Or, to put the point another way, isn't having truth as its goal part of the concept of belief? [...] But if this is fair I'm back to wondering where the ought comes from.

Perhaps it comes from the way you view the concept of belief as implying a goal?

Touché.

Look, what I'm getting at is this. I assume we can agree that

"68 + 57 = 125" is true if and only if 68 + 57 = 125

This being the case, if A, seriously pondering the nature of the compulsion behind mathematical judgement, should ask, "Why ought I to believe that 68 + 57 = 125?", and B answers, "Because it's true", then B is not really saying anything beyond, "Because it does". B does not answer A's question.

If the substantive answer is something along the lines that it is a mathematical fact, then I am interested to know how you conceive of mathematical facts, and whether there mightn't be moral facts of generally the same ilk (or if not, why). But if you want somehow to reduce this to subjective goals, then it looks to me that mathematics falls by the wayside - you'll surely allow this looks pretty dubious at least superficially.

Comment by BobTheBob on Pluralistic Moral Reductionism · 2011-06-04T03:24:32.166Z · LW · GW

Taking your thoughts out of order,

Would it be fair to extrapolate this, and say that individual variation in value sets provides a good explanation of the pattern we see of agreement and disagreement between individuals as regards moral values - and possibly in quite different domains as well (politics, aesthetics, gardening)?

Yes.

What I was getting at is that this looks like complete moral relativism -'right for me' is the only right there is (since you seem to be implying there is nothing interesting to be said about the process of negotiation which occurs when people's values differ). I'm understanding that you're willing to bite this bullet.

I think that's generally the job of normative ethics, and metaethics is a little more open ended than that. I do grant that many people think the point of ethical philosophy in general is to identify categorical imperatives, not give a pluralistic reduction.

I take your point here. I may be conflating ethical and meta-ethical theory. I had in mind theories like Utilitarianism or Kantian ethical theory, which are general accounts of what it is for an action to be good, and do not aim merely to be accurate descriptions of moral discourse (would you agree?). If we're talking about a defence of, say, non-cognitivism, though, maybe what you say is fair.

No, I wouldn't say that. It would be a little odd to say anyone who doesn't hold a belief that 68 + 57 equals 125 is neglecting some cosmic duty.

This is fair.

Instead, I would affirm: In order to hold a mathematically correct belief when considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.

This is an interesting proposal, but I'm not sure what it implies. Is it possible for a rational person to strive to believe anything but the truth? Whether in math or anything else, doesn't a rational person always try to believe what is correct? Or, to put the point another way, isn't having truth as its goal part of the concept of belief? If so, I suggest this collapses to something like

*When considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.

or, more plausibly,

*When considering 68 + 57, we ought to believe it equals 125 or some equivalent expression.

But if this is fair I'm back to wondering where the ought comes from.

Comment by BobTheBob on Pluralistic Moral Reductionism · 2011-06-03T17:14:28.087Z · LW · GW

Just to clarify where you stand on norms: Would you say a person is obligated by facts woven into the universe to believe that 68 + 57 = 125 ? (ie, are we obligated in this sense to believe anything?)

To stick my own neck out: I am a realist about values. I think there are facts about what we ought to believe and do. I think you have to be, to capture mathematical facts. This step taken, there's no further commitment required to get ethical facts. Obviously, though, there are epistemic issues associated with the latter which are not associated with the former.

Morality as the expression of pluralistic value sets (and the hypothetical imperatives which go along with them) is a very neat explanation of the pattern we see of agreement, disagreement, and partially successful deliberation.

Would it be fair to extrapolate this, and say that individual variation in value sets provides a good explanation of the pattern we see of agreement and disagreement between individuals as regards moral values - and possibly in quite different domains as well (politics, aesthetics, gardening)?

You seem to be suggesting meta-ethics aims merely to give a descriptively adequate characterisation of ethical discourse. If so, would you at least grant that many see its goal as (roughly) giving a general characterisation of moral rightness, which we all ought to strive for?

Comment by BobTheBob on Conceptual Analysis and Moral Theory · 2011-05-28T03:15:09.359Z · LW · GW

Well, alright, please tell me: what is a Utility Function, that it can be inferred from a brain scan? How's this supposed to work, in broad terms?

Comment by BobTheBob on Conceptual Analysis and Moral Theory · 2011-05-27T21:19:17.564Z · LW · GW

As I see it, your central point is that conceptual analysis is useful because it results in a particular kind of process: the clarification of our intuitive concepts. Because our intuitive concepts are so muddled and not as clear-cut and useful as a stipulated definition such as the IAU's definition for 'planet', I fail to see why clarifying our intuitive concepts is a good use of all that brain power.

I think that where we differ is on 'intuitive concepts' -what I would want to call just 'concepts'. I don't see that stipulative definitions replace them. Scenario (3), and even the IAU's definition, illustrate this. It is coherent for an astronomer to argue that the IAU's definition is mistaken. This implies that she has a more basic concept -which she would strive to make explicit in arguing her case- different than the IAU's. For her to succeed in making her case -which is imaginable- people would have to agree with her, in which case we would have at least partially to share her concept. The IAU's definition tries to make explicit our shared concept -and to some extent legislates, admittedly- but it is a different sort of animal than what we typically use in making judgements.

Philosophy doesn't impact non-philosophical activities often, but when it does the impact is often quite big. Some examples: the influence of Mach on Einstein, of Rousseau and others on the French and American revolutions, Mill on the emancipation of women and freedom of speech, Adam Smith's influence on economic thinking.

I consider though that the clarification is an end in itself. This site proves -what's obvious anyway- that philosophical questions naturally have a grip on thinking people. People usually suppose the answer to any given philosophical question to be self-evident, but equally we typically disagree about what the obvious answer is. Philosophy is about elucidating those disagreements.

Keeping people busy with activities which don't turn the planet into more non-biodegradable consumer durables is fine by me. More productivity would not necessarily be a good thing (...to end with a sweeping undefended assertion.).

Comment by BobTheBob on Conceptual Analysis and Moral Theory · 2011-05-27T17:04:13.582Z · LW · GW

And they are likely to riposte that facts about her UF are naturalistic just because they can be inferred from her behaviour.

But this is false, surely. I take it that a fact about X's UF might be something such as 'X prefers apples to pears'. First, notice that X may also prefer his/her philosophy TA to his/her chemistry TA. X has different designs on the TA than on the apple. So, properly stated, preferences are orderings of desires, the objects of which are states of affairs rather than simple things (X desires that X eat an apple more than that X eat a pear). Second, to impute desires such as these requires also imputing beliefs (you observe the apple gathering behaviour -naturalistically unproblematic- but you also need to impute to X the belief that the things gathered are apples. X might be picking the apples thinking they are pears). There's any number of ways to attribute beliefs and desires in a manner consistent with the behaviour. No collection of merely naturalistic facts will constrain these. There have been lots of theories advanced which try, but the consensus, I think, is that there is no easy naturalistic solution.

Comment by BobTheBob on Conceptual Analysis and Moral Theory · 2011-05-27T03:32:10.953Z · LW · GW

But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitrary set of goals) as efficiently as possible. Rational agents should win, in short. That seems to be an analytical truth arrived at by unpacking "rational". Generally speaking, where you have rules, you have coulds and shoulds and couldn'ts and shouldn'ts. I have been trying to press that unpacking morality leads to a similar analytical truth: "a moral agent ought to adopt universalisable goals."

I expressed myself badly. I agree entirely with this.

"Oughts" in general appear wherever you have rules, which are often abstractly defined so that they apply to physical systems as well as anything else.

Again, I agree with this. The position I want to defend is just that if you confine yourself strictly to natural laws, as you should in doing natural science, rules and oughts will not get a grip.

I think LWers would say there are facts about her utility function from which conclusions can be drawn about how she should maximise it (and how she would if she were rational).

And I want to persuade LWers

1) that facts about her utility function aren't naturalistic facts in the way that facts about her cholesterol level or about neural activity in different parts of her cortex are,

and

2) that this is ok - these are still respectable facts, notwithstanding.

I don't see why. If a person or other system has goals and is acting to achieve those goals in an effective way, then their goals can be inferred from their actions.

But having a goal is not a naturalistic property. Some people might say, eg, that an evolved, living system's goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.

Comment by BobTheBob on Conceptual Analysis and Moral Theory · 2011-05-26T04:22:04.680Z · LW · GW

You are surely right that there is no point in arguing over definitions in at least one sense - especially the definition of 'definition'. Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. I hope the following engages constructively with your comments.

Suppose

  • we have two people, Albert and Barry
  • we have one thing, a car, X, of determinate interior volume
  • we have one sentence, S: "X is a subcompact".
  • Albert affirms S, Barry denies S.

Scenario (1): Albert and Barry agree on the standard definition of 'subcompact' - a car is a subcompact just in case 2 407 L < car volume < 2 803 L, but they disagree as to the volume of X. Clearly a factual disagreement.

Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of 'subcompact' (a visit to Wikipedia would resolve the matter). This is a disagreement about standard definitions, and isn't something people should engage in for long, I agree.

Scenario (3) Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn't be classified as subcompact -ie, X isn't really subcompact, notwithstanding the received definition. This doesn't have to be a silly position. It might be that if you graphed numbers of models of car against volume, using various different volume increments, you would find cars really do fall into natural -if vague- groups, and that the natural cutoff for subcompacts is different than the received definition. And this might really matter - a parking-challenged jurisdiction might offer a fee discount for subcompact owners. I would call this a disagreement about the concept of 'subcompact car'. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).

Argument in scenarios 1 and 2 is futile - there is an acknowledged objective answer, and a way to get it: measure, or look it up. Arguments as in scenario 3, though, can be useful -especially with less arbitrary concepts than in the example. The goal in such cases is to clarify -to rationalize- concepts. Even if you don't arrive at an uncontroversial end point, you often learn a lot about the concepts ('good', 'knowledge', 'desires', etc) in the process. Your example of the re-definition of 'planet' fits this model, I think.
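The stipulated definition in scenario (1) is mechanical enough to write down as code - which is precisely why disagreements of types (1) and (2) are easy to settle, while a type-(3) dispute is about whether this is the right function at all. A minimal sketch (the function name is mine; the litre bounds are the example's stipulation, not any real standard):

```python
def is_subcompact(interior_volume_litres: float) -> bool:
    """Scenario (1)'s stipulated definition: a car is a subcompact
    just in case 2407 L < interior volume < 2803 L."""
    return 2407 < interior_volume_litres < 2803

# A type-(1) dispute is over the input (what is X's volume?);
# a type-(2) dispute is over the body (what are the agreed cutoffs?);
# a type-(3) dispute is over whether any such cutoffs carve cars
# at their natural joints at all.
print(is_subcompact(2600))  # True
print(is_subcompact(2900))  # False
```

The point of the sketch is only that nothing inside the function settles a type-(3) dispute; that argument happens before any code gets written.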

This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is not a received, widely accepted analysis or strict definition, just as in meaning something by a word, we don't typically have in mind some strict definition. On the contrary, typically, intuitions about what falls under the concept are agreed by almost everyone, one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which often everyone agrees refute the analysis. This is how I see it, anyway. I'd be interested to know if this seems wrong.

I opened with a debate that everybody knew was silly, and tried to show that it was analogous to popular forms of conceptual analysis. I didn't want to start with a popular example of conceptual analysis because philosophy-familiar people will have been trained not to find those examples silly. I gave at least three examples of actual philosophical analysis in my post (Schroeder on desire, Gettier on knowledge, Jackson on morality).

You may think it's obvious, but I don't see you've shown any of these 3 examples is silly. I don't see that Schroeder's project is silly (I haven't read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples -even far-fetched- helps illuminate the concept - helps us think about what a desire -and hence in part a rational agent- is.

As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiedly believe true propositions, and yet -we intuitively agree- do not know them. The key point is that effectively everyone shares the intuition - that's why the paper was so successful, and this is often how these debates go. Part of what's interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit - conceptual analysis - is elusive.

I objected to your example because I didn't see how anyone could have an intuition based on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments - not all published arguments are top-drawer stuff (but can Cog Sci, eg, make this boast?). One target of much abuse is John Searle's Chinese Room argument. His argument is multiply flawed, as far as I'm concerned -could get into that another time. But I still think it's interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.

Words mean slightly different things in different cultures, subcultures, and small communities. We develop different intuitions about their meaning based on divergent life experiences. Our intuitions differ from each other's due to the specifics of unconscious associative learning and attribution substitution heuristics. What is the point of bashing our intuitions about the meaning of terms against each other for thousands of pages, in the hopes that we'll converge on a precise set of necessary and sufficient conditions? Even if we can get Albert and Barry to agree, what happens when Susan wants to use the same term, but has slightly differing intuitions about its meaning?

This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must in an important sense mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language -whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of 'planet' demonstrates the same kind of process of clarification of a concept, albeit informed by empirical data. The point of the bashing is that it really does result in progress - we really do come to a better understanding of things.

Comment by BobTheBob on Conceptual Analysis and Moral Theory · 2011-05-25T01:26:48.930Z · LW · GW

I think that is rather drastic. Science may not accept beliefs and values as fundamental, but it can accept them as higher-level descriptions, cf Dennett's Intentional Stance.

I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can't derive an ought from an is, and that this is what's at stake here. Since you can't make sense of a person as rational if there is nothing she ought or ought not to do (and I admit you may think this needs defending), natural science lacks the means to ascribe rationality. Now, if we're talking about the social sciences, that's another matter. There is a discontinuity between these and the purely natural sciences. I read Dennett many years ago, and thought something like this divide is what his different stances are about, but I'd be open to hear a different view.

Again, I find it incredible that natural facts have no relation to morality.

I didn't say this - just that from a purely scientific point of view, morality is invisible. From an engaged, subjective point of view, where morality is visible, natural facts are relevant.

To say that moral values are both objective and disconnected from physical fact implies that they exist in their own domain, which is where some people,with some justice, tend to balk.

Here's another stab at it: natural science can in principle tell us everything there is to know about a person's inner workings and dispositions, right down to what sounds she is likely to utter in what circumstances. It might tell us she will make the sounds 'I ought to go to class', eg, in given circumstances. But no amount of knowledge of this kind will give her a reason to go to class (would you agree?). To get reasons -not to mention linguistic meaning and any intentional states- you need a subjective -ie, non-scientific- point of view. The two views are incommensurable, but neither is dispensable -people need reasons.

Comment by BobTheBob on Conceptual Analysis and Moral Theory · 2011-05-24T01:29:07.934Z · LW · GW

Wrote a reply off-line and have been lapped several times (as usual). What Peterdjones says in his responses makes a lot of sense to me. I took a slightly different tack, which is maybe moot given your admission to being a solipsist:

I should disclose that I don't find ultimately any kind of objectivism coherent, including "objective reality".

-though the apparent tension in being a solipsist who argues gets to the root of the issue.

For what it may be worth:

I'm assuming you subscribe to what you consider to be a rigorously scientific world-view, and you consider such a world-view makes no place for objective values - you can't fit them in, hence no way to understand them.

From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (ie, from a scientific point of view) is not 'trying' to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, full stop.

Some people think this is all there is, and that there is nothing useful to say about our conception of ourselves as beings with values (eg, Paul Churchland). I disagree. A person cannot make sense of her/himself with just this scientific understanding, important though it is, because s/he has to make decisions -has to figure out whether to vote left or right, be vegetarian or carnivore, to spend time writing blog responses or mow the lawn, etc.. Values can't be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.

Thought of from this point of view, all values are in some sense objective -ie, independent of you. There has to be a gap between value and actual behaviour, for the value to be made sense of as such (if everything you do is right, there is no right).

Presently you are disagreeing with me about values. To me this says you think there's a right and wrong of the matter, which applies to us both. This is an example of an objective value. It would take some work to spell out a parallel moral example, if this is what you have in mind, but given the right context I submit you would argue with someone about some moral principle (hope so, anyway).

Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren't, but I submit the idea is not incoherent. And showing otherwise would take doing some philosophy.

Comment by BobTheBob on Conceptual Analysis and Moral Theory · 2011-05-22T21:23:56.222Z · LW · GW

I think Peterdjones's answer hits it on the head. I understand you've thrashed-out related issues elsewhere, but it seems to me your claim that the idea of an objective value judgment is incoherent would again require doing quite a bit of philosophy to justify.

Really I meant to be throwing the ball back to lukeprog to give us an idea of what the 'arguing about facts and anticipations' alternative is, if not just philosophy pretending not to be. I could have been more clear about this. Part of my complaint is the wanting to have it both ways. For example, the thinking in the post on anticipations would presumably be taken not to be philosophy, but it sounds a whole lot to me like a quick and dirty advocacy of anti-realism. If LWers are serious about this idea, they really should look into its implications if they want to avoid inadvertent contradictions in their world-views. That means doing some philosophy.

Comment by BobTheBob on Conceptual Analysis and Moral Theory · 2011-05-22T18:09:06.805Z · LW · GW

Some thoughts on this and related LW discussions. They come a bit late - apols to you and commentators if they've already been addressed or made in the commentary:

1) Definitions (this is a biggie).

There is a fair bit of confusion on LW, it seems to me, about just what definitions are and what their relevance is to philosophical and other discussion. Here's my understanding - please say if you think I've gone wrong.

If in the course of philosophical discussion, I explicitly define a familiar term, my aim in doing so is to remove the term from debate - I fix the value of a variable to restrict the problem. It'd be good to find a real example here, but I'm not convinced defining terms happens very often in philosophical or other debate. By way of a contrived example, one might want to consider, in evaluating some theory, the moral implications of actions made under duress (a gun held to the head) but not physically initiated by an external agent (a jostle to the arm). One might say, "Define 'coerced action' to mean any action not physically initiated but made under duress" (or more precise words to the effect). This done, it wouldn't make sense simply to object that my conclusion regarding coerced actions doesn't apply to someone physically pushed from behind - I have stipulated for the sake of argument I'm not talking about such cases. (In this post, you distinguish stipulation and definition - do you have in mind a distinction I'm glossing over?)

Contrast this to the usual case for conceptual analyses, where it's assumed there's a shared concept ('good', 'right', 'possible', 'knows', etc), and what is produced is meant to be a set of necessary and sufficient conditions meant to capture the concept. Such an analysis is not a definition. Regarding such analyses, typically one can point to a particular thing and say, eg, "Our shared concept includes this specimen, it lacks a necessary condition, therefore your analysis is mistaken" - or, maybe "Intuitively, this specimen falls under our concept, it lacks...". Such a response works only if there is broad agreement that the specimen falls under the concept. Usually this works out to be the case.

I haven't read the Jackson book, so please do correct me if you think I've misunderstood, but I take it something like this is his point in the paragraphs you quote. Tom and Jack can define 'right action' to mean whatever they want it to. In so doing, however, we cease to have any reason to think they mean by the term what we intuitively do. Rather, Jackson is observing, what Tom and Jack should be doing is saying that rightness is that thing (whatever exactly it is) which our folk concepts roughly converge on, and taking up the task of refining our understanding from there - no defining involved.

You say,

... Jackson supposes that we can pick out which platitudes of moral discourse matter, and how much they matter, for determining the meaning of moral terms

Well, not quite. The point I take it is rather that there simply are 'folk' platitudes which pick out the meanings of moral terms - this is the starting point. 'Killing people for fun is wrong', 'Helping elderly ladies across the street is right' etc, etc. These are the data (moral intuitions, as usually understood). If this isn't the case, there isn't even a subject to discuss. Either way, it has nothing to do with definitions.

Confusion about definitions is evident in the quote from the post you link to. To re-quote:

...the first person is speaking as if 'sound' means acoustic vibrations in the air; the second person is speaking as if 'sound' means an auditory experience in a brain. If you ask "Are there acoustic vibrations?" or "Are there auditory experiences?", the answer is at once obvious. And so the argument is really about the definition of the word 'sound'.

Possibly the problem is that 'sound' has two meanings, and the disputants each are failing to see that the other means something different. Definitions are not relevant here, meanings are. (Gratuitous digression: what is "an auditory experience in a brain"? If this means something entirely characterizable in terms of neural events, end of story, then plausibly one of the disputants would say this does not capture what he means by 'sound' - what he means is subjective and ineffable, something neural events aren't. He might go on to wonder whether that subjective, ineffable thing, given that it is apparently created by the supposedly mind-independent event of the falling of a tree, has any existence apart from his self (not to be confused with his brain!). I'm not defending this view, just saying that what's offered is not a response but rather a simple begging of the question against it. End of digression.)

2) In your opening section you produce an example meant to show conceptual analysis is silly. Looks to me more like a silly attempt at an example of conceptual analysis. If you really want to make your case, why not take a real example of a philosophical argument -preferably one widely held in high regard at least by philosophers? There's lots of 'em around.

3) In your section The trouble with conceptual analysis, you finally explain,

The trouble is that philosophers often take this "what we mean by" question so seriously that thousands of pages of debate concern which definition to use... .

As explained above, philosophical discussion is not about "which definition to use" -it's about (roughly, and among other things) clarifying our concepts. The task is difficult but worthwhile because the concepts in question are important but subtle.

Within 20 seconds of arguing about the definition of 'desire', someone will say, "Screw it. Taboo 'desire' so we can argue about facts and anticipations, not definitions."

If you don't have the patience to do philosophy, or you don't think it's of any value, by all means do something else -argue about facts and anticipations, whatever precisely that may involve. Just don't think that in doing this latter thing you'll address the question philosophy is interested in, or that you've said anything at all so far to show philosophy isn't worth doing. In this connection, one of the real benefits of doing philosophy is that it encourages precision and attention to detail in thinking. You say Eliezer Yudkowsky "...advises against reading mainstream philosophy because he thinks it will 'teach very bad habits of thought that will lead people to be unable to do real work.'" The original quote continues, "...assume naturalism! Move on! NEXT!" Unfortunately Eliezer has a bad habit of making unclear and undefended or question-begging assertions, and this is one of them. What are the bad habits, and how does philosophy encourage them? And what precisely is meant by 'naturalism'? To make the latter assertion and simultaneously to eschew the responsibility of articulating what this commits you to is to presume you can both have your cake and eat it too. This may work in blog posts -it wouldn't pass in serious discussion.

(Unlike some on this blog, I have not slavishly pored through Eliezer's every post. If there is somewhere a serious discussion of the meaning of 'naturalism' which shows how the usual problems with normative concepts like 'rational' can successfully be navigated, I will withdraw this remark).

Comment by BobTheBob on Chemicals and Electricity · 2011-05-10T13:53:38.671Z · LW · GW

As I mentioned, there are reasons for thinking they're incompatible. Something's (someone's) being a rational agent implies s/he has goals and that in light of those goals ought in given circumstances to do certain things. Physical science makes no place for goals or purposes or right or wrong in nature - there is no physical apparatus which can detect the rightness of an action. Your thought may be that rationality can be made sense of without recourse to goals/purposes or right and wrong. I don't think this can be done. At best you'd be left with an ersatz which fails to capture what we mean by 'rational'.

Comment by BobTheBob on Chemicals and Electricity · 2011-05-10T03:01:33.944Z · LW · GW

You make your case vividly. It points to a difficulty which arises, I think, in a number of recent LW posts.

We have two conceptions of ourselves which on the face of it are not compatible. On the one hand we think of ourselves as mere physical objects, albeit highly structured and dynamical objects, buffeted by the vicissitudes of environment and the laws of nature, and not in any deep way different than any other physical system in nature. On the other hand, we think of ourselves as agents, in control of our actions and properly subject to appraisal as more or less rational. I get that your point is that 'you' -the agent- play(s) no part in a certain class of actions, as these are determined entirely by physiological factors. Once this rock is got rolling (I mean, the explanation of behaviours in purely non-intentional, electro-chemical terms), however, how can it be stopped before it takes out every vestige of agency? That is, how do you delineate the class of actions which 'you' really do participate in, once you acknowledge 'you' are wholly unnecessary to the explanation of some? Where do you dig in your heels and insist on an explanatory role for an agent, and what sort of a thing will this agent be, in this context?

One alternative to the picture I think you're suggesting (an alternative recommended in one version, eg, by Daniel Dennett in 'The Intentional Stance') is to recognize that we have different, mutually incompatible ways of understanding and explaining ourselves. In one explanatory idiom, only scientifically respectable -ie verifiable- claims are made - we're just chemicals and electricity. No place is made for agency, and the notion of an action's being rational or in any even weak sense good or bad is as foreign as it would be to ascribe such properties to the internal changes of a single-celled organism.

In another explanatory idiom (used when we take the intentional stance), we do perceive people as agents with purposes and beliefs and desires, and explain their actions in terms of these together with the assumption of some measure of rationality. In this idiom, however, we do find agency even in the smallest actions, assuming these are not simply reflexive (a sneeze, say).

The point, though, is that the explanatory idioms are not mutually reducible - they're 'incommensurable', if you like. You don't try to reduce the terms of the latter to the former, because there is no way of fitting purposes or beliefs or desires or the agents who harbour them into the purely, merely physical world.