Conceptual Analysis and Moral Theory

post by lukeprog · 2011-05-16T06:28:37.021Z · LW · GW · Legacy · 481 comments

Contents

  The trouble with conceptual analysis
  Disputing definitions
    Disputing the definitions of moral terms
    Austere Metaethics vs. Empathic Metaethics
  Notes
  References

Part of the sequence: No-Nonsense Metaethics. Also see: A Human's Guide to Words.

If a tree falls in the forest, and no one hears it, does it make a sound?

Albert: "Of course it does. What kind of silly question is that? Every time I've listened to a tree fall, it made a sound, so I'll guess that other trees falling also make sounds. I don't believe the world changes around when I'm not looking."

Barry: "Wait a minute. If no one hears it, how can it be a sound?"

Albert and Barry are not arguing about facts, but about definitions:

...the first person is speaking as if 'sound' means acoustic vibrations in the air; the second person is speaking as if 'sound' means an auditory experience in a brain. If you ask "Are there acoustic vibrations?" or "Are there auditory experiences?", the answer is at once obvious. And so the argument is really about the definition of the word 'sound'.

Of course, Albert and Barry could argue back and forth about which definition best fits their intuitions about the meaning of the word. Albert could offer this argument in favor of using his definition of sound:

My computer's microphone can record a sound without anyone being around to hear it, store it as a file, and it's called a 'sound file'. And what's stored in the file is the pattern of vibrations in air, not the pattern of neural firings in anyone's brain. 'Sound' means a pattern of vibrations.

Barry might retort:

Imagine some aliens on a distant planet. They haven't evolved any organ that translates vibrations into neural signals, but they still hear sounds inside their own heads (as an evolutionary byproduct of some other evolved cognitive mechanism). If these creatures seem metaphysically possible to you, then this shows that our concept of 'sound' is not dependent on patterns of vibrations.

If their debate seems silly to you, I have sad news. A large chunk of moral philosophy looks like this. What Albert and Barry are doing is what philosophers call conceptual analysis.1

The trouble with conceptual analysis

I won't argue that everything that has ever been called 'conceptual analysis' is misguided.2 Instead, I'll give examples of common kinds of conceptual analysis that corrupt discussions of morality and other subjects.

The following paragraph explains succinctly what is wrong with much conceptual analysis:

Analysis [had] one of two reputations. On the one hand, there was sterile cataloging of pointless folk wisdom - such as articles analyzing the concept VEHICLE, wondering whether something could be a vehicle without wheels. This seemed like trivial lexicography. On the other hand, there was metaphysically loaded analysis, in which ontological conclusions were established by holding fixed pieces of folk wisdom - such as attempts to refute general relativity by holding fixed allegedly conceptual truths, such as the idea that motion is intrinsic to moving things, or that there is an objective present.3

Consider even the 'naturalistic' kind of conceptual analysis practiced by Timothy Schroeder in Three Faces of Desire. In private correspondence, I tried to clarify Schroeder's project:

As I see it, [your book] seeks the cleanest reduction of the folk psychological term 'desire' to a natural kind, à la the reduction of the folk chemical term 'water' to H2O. To do this, you employ a naturalism-flavored method of conceptual analysis according to which the best theory of desire is one that is logically consistent, fits the empirical facts, and captures how we use the term and our intuitions about its meaning.

Schroeder confirmed this, and it's not hard to see the motivation for his project. We have this concept 'desire', and we might like to know: "Is there anything in the world similar to what we mean by 'desire'?" Science can answer the "is there anything" part, and intuition (supposedly) can answer the "what we mean by" part.

The trouble is that philosophers often take this "what we mean by" question so seriously that thousands of pages of debate concern which definition to use rather than which facts are true and what to anticipate.

In one chapter, Schroeder offers 8 objections4 to a popular conceptual analysis of 'desire' called the 'action-based theory of desire'. Seven of these objections concern our intuitions about the meaning of the word 'desire', including one which asks us to imagine the existence of alien life forms that have desires about the weather but have no dispositions to act to affect the weather. If our intuitions tell us that such creatures are metaphysically possible, goes the argument, then our concept of 'desire' need not be linked to dispositions to act.

Contrast this with a conversation you might have with someone from the Singularity Institute. Within 20 seconds of arguing about the definition of 'desire', someone will say, "Screw it. Taboo 'desire' so we can argue about facts and anticipations, not definitions."5

Disputing definitions

Arguing about definitions is not always misguided. Words can be wrong:

When the philosophers of Plato's Academy claimed that the best definition of a human was a "featherless biped", Diogenes the Cynic is said to have exhibited a plucked chicken and declared "Here is Plato's Man." The Platonists promptly changed their definition to "a featherless biped with broad nails."

Likewise, if I give a lecture on correlations between income and subjective well-being and conclude by saying, "And that, ladies and gentlemen, is my theory of the atom," then you have some reason to object: nobody else uses the term 'atom' to mean anything remotely like what I've just discussed. And if I ever use 'morality' in a comparably idiosyncratic way, I hope you will argue that my definition of 'morality' is 'wrong' (or unhelpful, or confusing, or something).

Some unfortunate words are used in a wide variety of vague and ambiguous ways.6 Moral terms are among these. As one example, consider how many different definitions are commonly given for 'morally good': what God commands, what maximizes pleasure, what society approves of, what an impartial observer would endorse, and so on.

Often, people can't tell you what they mean by moral terms when you question them. There is little hope of taking a survey to decide what moral terms 'typically mean' or 'really mean'. The problem may be worse for moral terms than for (say) art terms. Moral terms have more powerful connotations than art terms, and are thus a greater attractor for sneaking in connotations. Moral terms are used to persuade. "It's just wrong!" the moralist cries, "I don't care what definition you're using right now. It's just wrong: don't do it."

Moral discourse is rife with motivated cognition. This is part of why, I suspect, people resist dissolving moral debates even while they have no trouble dissolving the 'tree falling in a forest' debate.

Disputing the definitions of moral terms

So much moral philosophy is consumed by debates over definitions that I will skip to an example from someone you might hope would know better: reductionist Frank Jackson7:

...if Tom tells us that what he means by a right action is one in accord with God's will, rightness according to Tom is being in accord with God's will. If Jack tells us that what he means by a right action is maximizing expected value as measured in hedons, then, for Jack, rightness is maximizing expected value...

But if we wish to address the concerns of our fellows when we discuss the matter - and if we don't, we will not have much of an audience - we had better mean what they mean. We had better, that is, identify our subject via the folk theory of rightness, wrongness, goodness, badness, and so on. We need to identify rightness as the property that satisfies, or near enough satisfies, the folk theory of rightness - and likewise for the other moral properties. It is, thus, folk theory that will be our guide in identifying rightness, goodness, and so on.8

The meanings of moral terms, says Jackson, are given by their place in a network of platitudes ('clauses') from folk moral discourse:

The input clauses of folk morality tell us what kinds of situations described in descriptive, non-moral terms warrant what kinds of description in ethical terms: if an act is an intentional killing, then normally it is wrong; pain is bad; 'I cut, you choose' is a fair procedure; and so on.
The internal role clauses of folk morality articulate the interconnections between matters described in ethical, normative language: courageous people are more likely to do what is right than cowardly people; the best option is the right option; rights impose duties of respect; and so on.
The output clauses of folk morality take us from ethical judgements to facts about motivation and thus behaviour: the judgement that an act is right is normally accompanied by at least some desire to perform the act in question; the realization that an act would be dishonest typically dissuades an agent from performing it; properties that make something good are the properties we typically have some kind of pro-attitude towards, and so on.
Moral functionalism, then, is the view that the meanings of the moral terms are given by their place in this network of input, output, and internal clauses that makes up folk morality.9

And thus, Jackson tosses his lot into the definitions debate. Jackson supposes that we can pick out which platitudes of moral discourse matter, and how much they matter, for determining the meaning of moral terms - despite the fact that individual humans, and especially groups of humans, are themselves confused about the meanings of moral terms, and which platitudes of moral discourse should 'matter' in fixing their meaning.

This is a debate about definitions that will never end.

Austere Metaethics vs. Empathic Metaethics

In the next post, we'll dissolve standard moral debates the same way Albert and Barry should have dissolved their debate about sound.

But that is only the first step. We must not stop after sweeping away the confusions of mainstream moral philosophy and arriving at merely correct answers. We must stare directly into the heart of the problem and do the impossible.

Consider Alex, who wants to do the 'right' thing. But she doesn't know what 'right' means. Her question is: "How do I do what is right if I don't know exactly what 'right' means?"

The Austere Metaethicist might cross his arms and say:

Tell me what you mean by 'right', and I will tell you what is the right thing to do. If by 'right' you mean X, then Y is the right thing to do. If by 'right' you mean P, then Z is the right thing to do. But if you can't tell me what you mean by 'right', then you have failed to ask a coherent question, and no one can answer an incoherent question.

The Empathic Metaethicist takes up a greater burden. The Empathic Metaethicist says to Alex:

You may not know what you mean by 'right.' You haven't asked a coherent question. But let's not stop there. Here, let me come alongside you and help decode the cognitive algorithms that generated your question in the first place, and then we'll be able to answer your question. Then not only can we tell you what the right thing to do is, but we can also help bring your emotions into alignment with that truth... as you go on to (say) help save the world rather than being filled with pointless existential angst about the universe being made of math.

Austere metaethics is easy. Empathic metaethics is hard. But empathic metaethics is what needs to be done to answer Alex's question, and it's what needs to be done to build a Friendly AI. We'll get there in the next few posts.

Next post: Pluralistic Moral Reductionism

Previous post: What is Metaethics?

Notes

1 Eliezer advises against reading mainstream philosophy because he thinks it will "teach very bad habits of thought that will lead people to be unable to do real work." Conceptual analysis is, I think, exactly that: a very bad habit of thought that renders many people unable to do real work. Also: My thanks to Eliezer for his helpful comments on an early draft of this post.

2 For example: Jackson (1998), p. 28, has a different view of conceptual analysis: "conceptual analysis is the very business of addressing when and whether a story told in one vocabulary is made true by one told in some allegedly more fundamental vocabulary." For an overview of Jackson's kind of conceptual analysis, see here. Also, Alonzo Fyfe reminded me that those who interpret the law must do a kind of conceptual analysis. If a law has been passed declaring that vehicles are not allowed on playgrounds, a judge must figure out whether 'vehicle' includes or excludes rollerskates. More recent papers on conceptual analysis are available at Philpapers. Finally, read Chalmers on verbal disputes.

3 Braddon-Mitchell (2008). A famous example of the first kind lies at the heart of 20th century epistemology: the definition of 'knowledge.' Knowledge had long been defined as 'justified true belief', but then Gettier (1963) presented some hypothetical examples of justified true belief that many of us would intuitively not label as 'knowledge.' Philosophers launched a cottage industry around new definitions of 'knowledge' and new counterexamples to those definitions. Brian Weatherson called this the "analysis of knowledge merry-go-round." Tyrrell McAllister called it the 'Gettier rabbit-hole.'

4 Schroeder (2004), pp. 15-27. Schroeder lists them as 7 objections, but I count his 'trying without desiring' and 'intending without desiring' objections separately.

5 Tabooing one's words is similar to what Chalmers (2009) calls the 'method of elimination'. In an earlier post, Yudkowsky used what Chalmers (2009) calls the 'subscript gambit', except Yudkowsky used underscores instead of subscripts.

6 See also Gallie (1956).

7 Eliezer said that the closest thing to his metaethics from mainstream philosophy is Jackson's 'moral functionalism', but of course moral functionalism is not quite right.

8 Jackson (1998), p. 118.

9 Jackson (1998), pp. 130-131.

References

Braddon-Mitchell (2008). Naturalistic analysis and the a priori. In Braddon-Mitchell & Nola (eds.), Conceptual Analysis and Philosophical Naturalism (pp. 23-43). MIT Press.

Chalmers (2009). Verbal disputes. Unpublished.

Gallie (1956). Essentially contested concepts. Proceedings of the Aristotelian Society, 56: 167-198.

Gettier (1963). Is justified true belief knowledge? Analysis, 23: 121-123.

Jackson (1998). From Metaphysics to Ethics: A Defense of Conceptual Analysis. Oxford University Press.

Schroeder (2004). Three Faces of Desire. Oxford University Press.

481 comments

Comments sorted by top scores.

comment by Will_Newsome · 2011-05-16T07:02:10.834Z · LW(p) · GW(p)

It almost annoys me, but I feel compelled to vote this up. (I know groundbreaking philosophy is not yet your intended purpose but) I didn't learn anything, I remain worried that the sequence is going to get way too ambitious, and I remain confused about where it's ultimately headed. But the presentation is so good -- clear language, straightforward application of LW wisdom, excellent use of hyperlinks, high skimmability, linked references, flattery of my peer group -- that I feel I have to support the algorithm that generated it.

Replies from: lukeprog, lukeprog, cousin_it
comment by lukeprog · 2011-05-16T13:48:26.749Z · LW(p) · GW(p)

Most of your comment looks as though it could apply just as well to the most upvoted post on LW ever (edit: second-most-upvoted), and that's good enough for me. :)

There are indeed many LW regulars, and especially SI folk, who won't learn anything from several posts in this series. On the other hand, I think that these points haven't been made clear (about morality) anywhere else. I hope that when people (including LWers) start talking about morality with the usual conceptual-analysis assumptions, you can just link them here and dissolve the problem.

Also, it sounds like you agree with everything in this fairly long post. If so, yours is faint criticism indeed. :)

Replies from: FAWS
comment by FAWS · 2011-05-16T15:10:12.366Z · LW(p) · GW(p)

Most of your comment looks as though it could apply just as well to the most-upvoted post on LW ever,

*Second most upvoted post. I was a bit sad at first that Generalizing From One Example apparently wasn't the top post anymore, because I really liked it, and while I also liked Diseased Thinking I just didn't like it quite as much. But nope, not the case: Generalizing From One Example is still at the top. Though I do hope it will eventually be replaced by a post that fully deserves to displace it.

Replies from: lukeprog
comment by lukeprog · 2011-05-16T15:13:56.314Z · LW(p) · GW(p)

Oops, thanks for the correction. I had to pull from memory because the 'Top' link doesn't work in my browser (Chrome on Mac). It just lists an apparently random selection of posts.

Replies from: matt
comment by matt · 2011-05-18T03:59:58.465Z · LW(p) · GW(p)

Look for the date range ("Links from") in the sidebar - you want "All Time".
Yes, we're fixing the placement of this control in the redesign.

Replies from: lukeprog
comment by lukeprog · 2011-05-18T04:35:59.858Z · LW(p) · GW(p)

Hey, lookie there!

comment by lukeprog · 2011-05-26T21:39:23.830Z · LW(p) · GW(p)

This comment is for anyone who is confused about where the 'no-nonsense metaethics' sequence is going.

First, I had to write a bunch of prerequisites. More prerequisites are upcoming:
Intuitions and Philosophy
The Neuroscience of Desire
The Neuroscience of Pleasure
Inferring Our Desires
Heading Toward: No-Nonsense Metaethics
What is Metaethics?

Stage One of the sequence intends to solve or dissolve many of the central problems of mainstream metaethics. Stage one includes this post and a few others to come later. This is my solution to "much of metaethics" promised earlier. The "much of" refers to mainstream metaethics, not to Yudkowskian metaethics.

Stage Two of the sequence intends to catch everybody up with the progress on Yudkowskian metaethics that has been made by a few particular brains (mostly at SI) in the last few years but hasn't been written down anywhere yet.

Stage Three of the sequence intends to state the open problems of Yudkowskian metaethics as clearly as possible so that rationalists can make incremental progress on them, à la Gowers' Polymath Project or Hilbert's problems. (Unfortunately, problems in metaethics are not as clearly defined as problems in math.)

comment by cousin_it · 2011-05-16T08:07:59.816Z · LW(p) · GW(p)

Same here.

comment by nhamann · 2011-05-16T08:24:24.559Z · LW(p) · GW(p)

Looking back at your posts in this sequence so far, it seems like it's taken you four posts to say "Philosophers are confused about meta-ethics, often because they spend a lot of time disputing definitions." I guess they've been well-sourced, which is worth something. But it seems like we're still waiting on substantial new insights about metaethics, sadly.

Replies from: Will_Newsome, None, lukeprog
comment by Will_Newsome · 2011-05-16T10:03:04.994Z · LW(p) · GW(p)

I admit it's not very fun for LW regulars, but a few relatively short and simple posts is probably the bare minimum you can get away with while still potentially appealing to bright philosopher or academic types, who will be way more hesitant than your typical contrarian to dismiss an entire field of philosophy as not even wrong. I think Luke's doing a decent job of making his posts just barely accessible/interesting to a very wide audience.

comment by [deleted] · 2011-05-16T09:25:45.138Z · LW(p) · GW(p)

it seems like it's taken you four posts to say "Philosophers are confused about meta-ethics, often because they spend a lot of time disputing definitions."

No, he said quite a lot more. E.g. why philosophers do that, why it is a bad thing, and what to do about it if we don't want to fall into the same trap. This is all necessary groundwork for his final argument.

If the state of metaethics were such that most people would already agree on these fundamentals then you would have a point, but lukeprog's premise is that it's not.

comment by lukeprog · 2011-05-16T14:14:48.884Z · LW(p) · GW(p)

Seeing as lots of people seemed to benefit even from the 'What is Metaethics' post, I'm not too worried that LW regulars won't learn much from a few of the posts in this series. If you already grok 'Austere Metaethics', then you'll have to wait a few posts for things to get interesting. :)

comment by Oscar_Cunningham · 2011-05-16T11:02:22.371Z · LW(p) · GW(p)

An interesting phenomenon I've noticed recently is that sometimes words do have short exact definitions that exactly coincide with common usage and intuition. For example, after Gettier scenarios ruined the definition of knowledge as "Justified true belief", philosophers found a new definition:

"A belief in X is knowledge if one would always have that belief whenever X, and never have it whenever not-X".

(where "always" and "never" are defined to be some appropriate significance level)

Now it seems to me that this definition completely nails it. There's not one scenario I can find where this definition doesn't return the correct answer. (EDIT: Wrong! See great-grandchild by Tyrrell McAllister) I now feel very silly for saying things like "'Knowledge' is a fuzzy concept, hard to carve out of thingspace, there's is always going to be some scenario that breaks your definition." It turns out that it had a nice definition all along.

It seems like there is a reason why words tend to have short definitions: the brain can only run short algorithms to determine whether an instance falls into the category or not. All you've got to do to write the definition is to find this algorithm.

Replies from: Eliezer_Yudkowsky, CuSithBell, DuncanS
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-16T11:13:19.897Z · LW(p) · GW(p)

Yep. Another case in point of the danger of replying, "Tell me how you define X, and I'll tell you the answer" is Parfit in Reasons and Persons concluding that whether or not an atom-by-atom duplicate constructed from you is "you" depends on how you define "you". Actually it turns out that there is a definite answer and the answer is knowably yes, because everything Parfit reasoned about "indexical identity" is sheer physical nonsense in a world built on configurations and amplitudes instead of Newtonian billiard balls.

PS: Very Tarskian and Bayesian of them, but are you sure they didn't say, "A belief in X is knowledge if one would never have it whenever not-X"?

Replies from: Oscar_Cunningham, lukeprog
comment by Oscar_Cunningham · 2011-05-16T11:56:23.897Z · LW(p) · GW(p)

PS: Very Tarskian and Bayesian of them, but are you sure they didn't say, "A belief in X is knowledge if one would never have it whenever not-X"?

I'm thinking of Robert Nozick's definition. He states his definition thus:

  1. P is true
  2. S believes that P
  3. If it were the case that (not-P), S would not believe that P
  4. If it were the case that P, S would believe that P

(I failed to remember condition 1, since 2 & 3 => 1 anyway)

Replies from: Tyrrell_McAllister, Will_Sawin, Eliezer_Yudkowsky
comment by Tyrrell_McAllister · 2011-05-16T13:18:16.282Z · LW(p) · GW(p)

I'm thinking of Robert Nozick's definition. He states his definition thus:

  1. P is true
  2. S believes that P
  3. If it were the case that (not-P), S would not believe that P
  4. If it were the case that P, S would believe that P

There is a reason why the Gettier rabbit-hole is so dangerous. You can always cook up an improbable counterexample to any definition.

For example, here is a counterexample to Nozick's definition as you present it. Suppose that I have irrationally decided to believe everything written in a certain book B and to believe nothing not written in B. Unfortunately for me, the book's author, a Mr. X, is a congenital liar. He invented almost every claim in the book out of whole cloth, with no regard for the truth of the matter. There was only one exception. There is one matter on which Mr. X is constitutionally compelled to write and to write truthfully: the color of his mother's socks on the day of his birth. At one point in B, Mr. X writes that his mother was wearing blue socks when she gave birth to him. This claim was scrupulously researched and is true. However, there is nothing in the text of B to indicate that Mr. X treated this claim any differently from all the invented claims in the book.

In this story, I am S, and P is "Mr. X's mother was wearing blue socks when she gave birth to him." Then:

  1. P is true. (Mr. X's mother really was wearing blue socks.)

  2. S believes that P. (Mr. X claimed P in B, and I believe everything in B.)

  3. If it were the case that (not-P), S would not believe that P. (Mr. X only claimed P in B because that was what his scrupulous research revealed. Had P not been true, Mr. X's research would not have led him to believe it. And, since he is incapable of lying about this matter, he would not have put P in B. Therefore, since I don't believe anything not in B, I would not have come to believe P.)

  4. If it were the case that P, S would believe that P. (Mr. X was constitutionally compelled to write truthfully about the color of his mother's socks when he was born. In all possible worlds in which his mother wore blue socks, Mr. X's scrupulous research would have discovered it, and Mr. X would have reported it in B, where I would have read it, and so believed it.)

And yet, the intuitions on which Gettier problems play would say that I don't know P. I just believe P because it was in a certain book, but I have no rational reason to trust anything in that book.


ETA: And here's a counterexample from the other direction — that is, an example of knowledge that fails to meet Nozick's criteria.

Suppose that you sit before an upside-down cup, under which there is a ping-pong ball that has been painted some color. Your job is to learn the color of the ping-pong ball.

You employ the following strategy: You flip a coin. If the coin comes up heads, you lift up the cup and look at the ping-pong ball, noting its color. If the coin comes up tails, you just give up and go with the ignorance prior.

Suppose that, when you flip the coin, it comes up heads. Accordingly, you look at the ping-pong ball and see that it is red. Intuitively, we would say that you know that the ping-pong ball is red.

Nonetheless, we fail to meet Nozick's criterion 4. Had the coin come up tails, you would not have lifted the cup, so you would not have come to believe that the ball is red, even if this were still true.
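
For concreteness, here is a minimal possible-worlds sketch of Nozick's four conditions, assuming a crude "in all nearby worlds" reading of the two counterfactuals (Nozick's actual semantics uses a closeness ordering, so this is only a rough approximation, and the world names and belief function below are invented just to model the ping-pong case):

    # Minimal sketch: check Nozick's tracking conditions over a toy set of
    # possible worlds. Reading the counterfactuals as "in all nearby worlds"
    # is a simplifying assumption, not Nozick's own semantics.
    def tracks_truth(worlds, p_holds, believes_p, actual):
        cond1 = p_holds(actual)                # 1. P is true
        cond2 = believes_p(actual)             # 2. S believes that P
        cond3 = all(not believes_p(w)          # 3. were P false, S would not believe P
                    for w in worlds if not p_holds(w))
        cond4 = all(believes_p(w)              # 4. were P true, S would believe P
                    for w in worlds if p_holds(w))
        return cond1 and cond2 and cond3 and cond4

    # Toy model of the ping-pong-ball case: worlds differ in the coin flip and
    # the ball's color; S comes to believe "the ball is red" only after looking.
    worlds = ["heads_red", "heads_blue", "tails_red", "tails_blue"]
    ball_is_red = lambda w: w.endswith("red")
    believes_red = lambda w: w == "heads_red"  # S looks only if the coin lands heads

    print(tracks_truth(worlds, ball_is_red, believes_red, actual="heads_red"))
    # False: condition 4 fails at "tails_red", even though conditions 1-3 hold.

On this toy model the verdict matches the intuition above: the belief looks like knowledge by ordinary lights, yet criterion 4 rejects it.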

Replies from: Oscar_Cunningham, lukeprog, AnlamK, None, nshepperd
comment by Oscar_Cunningham · 2011-05-16T14:16:51.442Z · LW(p) · GW(p)

Wham! Okay, I've reverted to my old position. "Knowledge" is a fuzzy word.

ETA: Or at least a position of uncertainty. I need to research how counterfactuals work.

comment by lukeprog · 2011-05-16T13:51:10.248Z · LW(p) · GW(p)

Yes. An excellent illustration of 'the Gettier rabbit-hole.'

Replies from: IlyaShpitser, Tyrrell_McAllister
comment by IlyaShpitser · 2011-05-17T13:25:53.196Z · LW(p) · GW(p)

There is an entire chapter in Pearl's Causality book devoted to the rabbit-hole of defining what 'actual cause' means. (Note: the definition given there doesn't work, and there is a substantial literature discussing why and proposing fixes).

The counterargument to your post is that some seemingly fuzzy concepts actually have perfect intuitive consensus (e.g. almost everyone will classify any example as either concept X or not concept X the same way). This seems to be the case with 'actual cause.' As long as intuitive consensus continues to hold, the argument goes, there is hope of a concise logical description of it.

Replies from: Tyrrell_McAllister, lukeprog
comment by Tyrrell_McAllister · 2011-05-17T23:21:51.771Z · LW(p) · GW(p)

As long as intuitive consensus continues to hold, the argument goes, there is hope of a concise logical description of it.

Maybe the concept of "infinity" is a sort of success story. People said all sorts of confused and incompatible things about infinity for millennia. Then finally Cantor found a way to work with it sensibly. His approach proved to be robust enough to survive essentially unchanged even after the abandonment of naive set theory.

But even that isn't an example of philosophers solving a problem with conceptual analysis in the sense of the OP.

comment by lukeprog · 2011-05-17T13:29:37.592Z · LW(p) · GW(p)

Thanks for the Causality heads-up.

some seemingly fuzzy concepts actually have perfect intuitive consensus (e.g. almost everyone will classify any example as either concept X or not concept X the same way)

Can you name an example or two?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2011-05-17T17:32:12.044Z · LW(p) · GW(p)

Well, as I said, 'actual cause' appears to be one example. The literature is full of little causal stories where most people agree that something is an actual cause of something else in the story -- or not. Concepts which have already been formalized include concepts which are both used colloquially in "everyday conversation" and precisely in physics (e.g. weight/mass).

One could argue that 'actual cause' is in some sense not a natural concept, but it's still useful in the sense that formalizing the algorithm humans use to decide 'actual cause' problems can be useful for automating certain kinds of legal reasoning.

The Cyc project is a (probably doomed) example of a rabbit-hole project to construct an ontology of common sense. Lenat has been in that rabbit-hole for 27 years now.

comment by Tyrrell_McAllister · 2011-05-16T14:10:13.025Z · LW(p) · GW(p)

Now, if only someone would give me a hand out of this rabbit-hole before I spend all morning in here ;).

Replies from: komponisto, lukeprog
comment by komponisto · 2011-05-16T15:11:30.934Z · LW(p) · GW(p)

Well, of course Bayesianism is your friend here. Probability theory elegantly supersedes the qualitative concepts of "knowledge", "belief" and "justification" and, together with an understanding of heuristics and biases, nicely dissolves Gettier problems, so that we can safely call "knowledge" any assignment of high probability to a proposition that turns out to be true.

For example, take the original Gettier scenario. Since Jones has 10 coins in his pocket, P(man with 10 coins gets job) is bounded from below by P(Jones gets job). Hence any information that raises P(Jones gets job) necessarily raises P(man with 10 coins gets job) to something even higher, regardless of whether (Jones gets job) turns out to be true.

The psychological difficulty here is the counterintuitiveness of the rule P(A or B) >= P(A), and is in a sense "dual" to the conjunction fallacy. Just as one has to remember to subtract probability as burdensome details are introduced, one also has to remember to add probability as the reference class is broadened. When Smith learns the information suggesting Jones is the favored candidate, it may not feel like he is learning information about the set of all people with 10 coins in their pocket, but he is.
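
A toy numerical version of that point, with made-up probabilities (nothing below comes from Gettier's paper, and since only one person gets the job, the two events are treated as mutually exclusive):

    # Invented numbers, purely for illustration.
    def p_ten_coin_winner(p_jones_gets_job, p_other_ten_coin_candidate=0.02):
        # "A man with 10 coins gets the job" is a disjunction of disjoint events,
        # so its probability is at least P(Jones gets the job).
        return p_jones_gets_job + p_other_ten_coin_candidate

    before = p_ten_coin_winner(0.5)  # before the evidence favoring Jones
    after = p_ten_coin_winner(0.9)   # after Smith learns Jones is the favored candidate

    assert after >= 0.9              # P(A or B) >= P(A)
    print(before, after)             # 0.52 0.92

Raising P(Jones gets the job) can only raise, never lower, the probability of the broader reference class.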

In your example of the book by Mr. X, we can observe that, because Mr. X was constitutionally compelled to write truthfully about his mother's socks, your belief about that is legitimately entangled with reality, even if your other beliefs aren't.

Replies from: Tyrrell_McAllister, Oscar_Cunningham
comment by Tyrrell_McAllister · 2011-05-16T16:08:19.144Z · LW(p) · GW(p)

Well, of course Bayesianism is your friend here. Probability theory elegantly supersedes the qualitative concepts of "knowledge", "belief" and "justification" and, together with an understanding of heuristics and biases, nicely dissolves Gettier problems, so that we can safely call "knowledge" any assignment of high probability to a proposition that turns out to be true.

I agree that, with regard to my own knowledge, I should just determine the probability that I assign to a proposition P. Once I conclude that P has a high probability of being true, why should I care whether, in addition, I "know" P in some sense?

Nonetheless, if I had to develop a coherent concept of "knowledge", I don't think that I'd go with "'knowledge' [is] any assignment of high probability to a proposition that turns out to be true." The crucial question is, who is assigning the probability? If it's my assignment, then, as I said, I agree that, for me, the question about knowledge dissolves. (More generally, the question dissolves if the assignment was made according to my prior and my cognitive strategies.)

But Gettier problems are usually about some third person's knowledge. When do you say that they know something? Suppose that, by your lights, they have a hopelessly screwed-up prior — say, an anti-Laplacian prior. So, they assign high probability to all sorts of stupid things for no good reason. Nonetheless, they have enough beliefs so that there are some things to which they assign high probability that turn out to be true. Would you really want to say that they "know" those things that just happen to be true?

That is essentially what was going on in my example with Mr. X's book. There, I'm the third person. I have the stupid prior that says that everything in B is true and everything not in B is false. Now, you know that Mr. X is constitutionally compelled to write truthfully about his mother's socks. So you know that reading B will legitimately entangle my beliefs with reality on that one solitary subject. But I don't know that fact about Mr. X. I just believe everything in B. You know that my cognitive strategy will give me reliable knowledge on this one subject. But, intuitively, my epistemic state seems so screwed up that you shouldn't say that I know anything, even though I got this one thing right.


ETA: Gah. This is what I meant by "down the rabbit-hole". These kinds of conversations are just too fun :). I look forward to your reply, but it will be at least a day before I reply in turn.


ETA: Okay, just one more thing. I just wanted to say that I agree with your approach to the original Gettier problem with the coins.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-05-16T17:45:26.908Z · LW(p) · GW(p)

I have the stupid prior that says that everything in B is true and everything not in B is false. Now, you know that Mr. X is constitutionally compelled to write truthfully about his mother's socks. So you know that reading B will legitimately entangle my beliefs with reality on that one solitary subject. But I don't know that fact about Mr. X. I just believe everything in B. You know that my cognitive strategy will give me reliable knowledge on this one subject.

If you want to set your standard for knowledge this high, I would argue that you're claiming nothing counts as knowledge since no one has any way to tell how good their priors are independently of their priors.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2011-05-16T18:12:48.599Z · LW(p) · GW(p)

If you want to set your standard for knowledge this high ...

I'm not sure what you mean by a "standard for knowledge". What standard for knowledge do you think that I have proposed?

I would argue that you're claiming nothing counts as knowledge since no one has any way to tell how good their priors are independently of their priors.

You're talking about someone trying to determine whether their own beliefs count as knowledge. I already said that the question of "knowledge" dissolves in that case. All that they should care about are the probabilities that they assign to propositions. (I'm not sure whether you agree with me there or not.)

But you certainly can evaluate someone else's prior. I was trying to explain why "knowledge" becomes problematic in that situation. Do you disagree?

comment by Oscar_Cunningham · 2011-05-16T16:06:32.388Z · LW(p) · GW(p)

I think that while what you define carves out a nice lump of thingspace, it fails to capture the intuitive meaning of the word 'knowledge'. If I guess randomly that it will rain tomorrow and turn out to be right, then it doesn't fit intuition at all to say I knew that it would rain. This is why the traditional definition is "justified true belief" and that is what Gettier subverts.

You presumably already know all this. The point is that Tyrrell McAllister is trying (to avoid trying) to give a concise summary of the common usage of the word knowledge, rather than to give a definition that is actually useful for doing probability or solving problems.

comment by lukeprog · 2011-05-16T14:17:37.529Z · LW(p) · GW(p)

Here, let me introduce you to my friend Taboo...

;)

comment by AnlamK · 2011-05-17T21:19:52.765Z · LW(p) · GW(p)

There is a reason why the Gettier rabbit-hole is so dangerous. You can always cook up an improbable counterexample to any definition.

That's a very interesting thought. I wonder what leads you to it.

With the caveat that I have not read all of this thread:

  • Are you basing this on the fact that so far, all attempts at analysis have proven futile? (If so, maybe we need to come up with more robust conditions.)

  • Do you think that the concept of 'knowledge' is inherently vague, similar (but not identical) to the way terms like 'tall' and 'bald' are?

  • Do you suspect that there may be no fact of the matter about what 'knowledge' is, just like there is no fact of the matter about the baldness of the present King of France? (If so, then how do competent speakers apply the verb 'to know' so well?)

If we could say with confidence that conceptual analysis of knowledge is a futile effort, I think that would be progress. And of course the interesting question would be why.

It may just be simply that non-technical, common terms like 'vehicle' and 'knowledge' (and of course others like 'table') can't be conceptually analyzed.

Also, experimental philosophy could be relevant to this discussion.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2011-05-17T21:55:07.192Z · LW(p) · GW(p)

There is a reason why the Gettier rabbit-hole is so dangerous. You can always cook up an improbable counterexample to any definition.

That's a very interesting thought. I wonder what leads you to it.

Let me expand on my comment a little: Thinking about the Gettier problem is dangerous in the same sense in which looking for a direct proof of the Goldbach conjecture is dangerous. These two activities share the following features:

  • When the problem was first posed, it was definitely worth looking for solutions. One could reasonably hope for success. (It would have been pretty nice if someone had found a solution to the Gettier problem within a year of its being posed.)

  • Now that the problem has been worked on for a long time by very smart people, you should assign very low probability to your own efforts succeeding.

  • Working on the problem can be addictive to certain kinds of people, in the sense that they will feel a strong urge to sink far more work into the problem than their probability of success can justify.

  • Despite the low probability of success for any given seeker, it's still good that there are a few people out there pursuing a solution.

  • But the rest of us should spend our time on other things, aside from the occasional recreational jab at the problem, perhaps.

  • Besides, any resolution of the problem will probably result from powerful techniques arising in some unforeseen quarter. A direct frontal assault will probably not solve the problem.

So, when I called the Gettier problem "dangerous", I just meant that, for most people, it doesn't make sense to spend much time on it, because they will almost certainly fail, but some of us (including me) might find it too strong a temptation to resist.

  • Are you basing this on the fact that so far, all attempts at analysis have proven futile? (If so, maybe we need to come up with more robust conditions.)

  • Do you think that the concept of 'knowledge' is inherently vague, similar (but not identical) to the way terms like 'tall' and 'bald' are?

  • Do you suspect that there may be no fact of the matter about what 'knowledge' is, just like there is no fact of the matter about the baldness of the present King of France? (If so, then how do competent speakers apply the verb 'to know' so well?)

Contemporary English-speakers must be implementing some finite algorithm when they decide whether their intuitions are happy with a claim of the form "Agent X knows Y". If someone wrote down that algorithm, I suppose that you could call it a solution to the Gettier problem. But I expect that the algorithm, as written, would look to us like a description of some inscrutably complex neurological process. It would not look like a piece of 20th century analytic philosophy.

On the other hand, I'm fairly confident that some piece of philosophy text could dissolve the problem. In short, we may be persuaded to abandon the intuitions that lie at the root of the Gettier problem. We may decide to stop trying to use those intuitions to guide what we say about epistemic agents.

comment by [deleted] · 2011-05-16T17:36:39.585Z · LW(p) · GW(p)

Both of your Gettier scenarios appear to confirm Nozick's criteria 3 and 4 when the criteria are understood as criteria for a belief-creation strategy to be considered a knowledge-creation strategy applicable to a context outside of the contrived scenario. Taking your scenarios one by one:

Suppose that I have irrationally decided to believe everything written in a certain book B and to believe nothing not written in B. Unfortunately for me, the book's author, a Mr. X, is a congenital liar.

You have described the strategy of believing everything written in a certain book B. This strategy fails to conform to Nozick's criteria 3 and 4 when considered outside of the contrived scenario in which the author is compelled to tell the truth about the socks, and therefore (if we apply the criteria) is not a knowledge creation strategy.

You employ the following strategy: You flip a coin. If the coin comes up heads, you lift up the cup and look at the ping-pong ball, noting its color. If the coin comes up tails, you just give up and go with the ignorance prior.

There are actually two strategies described here, and one of them is followed conditional on events occurring in the implementation of the other. The outer strategy is to flip the coin to decide whether to look at the ball. The inner strategy is to look at the ball. The inner strategy conforms to Nozick's criteria 3 and 4, and therefore (if we apply the criteria) is a knowledge creation strategy.

In both cases, the intuitive results you describe appear to conform to Nozick's criteria 3 and 4 understood as described in the first paragraph. Nozick's criteria 3 and 4 (understood as above) appear moreover to play a key role in making sense of our intuitive judgment in both the scenarios. That is, it strikes me as intuitive that the reason we don't count the belief about the socks as knowledge is that it is the fruit of a strategy which, as a general strategy, appears to us to violate criteria 3 and 4 wildly, and only happens to satisfy them in a particular highly contrived context. And similarly, it strikes me as intuitive that we accept the belief about the color as knowledge because we are confident that the method of looking at the ball is a method which strongly satisfies criteria 3 and 4.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2011-05-16T18:24:57.225Z · LW(p) · GW(p)

This strategy fails to conform to Nozick's criteria 3 and 4 when considered outside of the contrived scenario in which the author is compelled to tell the truth about the socks, and therefore (if we apply the criteria) is not a knowledge creation strategy.

The problem with conversations about definitions is that we want our definitions to work perfectly even in the least convenient possible world.

So imagine that, as a third-person observer, you know enough to see that the scenario is not highly contrived — that it is in fact a logical consequence of some relatively simple assumptions about the nature of reality. Suppose that, for you, the whole scenario is in fact highly probable.

On second thought, don't imagine that. For that is exactly the train of thought that leads to wasting time on thinking about the Gettier problem ;).

Replies from: None
comment by [deleted] · 2011-05-16T19:08:58.368Z · LW(p) · GW(p)

So imagine that, as a third-person observer, you know enough to see that the scenario is not highly contrived — that it is in fact a logical consequence of some relatively simple assumptions about the nature of reality. Suppose that, for you, the whole scenario is in fact highly probable.

A large part of what was highly contrived was your selection of a particular true, honest, well-researched sentence in a book otherwise filled with lies, precisely because it is so unusual. In order to make it not contrived, we must suppose something like: the book has no lies, the book is all truth. Or we might even need to suppose that every sentence in every book is the truth. In such a world, the contrivedness of the selection of a true sentence is minimized.

So let us imagine ourselves into a world in which every sentence in every book is true. And now we imagine someone who selects a book and believes everything in it. In this world, this strategy, generalized (to pick a random book and believe everything in it) becomes a reliable way to generate true belief. In such a world, I think it would be arguable to call such a strategy a genuine knowledge-creation strategy. In any case, it would depart so radically from your scenario (since in your scenario everything in the book other than that one fact is a lie) that it's not at all clear how it would relate to your scenario.

Replies from: Tyrrell_McAllister, CuSithBell
comment by Tyrrell_McAllister · 2011-05-16T19:25:01.923Z · LW(p) · GW(p)

I'm not sure that I'm seeing your point. Are you saying that

  • One shouldn't waste time on trying to concoct exceptionless definitions — "exceptionless" in the sense that they fit our intuitions in every single conceivable scenario. In particular, we shouldn't worry about "contrived" scenarios. If a definition works in the non-contrived cases, that's good enough.

... or are you saying that

  • Nozick's definition really is exceptionless. In every conceivable scenario, and for every single proposition P, every instance of someone "knowing" that P would conform to every one of Nozick's criteria (and conversely).

... or are you saying something else?

Replies from: None
comment by [deleted] · 2011-05-16T19:38:25.050Z · LW(p) · GW(p)

Nozick apparently intended his definition to apply to single beliefs. I applied it to belief-creating strategies (or procedures, methods, mechanisms) rather than to individual beliefs. These strategies are to be evaluated in terms of their overall results if applied widely. Then I noticed that your two Gettier scenarios involved strategies which, respectively, violated and conformed to the definition as I applied it.

That's all. I am not drawing conclusions (yet).

Replies from: Jiro
comment by Jiro · 2014-03-14T18:13:30.187Z · LW(p) · GW(p)

I'm reminded of the Golden Rule. Since I would like it if everyone would execute "if (I am Jiro) then rob", I should execute that as well.

It's actually pretty hard to define what it means for a strategy to be exceptionless, and it may be subject to a grue/bleen paradox.

comment by CuSithBell · 2011-05-16T19:22:40.054Z · LW(p) · GW(p)

I thought it sounded contrived at first, but then remembered there are tons of people who pick a book and believe everything they read in it, reaching many false conclusions and a few true ones.

comment by nshepperd · 2011-05-16T17:15:57.185Z · LW(p) · GW(p)

I always thought the "if it were the case" thing was just a way of sweeping the knowledge problem under the rug by restricting counterexamples to "plausible" things that "would happen". It gives the appearance of a definition of knowledge, while simply moving the problem into the "plausibility" box (which you need to use your knowledge to evaluate).

I'm not sure it's useful to try to define a binary account of knowledge anyway though. People just don't work like that.

comment by Will_Sawin · 2011-05-16T16:11:54.192Z · LW(p) · GW(p)

A different objection, following Eliezer's PS, is that:

Between me and a red box, there is a wall with a hole. I see the red box through the hole, and therefore know that the box is red. I reason, however, that I might have instead chosen to sit somewhere else, and I would not have been able to see the red box through the hole, and would not believe that the box is red.

Or more formally: If I know P, then I know (P or Q) for all Q, but:

P => Believes (P)

does not imply

(P v Q) => Believes (P v Q)

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2011-05-16T16:35:54.943Z · LW(p) · GW(p)

Between me and a red box, there is a wall with a hole. I see the red box through the hole, and therefore know that the box is red. I reason, however, that I might have instead chosen to sit somewhere else, and I would not have been able to see the red box through the hole, and would not believe that the box is red.

This is a more realistic, and hence better, version of the counterexample that I gave in my ETA to this comment.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-16T21:57:35.934Z · LW(p) · GW(p)

(3) If it were the case that (not-P), S would not believe that P

(4) If it were the case that P, S would believe that P

I'm genuinely surprised. Condition 4 seems blatantly unnecessary and I had thought analytic philosophers (and Nozick in particular) more competent than that. Am I missing something?

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2011-05-17T00:07:58.426Z · LW(p) · GW(p)

Your hunch is right. Starting on page 179 of Nozick's Philosophical Explanations, he addresses counterexamples like the one that Will Sawin proposed. In response, he gives a modified version of his criteria. As near as I can tell, my first counterexample still breaks it, though.

comment by lukeprog · 2011-05-16T13:55:44.425Z · LW(p) · GW(p)

Yes. In the next post, I'll be naming some definitions for moral terms that should be thrown out, for example those which rest on false assumptions about reality (e.g. "God exists.")

comment by CuSithBell · 2011-05-16T21:27:36.016Z · LW(p) · GW(p)

It seems like there is a reason why words tend to have short definitions: the brain can only run short algorithms to determine whether an instance falls into the category or not.

I don't think the brain usually makes this determination by looking at things that are much like definitions.

comment by DuncanS · 2011-05-16T21:24:06.330Z · LW(p) · GW(p)

I think this isn't the usual sense of 'knowledge'. It's too definite. Do I know there's a website called Less Wrong, for example? Not for sure. It might have ceased to exist while I'm typing this - I have no present confirmation. And of course any confirmation only lasts as long as you look at it.

Knowledge is that state where one can make predictions about a subject which are better than chance. Of course this definition has its own flaws, doubtless....

comment by Goobahman · 2011-05-18T03:57:30.963Z · LW(p) · GW(p)

Hey Luke,

Thanks again for your work. You are by far the greatest online teacher I've ever come across (though I've never seen you teach face-to-face). You are concise, clear, direct, empathetic, extremely thorough, tactful and accessible. I am in awe of your abilities. You take the fruit that is at the top of the tree and gently place it into my straining arms! Sorry for the exuberant worship, but I really want to express my gratitude for your efforts. They definitely aren't wasted on me.

comment by BobTheBob · 2011-05-22T18:09:06.805Z · LW(p) · GW(p)

Some thoughts on this and related LW discussions. They come a bit late - apols to you and commentators if they've already been addressed or made in the commentary:

1) Definitions (this is a biggie).

There is a fair bit of confusion on LW, it seems to me, about just what definitions are and what their relevance is to philosophical and other discussion. Here's my understanding - please say if you think I've gone wrong.

If in the course of philosophical discussion, I explicitly define a familiar term, my aim in doing so is to remove the term from debate - I fix the value of a variable to restrict the problem. It'd be good to find a real example here, but I'm not convinced defining terms happens very often in philosophical or other debate. By way of a contrived example, one might want to consider, in evaluating some theory, the moral implications of actions made under duress (a gun held to the head) but not physically initiated by an external agent (a jostle to the arm). One might say, "Define 'coerced action' to mean any action not physically initiated but made under duress" (or more precise words to that effect). This done, it wouldn't make sense simply to object that my conclusion regarding coerced actions doesn't apply to someone physically pushed from behind - I have stipulated for the sake of argument that I'm not talking about such cases. (In this post, you distinguish stipulation and definition - do you have in mind a distinction I'm glossing over?)

Contrast this to the usual case for conceptual analyses, where it's assumed there's a shared concept ('good', 'right', 'possible', 'knows', etc), and what is produced is meant to be a set of necessary and sufficient conditions meant to capture the concept. Such an analysis is not a definition. Regarding such analyses, typically one can point to a particular thing and say, eg, "Our shared concept includes this specimen, it lacks a necessary condition, therefore your analysis is mistaken" - or, maybe "Intuitively, this specimen falls under our concept, it lacks...". Such a response works only if there is broad agreement that the specimen falls under the concept. Usually this works out to be the case.

I haven't read the Jackson book, so please do correct me if you think I've misunderstood, but I take it something like this is his point in the paragraphs you quote. Tom and Jack can define 'right action' to mean whatever they want it to. In so doing, however, we cease to have any reason to think they mean by the term what we intuitively do. Rather, Jackson is observing, what Tom and Jack should be doing is saying that rightness is that thing (whatever exactly it is) which our folk concepts roughly converge on, and taking up the task of refining our understanding from there - no defining involved.

You say,

... Jackson supposes that we can pick out which platitudes of moral discourse matter, and how much they matter, for determining the meaning of moral terms

Well, not quite. The point I take it is rather that there simply are 'folk' platitudes which pick out the meanings of moral terms - this is the starting point. 'Killing people for fun is wrong', 'Helping elderly ladies across the street is right', etc., etc. These are the data (moral intuitions, as usually understood). If this isn't the case, there isn't even a subject to discuss. Either way, it has nothing to do with definitions.

Confusion about definitions is evident in the quote from the post you link to. To re-quote:

...the first person is speaking as if 'sound' means acoustic vibrations in the air; the second person is speaking as if 'sound' means an auditory experience in a brain. If you ask "Are there acoustic vibrations?" or "Are there auditory experiences?", the answer is at once obvious. And so the argument is really about the definition of the word 'sound'.

Possibly the problem is that 'sound' has two meanings, and the disputants each are failing to see that the other means something different. Definitions are not relevant here, meanings are. (Gratuitous digression: what is "an auditory experience in a brain"? If this means something entirely characterizable in terms of neural events, end of story, then plausibly one of the disputants would say this does not capture what he means by 'sound' - what he means is subjective and ineffable, something neural events aren't. He might go on to wonder whether that subjective, ineffable thing, given that it is apparently created by the supposedly mind-independent event of the falling of a tree, has any existence apart from his self (not to be confused with his brain!). I'm not defending this view, just saying that what's offered is not a response but rather a simple begging of the question against it. End of digression.)

2) In your opening section you produce an example meant to show conceptual analysis is silly. Looks to me more like a silly attempt at an example of conceptual analysis. If you really want to make your case, why not take a real example of a philosophical argument -preferably one widely held in high regard at least by philosophers? There's lots of 'em around.

3) In your section The trouble with conceptual analysis, you finally explain,

The trouble is that philosophers often take this "what we mean by" question so seriously that thousands of pages of debate concern which definition to use... .

As explained above, philosophical discussion is not about "which definition to use" -it's about (roughly, and among other things) clarifying our concepts. The task is difficult but worthwhile because the concepts in question are important but subtle.

Within 20 seconds of arguing about the definition of 'desire', someone will say, "Screw it. Taboo 'desire' so we can argue about facts and anticipations, not definitions."

If you don't have the patience to do philosophy, or you don't think it's of any value, by all means do something else -argue about facts and anticipations, whatever precisely that may involve. Just don't think that in doing this latter thing you'll address the question philosophy is interested in, or that you've said anything at all so far to show philosophy isn't worth doing. In this connection, one of the real benefits of doing philosophy is that it encourages precision and attention to detail in thinking. You say Eliezer Yudkowsky "...advises against reading mainstream philosophy because he thinks it will 'teach very bad habits of thought that will lead people to be unable to do real work.'" The original quote continues, "...assume naturalism! Move on! NEXT!" Unfortunately Eliezer has a bad habit of making unclear and undefended or question-begging assertions, and this is one of them. What are the bad habits, and how does philosophy encourage them? And what precisely is meant by 'naturalism'? To make the latter assertion and simultaneously to eschew the responsibility of articulating what this commits you to is to presume you can both have your cake and eat it too. This may work in blog posts -it wouldn't pass in serious discussion.

(Unlike some on this blog, I have not slavishly pored through Eliezer's every post. If there is somewhere a serious discussion of the meaning of 'naturalism' which shows how the usual problems with normative concepts like 'rational' can successfully be navigated, I will withdraw this remark).

Replies from: Amanojack, lukeprog, Eugine_Nier
comment by Amanojack · 2011-05-22T18:24:07.994Z · LW(p) · GW(p)

Within 20 seconds of arguing about the definition of 'desire', someone will say, "Screw it. Taboo 'desire' so we can argue about facts and anticipations, not definitions."

If you don't have the patience to do philosophy, or you don't think it's of any value, by all means do something else -argue about facts and anticipations, whatever precisely that may involve. Just don't think that in doing this latter thing you'll address the question philosophy is interested in, or that you've said anything at all so far to show philosophy isn't worth doing.

You're tacitly defining philosophy as an endeavor that "doesn't involve facts or anticipations," that is, as something not worth doing in the most literal sense. Such "philosophy" would be a field defined to be useless for guiding one's actions. Anything that is useless for guiding my actions is, well, useless.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-22T18:36:17.865Z · LW(p) · GW(p)

The question of what is worth doing is of course profoundly philosophical. You have just assumed an answer: that what is worth doing is achieving your aims efficiently, and what is not worth doing is thinking about whether you have good aims, or which different aims you should have. (And anything that influences your goals will most certainly influence your expected experiences.)

Replies from: Amanojack, nshepperd
comment by Amanojack · 2011-05-22T20:34:38.331Z · LW(p) · GW(p)

We've been over this: either "good aims" and "aims you should have" imply some kind of objective value judgment, which is incoherent, or they merely imply ways to achieve my final aims more efficiently, and we are back to my claim above as that is included under the umbrella of "guiding my actions."

Replies from: BobTheBob, BobTheBob, BobTheBob, BobTheBob, Peterdjones
comment by BobTheBob · 2011-05-22T21:23:56.222Z · LW(p) · GW(p)

I think Peterdjones's answer hits it on the head. I understand you've thrashed-out related issues elsewhere, but it seems to me your claim that the idea of an objective value judgment is incoherent would again require doing quite a bit of philosophy to justify.

Really I meant to be throwing the ball back to lukeprog to give us an idea of what the 'arguing about facts and anticipations' alternative is, if not just philosophy pretending not to be. I could have been more clear about this. Part of my complaint is the wanting to have it both ways. For example, the thinking in the post 'anticipations' would presumably be taken not to be philosophy, but it sounds a whole lot to me like a quick and dirty advocacy of anti-realism. If LWers are serious about this idea, they really should look into its implications if they want to avoid inadvertent contradictions in their world-views. That means doing some philosophy.

Replies from: Amanojack, Peterdjones
comment by Amanojack · 2011-05-23T15:34:49.801Z · LW(p) · GW(p)

As far as objective value, I simply don't understand what anyone means by the term. And I think lukeprog's point could be summed up as, "Trying to figure out how each discussant is defining their terms is not really 'doing philosophy'; it's just the groundwork necessary for people not to talk past each other."

As far as making beliefs pay rent, a simpler way to put it is: If you say I should believe X but I can't figure out what anticipations X entails, I will just respond, "So what?"

To unite the two themes: The ultimate definition would tell me why to care.

Replies from: ArisKatsaris, BobTheBob, Peterdjones, Peterdjones
comment by ArisKatsaris · 2011-05-25T11:54:39.580Z · LW(p) · GW(p)

The ultimate definition would tell me why to care.

In the space of all possible meta-ethics, some meta-ethics are cooperative, and other meta-ethics are not so. This means that if you can choose which metaethics to spread to society, you stand a better chance at achieving your own goals if you spread cooperative metaethics. And cooperative metaethics is what we call "morality", by and large.

It's "Do unto others...", but abstracted a bit, so that we really mean "Use the reasoning to determine what to do unto others, that you would rather they used when deciding how to do unto you."


Omega puts you in a room with a big red button. "Press this button and you get ten dollars but another person will be poisoned to slowly die. If you don't press it I punch you on the nose and you get no money. They have a similar button which they can use to kill you and get 10 dollars. You can't communicate with them. In fact they think they're the only person being given the option of a button, so this problem isn't exactly like Prisoner's dilemma. They don't even know you exist or that their own life is at stake."

"But here's the offer I'm making just to you, not them. I can imprint you both with the decision theory of your choice, Amanojack; ofcourse if you identify yourself in your decision theory, they'll be identifying themself.

"Careful though: This is a one time offer, and then I may put both of you to further different tests. So choose the decision theory that you want both of you to have, and make it abstract enough to help you survive, regardless of specific circumstances."


Given the above scenario, you'll end up wanting people to choose protecting the life of strangers more than picking 10 dollars.
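A minimal sketch of the symmetric-button setup above, assuming the only thing Omega's imprinting guarantees is that both agents end up running the same rule. The payoff numbers (ALIVE_BONUS, CASH, PUNCH) are invented; only their ordering matters, with staying alive assumed to outweigh the cash or the punch:

```python
# Sketch of the thought experiment: both agents run whichever rule is chosen,
# so the other presses exactly when I do. Payoff numbers are invented.

ALIVE_BONUS = 1_000_000   # hypothetical value of not being poisoned
CASH = 10                 # dollars for pressing your own button
PUNCH = -5                # hypothetical disutility of a punch on the nose

def my_outcome(rule: str) -> int:
    """My payoff when BOTH agents are imprinted with the same rule."""
    i_press = other_presses = (rule == "press")
    utility = CASH if i_press else PUNCH            # my button: money or a punch
    utility += 0 if other_presses else ALIVE_BONUS  # their button decides whether I live
    return utility

for rule in ("press", "refrain"):
    print(rule, my_outcome(rule))
# "refrain" wins under any numbers where survival outweighs the ten dollars,
# which is the sense in which you want the cooperative rule imprinted in both.
```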

Replies from: Amanojack, Peterdjones
comment by Amanojack · 2011-05-25T18:39:03.222Z · LW(p) · GW(p)

I would indeed prefer it if other people had certain moral sentiments. I don't think I ever suggested otherwise.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-05-25T19:52:01.423Z · LW(p) · GW(p)

Not quite my point. I'm not talking about what your preferences would be. That would be subjective, personal. I'm talking about what everyone's meta-ethical preferences would be, if self-consistent, and abstracted enough.

My argument is essentially that objective morality can be considered the position in meta-ethical-space which if occupied by all agents would lead to the maximization of utility.

That makes it objectively (because it refers to all the agents, not some of them, or one of them) different from other points in meta-ethical-space, and so it can be considered to lead to an objectively better morality.
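A toy rendering of that claim, with made-up candidate positions and per-agent payoffs, just to show the shape of the argument: the "objective" point is whichever position maximizes utility when occupied by all agents.

```python
# Toy formalization: treat the candidate metaethic that maximizes total utility
# when adopted by *all* agents as the distinguished ("objective") point.
# The candidate positions and payoff numbers below are invented.

candidate_positions = {
    "cooperative": 9,           # hypothetical per-agent utility if everyone adopts it
    "selfish": 3,
    "altruistic-to-a-fault": 4,
}
num_agents = 100

def total_utility(position: str) -> int:
    # Everyone occupies the same point in meta-ethical space.
    return num_agents * candidate_positions[position]

attractor = max(candidate_positions, key=total_utility)
print(attractor)  # "cooperative" under these made-up numbers
```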

Replies from: Amanojack
comment by Amanojack · 2011-05-25T20:01:05.213Z · LW(p) · GW(p)

Then why not just call it "universal morality"?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-05-25T21:37:35.718Z · LW(p) · GW(p)

It's called that too. Are you just objecting as to what we are calling it?

Replies from: Amanojack
comment by Amanojack · 2011-05-25T22:17:13.299Z · LW(p) · GW(p)

Yeah, because calling it that makes it pretty hard to understand. If you just mean Collective Greatest Happiness Utilitarianism, then that would be a good name. Objective morality can mean way too many different things. This way at least you're saying in what sense it's supposed to be objective.

As for this collectivism, though, I don't go for it. There is no way to know another's utility function, no way to compare utility functions among people, etc. other than subjectively. And who's going to be the person or group that decides? SIAI? I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas. But that's a debate for another day.

Replies from: ArisKatsaris, Peterdjones
comment by ArisKatsaris · 2011-05-26T00:25:29.165Z · LW(p) · GW(p)

I'm getting a bad vibe here, and no longer feel we're having the same conversation.

"Person or group that decides"? Who said anything about anyone deciding anything? And my point was that perhaps this is the meta-ethical position that every rational agent individually converges to. So nobody "decides", or everyone does. And if they don't reach the same decision, then there's no single objective morality -- but even if so, perhaps there's a limited set of coherent metaethical positions, like two or three of them.

I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas.

I think my post was inspired more by TDT solutions to Prisoner's dilemma and Newcomb's box, a decision theory that takes into account the copies/simulations of its own self, or other problems that involve humans getting copied and needing to make a decision in blind coordination with their copies.

I imagined systems that are not wholly copied, but rather share just the module that determines the meta-ethical constraints, and tried to figure out in which directions such systems would try to modify themselves, in the knowledge that other such systems would similarly modify themselves.

Replies from: Amanojack
comment by Amanojack · 2011-05-26T00:37:48.911Z · LW(p) · GW(p)

You're right, I think I'm confused about what you were talking about, or I inferred too much. I'm not really following at this point either.

One thing, though, is that you're using meta-ethics to mean ethics. Meta-ethics is basically the study of what people mean by moral language, like whether ought is interpreted as a command, as God's will, as a way to get along with others, etc. That'll tend to cause some confusion. A good heuristic is, "Ethics is about what people ought to do, whereas meta-ethics is about what ought means (or what people intend by it)."

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-05-27T13:12:12.652Z · LW(p) · GW(p)

One thing, though, is that you're using meta-ethics to mean ethics.

I'm not.

An ethic may say:

  • I should support same-sex marriage. (SSM-YES)
    or perhaps:
  • I should oppose same-sex marriage (SSM-NO)

The reason for this position is the meta-ethic:
e.g.

  • Because I should act to increase average utility. (UTIL-AVERAGE)
  • Because I should act to increase total utility. (UTIL-TOTAL)
  • Because I should act to increase total amount of freedom (FREEDOM-GOOD)
  • Because I should act to increase average societal happiness. (SOCIETAL-HAPPYGOOD-AVERAGE)
  • Because I should obey the will of our voters (DEMOCRACY-GOOD)
  • Because I should do what God commands. (OBEY-GOD).

But some metaethical positions are invalid because of false assumptions (e.g. God's existence). Other positions may not be abstract enough that they could possibly become universal or apply to all situations. Some combinations of ethics and metaethics may be the result of other factual or reasoning mistakes (e.g. someone thinks SSM will harm society, but it ends up helping it, even by the person's own measuring).

So, NO, I don't speak necessarily about Collective Greatest Happiness Utilitarianism. I'm NOT talking about a specific metaethic, not even necessarily a consequentialistic metaethic (let alone a "Greatest happiness utilitarianism") I'm speaking about the hypothetical point in metaethical space that everyone would hypothetically prefer everyone to have - an Attractor of metaethical positions.
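To make the ethic/metaethic distinction above concrete, here is a small sketch in which a metaethic is a rule for scoring outcomes and the first-order verdict (SSM-YES or SSM-NO) only falls out once factual beliefs are plugged in. The labels come from the comment; the belief values and thresholds are invented:

```python
# A metaethic is a rule; the first-order verdict depends on factual beliefs.
# Numbers are invented for illustration.

beliefs = {"average_utility_change": +0.3, "total_freedom_change": +1.0}

metaethics = {
    "UTIL-AVERAGE": lambda b: b["average_utility_change"] > 0,
    "FREEDOM-GOOD": lambda b: b["total_freedom_change"] > 0,
}

for name, rule in metaethics.items():
    verdict = "SSM-YES" if rule(beliefs) else "SSM-NO"
    print(name, "->", verdict)
# A factual mistake (wrong sign on the estimates) flips the verdict without
# changing the metaethic -- the kind of error the comment describes.
```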

comment by Peterdjones · 2011-05-25T23:25:29.345Z · LW(p) · GW(p)

As for this collectivism, though, I don't go for it. There is no way to know another's utility function, no way to compare utility functions among people, etc. other than subjectively.

That's very contestable. It has frequently been argued here that preferences can be inferred from behaviour; it's also been argued that introspection (if that is what you mean by "subjectively") is not a reliable guide to motivation.

Replies from: Amanojack
comment by Amanojack · 2011-05-26T00:42:29.500Z · LW(p) · GW(p)

This is the whole demonstrated preference thing. I don't buy it myself, but that's a debate for another time. What I mean by subjectively is that I will value one person's life more than another person's life, or I could think that I want that $1,000,000 more than a rich person wants it, but that's just all in my head. To compare utility functions and work from demonstrated preference usually - not always - is a precursor to some kind of authoritarian scheme. I can't say there is anything like that coming, but it does set off some alarm bells. Anyway, this is not something I can substantiate right now.

comment by Peterdjones · 2011-05-25T13:15:55.637Z · LW(p) · GW(p)

Attempts to reduce real, altruistic ethics back down to selfish/instrumental ethics tend not to work that well, because the gains from co-operation are remote, and there are many realistic instances where selfish action produces immediate rewards (cf. the Prudent Predator objection to Rand's egoistic ethics).

OTOH, since many people are selfish, they are made to care by having legal and social sanctions against excessively selfish behaviour.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-05-25T14:14:44.597Z · LW(p) · GW(p)

Attempts to reduce real, altruistic ethics back down to selfish/instrumental ethics tend not to work that well,

I wasn't talking about altruistic ethics, which can lead someone to sacrifice their lives to prevent someone else getting a bruise; and thus would be almost as disastrous as selfishness if widespread. I was talking about cooperative ethics - which overlaps with but doesn't equal altruism, same as it overlaps but doesn't equal selfishness.

The difference between morality and immorality is that morality can at its most abstract possible level be cooperative, and immorality can't.

This by itself isn't a reason that can force someone to care -- you can't make a rock care about anything, but that's not a problem with your argument. But it's something that leads to different expectations about the world, namely what Amanojack was asking for.

In a world populated by beings whose beliefs approach objective morality, I expect more cooperation and mutual well-being, all other things being equal. In a world whose beliefs don't approach it, I expect more war and other devastation.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-25T14:35:33.371Z · LW(p) · GW(p)

I wasn't talking about altruistic ethics, which can lead someone to sacrifice their lives to prevent someone else getting a bruise;

Although it usually doesn't.

and thus would be almost as disastrous as selfishness if widespread. I was talking about cooperative ethics - which overlaps with but doesn't equal altruism, same as it overlaps but doesn't equal selfishness.

I think that your version of altruism is a straw man, and that what most people mean by altruism isn't very different from co-operation.

The difference between morality and immorality is that morality can at its most abstract possible level be cooperative, and immorality can't.

Or, as I call it, universalisability.

But it's something that leads to different expectations about the world, namely what Amanojack was asking for.

That argument doesn't have to be made at all. Morality can stand as a refutation of the claim that anticipation of experience is of ultimate importance. And it can be made differently: if you rejig your values, you can expect to anticipate different experiences -- it can be a self-fulfilling prophecy and not merely passive anticipation.

In a world populated by beings whose beliefs approach objective morality, I expect more cooperation and mutual well-being, all other things being equal. In a world whose beliefs don't approach it, I expect more war and other devastation.

There is an argument from self interest, but it is tertiary to the two arguments I mentioned above.

comment by BobTheBob · 2011-05-24T01:29:07.934Z · LW(p) · GW(p)

Wrote a reply off-line and have been lapped several times (as usual). What Peterdjones says in his responses makes a lot of sense to me. I took a slightly different tack, which is maybe moot given your admission to being a solipsist:

I should disclose that I don't find ultimately any kind of objectivism coherent, including "objective reality".

-though the apparent tension in being a solipsist who argues gets to the root of the issue.

For what it may be worth:

I'm assuming you subscribe to what you consider to be a rigorously scientific world-view, and you consider such a world-view makes no place for objective values - you can't fit them in, hence no way to understand them.

From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (ie from a scientific pt of view) is not 'trying' to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, punkt.

Some people think this is all there is, and that there is nothing useful to say about our conception of ourselves as beings with values (eg, Paul Churchland). I disagree. A person cannot make sense of her/himself with just this scientific understanding, important though it is, because s/he has to make decisions -has to figure out whether to vote left or right, be vegetarian or carnivore, to spend time writing blog responses or mow the lawn, etc.. Values can't be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.

Thought of from this point of view, all values are in some sense objective -ie, independent of you. There has to be a gap between value and actual behaviour, for the value to be made sense of as such (if everything you do is right, there is no right).

Presently you are disagreeing with me about values. To me this says you think there's a right and wrong of the matter, which applies to us both. This is an example of an objective value. It would take some work to spell out a parallel moral example, if this is what you have in mind, but given the right context I submit you would argue with someone about some moral principle (hope so, anyway).

Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren't, but I submit the idea is not incoherent. And showing otherwise would take doing some philosophy.

Replies from: Amanojack, Peterdjones
comment by Amanojack · 2011-05-25T08:00:21.226Z · LW(p) · GW(p)

I took a slightly different tack, which is maybe moot given your admission to being a solipsist

Solipsism is an ontological stance: in short, "there is nothing out there but my own mind." I am saying something slightly different: "To speak of there being something/nothing out there is meaningless to me unless I can see why to care." Then again, I'd say this is tautological/obvious in that "meaning" just is "why it matters to me."

My "position" (really a meta-position about philosophical positions) is just that language obscures what is going on. It may take a while to make this clear, but if we continue I'm sure it will be.

I'm assuming you subscribe to what you consider to be a rigorously scientific world-view

I'm not a naturalist. I'm not skeptical of "objective" because of such reasons; I am skeptical of it merely because I don't know what the word refers to (unless it means something like "in accordance with consensus"). In the end, I engage in intellectual discourse in order to win, be happier, get what I want, get pleasure, maximize my utility, or whatever you'll call it (I mean them all synonymously).

If after engaging in such discourse I am not able to do that, I will eventually want to ask, "So what? What difference does it make to my anticipations? How does this help me get what I want and/or avoid what I don't want?"

Replies from: Peterdjones
comment by Peterdjones · 2011-05-25T14:01:20.575Z · LW(p) · GW(p)

Solipsism is an ontological stance: in short, "there is nothing out there but my own mind." I am saying something slightly different: "To speak of there being something/nothing out there is meaningless to me unless I can see why to care." Then again, I'd say this is tautological/obvious in that "meaning" just is "why it matters to me."

Do you cross the road with your eyes shut? If not, you are assuming, like everyone else, that there are things out there which are terminally disutilitous.

My "position" (really a meta-position about philosophical positions) is just that language obscures what is going on.

Whose language? What language? If you think all language is a problem, what do you intend to replace it with?

I'm not a naturalist. I'm not skeptical of "objective" because of such reasons; I am skeptical of it merely because I don't know what the word refers to

It refers to the stuff that doesn't go away when you stop believing in it.

Replies from: Amanojack
comment by Amanojack · 2011-05-25T19:28:31.047Z · LW(p) · GW(p)

"To speak of there being something/nothing out there is meaningless to me unless I can see why to care."

Do you cross the road with your eyes shut? If not, you are assuming, like everyone else, that there are things out there which are terminally disutilitous.

Note the bold.

Whose language ? What language?

English, and all the rest that I know of.

If you think all language is a problem, what do you intend to replace it with?

Something better would be nice, but what of it? I am simply saying that language obscures what is going on. You may or may not find that insight useful.

It refers to the stuff that doesn't go away when you stop believing in it.

If so, I suggest "permanent" as a clearer word choice.

comment by Peterdjones · 2011-05-24T14:33:03.863Z · LW(p) · GW(p)

From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (ie from a scientific pt of view) is not 'trying' to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, punkt.

I think that is rather drastic. Science may not accept beliefs and values as fundamental, but it can accept them as higher-level descriptions, cf. Dennett's Intentional Stance.

[Values] can't be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.

Again, I find it incredible that natural facts have no relation to morality. Morality would be very different if women laid eggs or men had balls of steel.

Thought of from this point of view, all values are in some sense objective -ie, independent of you. There has to be a gap between value and actual behaviour, for the value to be made sense of as such (if everything you do is right, there is no right).

To say that moral values are both objective and disconnected from physical fact implies that they exist in their own domain, which is where some people, with some justice, tend to balk.

Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren't, but I submit the idea is not incoherent.

For some value of "incoherent". Personally, I find it useful to strike out the word and replace it with something more precise, such as "semantically meaningless", "contradictory", "self-undermining", etc.

Replies from: nshepperd, BobTheBob
comment by nshepperd · 2011-05-24T15:30:49.758Z · LW(p) · GW(p)

Again, I find it incredible that natural facts have no relation to morality. Morality would be very different if women laid eggs or men had balls of steel.

I take the position that while we may well have evolved with different values, they wouldn't be morality. "Morality" is subjunctively objective. Nothing to do with natural facts, except insofar as they give us clues about what values we in fact did evolve with.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-24T16:25:02.402Z · LW(p) · GW(p)

I take the position that while we may well have evolved with different values, they wouldn't be morality.

How do you know that the values we have evolved with are moral? (The claim that natural facts are relevant to moral reasoning is different to the claim that naturally-evolved behavioural instincts are ipso facto moral.)

Replies from: nshepperd
comment by nshepperd · 2011-05-25T03:51:00.675Z · LW(p) · GW(p)

I'm not sure what you want to know. I feel motivated to be moral, and the things that motivate thinking machines are what I call "values". Hence, our values are moral.

But of course naturally-evolved values are not moral simply by virtue of being values. Morality isn't about values, it's about life and death and happiness and sadness and many other things beside.

comment by BobTheBob · 2011-05-25T01:26:48.930Z · LW(p) · GW(p)

I think that is rather drastic. Science may not accept beliefs and values as fundamental, but it can accept them as higher-level descriptions, cf. Dennett's Intentional Stance.

I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can't derive an ought from an is, and that this is what's at stake here. Since you can't make sense of a person as rational if it's not the case there's anything she ought or ought not to do (and I admit you may think this needs defending), natural science lacks the means to ascribe rationality. Now, if we're talking about the social sciences, that's another matter. There is a discontinuity between these and the purely natural sciences. I read Dennett many years ago, and thought something like this divide is what his different stances are about, but I'd be open to hear a different view.

Again, I find it incredible that natural facts have no relation to morality.

I didn't say this - just that from a purely scientific point of view, morality is invisible. From an engaged, subjective point of view, where morality is visible, natural facts are relevant.

To say that moral values are both objective and disconnected from physical fact implies that they exist in their own domain, which is where some people,with some justice, tend to balk.

Here's another stab at it: natural science can in principle tell us everything there is to know about a person's inner workings and dispositions, right down to what sounds she is likely to utter in what circumstances. It might tell someone she will make the sounds, eg, 'I ought to go to class' in given circs.. But no amount of knowledge of this kind will give her a reason to go to class (would you agree?). To get reasons -not to mention linguistic meaning and any intentional states- you need a subjective -ie, non-scientific- point of view. The two views are incommensurable, but neither is dispensable -people need reasons.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-25T12:43:29.213Z · LW(p) · GW(p)

I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can't derive an ought from an is, and that this is what's at stake here.

Since you can't make sense of a person as rational if it's not the case there's anything she ought or ought not to do (and I admit you may think this needs defending), natural science lacks the means to ascribe rationality.

But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitrary set of goals) as efficiently as possible. Rational agents should win, in short. That seems to be an analytical truth arrived at by unpacking "rational". Generally speaking, where you have rules, you have coulds and shoulds and couldn'ts and shouldn'ts. I have been trying to press that unpacking morality leads to the similar analytical truth: "a moral agent ought to adopt universalisable goals."
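A minimal sketch of that rational "ought", with an invented set of goals: whatever the (arbitrary) utility function is, the ought just says pick the available action it scores highest.

```python
# Sketch of "a rational agent ought to maximise its utility function":
# the goals are arbitrary; the rational ought is relative to them.
# Actions and scores below are invented for illustration.

utility = {"study": 5.0, "procrastinate": 1.0, "sleep": 3.0}

def rational_choice(utility_fn):
    # Pick the action with the highest score under the given goals.
    return max(utility_fn, key=utility_fn.get)

print(rational_choice(utility))  # "study" -- the ought falls out of the goals plus the rule
```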

I didn't say this - just that from a purely scientific point of view, morality is invisible.

"Oughts" in general appear wherever you have rules, which are often abstractly defined so that they apply to physal systems as well as anything else.

Here's another stab at it: natural science can in principle tell us everything there is to know about a person's inner workings and dispositions, right down to what sounds she is likely to utter in what circumstances. It might tell someone she will make the sounds, eg, 'I ought to go to class' in given circs.. But no amount of knowledge of this kind will give her a reason to go to class (would you agree?).

I think LWers would say there are facts about her utility function from which conclusions can be drawn about how she should maximise it (and how she would if she were rational).

To get reasons -not to mention linguistic meaning and any intentional states- you need a subjective -ie, non-scientific- point of view.

I don't see why. If a person or other system has goals and is acting to achieve those goals in an effective way, then their goals can be inferred from their actions.
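A toy version of that inference, under the (strong) assumption that the agent chooses effectively: read a preference ordering off a handful of invented observed choices.

```python
# Toy sketch of inferring goals from actions: if an agent reliably takes one
# option while another was available, infer it ranks the chosen option higher.
# The observed choices below are invented for illustration.

observed_choices = [
    ("apple", ["apple", "pear"]),
    ("apple", ["apple", "pear", "banana"]),
    ("banana", ["banana", "pear"]),
]

def inferred_preferences(choices):
    prefs = set()
    for chosen, available in choices:
        for other in available:
            if other != chosen:
                prefs.add((chosen, other))  # read: chosen is preferred to other
    return prefs

print(sorted(inferred_preferences(observed_choices)))
# [('apple', 'banana'), ('apple', 'pear'), ('banana', 'pear')]
```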

Replies from: BobTheBob
comment by BobTheBob · 2011-05-27T03:32:10.953Z · LW(p) · GW(p)

But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitrary set of goals) as efficiently as possible. Rational agents should win, in short. That seems to be an analytical truth arrived at by unpacking "rational". Generally speaking, where you have rules, you have coulds and shoulds and couldn'ts and shouldn'ts. I have been trying to press that unpacking morality leads to the similar analytical truth: "a moral agent ought to adopt universalisable goals."

I expressed myself badly. I agree entirely with this.

"Oughts" in general appear wherever you have rules, which are often abstractly defined so that they apply to physal systems as well as anything else.

Again, I agree with this. The position I want to defend is just that if you confine yourself strictly to natural laws, as you should in doing natural science, rules and oughts will not get a grip.

I think LWers would say there are facts about her utility function from which conclusions can be drawn about how she should maximise it (and how she would if she were rational).

And I want to persuade LWers

1) that facts about her utility functions aren't naturalistic facts, as facts about her cholesterol level or about neural activity in different parts of her cortex, are,

and

2) that this is ok - these are still respectable facts, notwithstanding.

I don't see why. If a person or other system has goals and is acting to achieve those goals in an effective way, then their goals can be inferred from their actions.

But having a goal is not a naturalistic property. Some people might say, eg, that an evolved, living system's goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-27T12:29:27.855Z · LW(p) · GW(p)

And I want to persuade LWers

1) that facts about her utility functions aren't naturalistic facts, as facts about her cholesterol level or about neural activity in different parts of her cortex, are,

And they are likely to riposte that facts about her UF are naturalistic just because they can be inferred from her behaviour. You seem to be in need of a narrow, stipulative definition of naturalistic.

Some people might say, eg, that an evolved, living system's goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.

You introduced the word "basic" there. It might be the case that goals disappear on a very fine-grained atomistic view of things (along with rules and structures and various other things). But that would mean that goals aren't basic physical facts. Naturalism tends to be defined more epistemically than physicalism, so the inferrability of UFs (or goals or intentions) from coarse-grained physical behaviour is a good basis for supposing them to be natural by that usage.

Replies from: BobTheBob
comment by BobTheBob · 2011-05-27T17:04:13.582Z · LW(p) · GW(p)

And they are likely to riposte that facts about her UF are naturalistic just because they can be inferred from her behaviour.

But this is false, surely. I take it that a fact about X's UF might be something such as 'X prefers apples to pears'. First, notice that X may also prefer his/her philosophy TA to his/her chemistry TA. X has different designs on the TA than on the apple. So, properly stated, preferences are orderings of desires, the objects of which are states of affairs rather than simple things (X desires that X eat an apple more than that X eat a pear). Second, to impute desires such as these requires also imputing beliefs (you observe the apple-gathering behaviour -naturalistically unproblematic- but you also need to impute to X the belief that the things gathered are apples. X might be picking the apples thinking they are pears). There's any number of ways to attribute beliefs and desires in a manner consistent with the behaviour. No collection of merely naturalistic facts will constrain these. There have been lots of theories advanced which try, but the consensus, I think, is that there is no easy naturalistic solution.
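A small sketch of that underdetermination point, with everything invented: two different belief-plus-desire attributions that predict exactly the same gathering behaviour, so the behaviour alone cannot choose between them.

```python
# Sketch of underdetermination: the same observed behaviour is predicted by
# more than one belief+desire pair, so behaviour alone doesn't single out the
# agent's utility function. The attributions below are invented.

def predicts_gathering(belief_about_fruit: str, desired_fruit: str) -> bool:
    # The agent gathers the fruit iff it takes the fruit to be what it wants.
    return belief_about_fruit == desired_fruit

attributions = [
    {"belief": "apples", "desire": "apples"},  # wants apples, thinks they're apples
    {"belief": "pears",  "desire": "pears"},   # wants pears, mistakes the apples for pears
]

consistent = [a for a in attributions if predicts_gathering(a["belief"], a["desire"])]
print(len(consistent))  # 2 -- both attributions fit the same observed behaviour
```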

Replies from: Peterdjones
comment by Peterdjones · 2011-05-27T23:48:21.138Z · LW(p) · GW(p)

Oh, that's the philosopher's definition of naturalistic. OTOH, you could just adopt the scientist's version and scan their brain.

Replies from: BobTheBob
comment by BobTheBob · 2011-05-28T03:15:09.359Z · LW(p) · GW(p)

Well, alright, please tell me: what is a Utility Function, that it can be inferred from a brain scan? How's this supposed to work, in broad terms?

comment by Peterdjones · 2011-05-23T15:46:16.810Z · LW(p) · GW(p)

What they generally mean is "not subjective". You might object that non-subjective value is contradictory, but that is not the same as objecting that it is incomprehensible, since one has to understand the meanings of individual terms to see a contradiction.

As for anticipations: believing morality is objective entails that some of your beliefs may be wrong by objective standards, and believing it is subjective does not entail that. So the belief in moral objectivity could lead to a revision of your aims and goals, which will in turn lead to different experiences.

Replies from: Amanojack
comment by Amanojack · 2011-05-23T16:26:32.808Z · LW(p) · GW(p)

I'm not saying non-subjective value is contradictory, just that I don't know what it could mean. To me "value" is a verb, and the noun form is just a nominalization of the verb, like the noun "taste" is a nominalization of the verb "taste." Ayn Rand tried to say there was such a thing as objectively good taste, even of foods, music, etc. I didn't understand what she meant either.

As for anticipations: believing morality is objective entails that some of your beliefs may be wrong by objective standards, and believing it is subjective does not entail that. So the belief in moral objectivity could lead to a revision of your aims and goals, which will in turn lead to different experiences.

But before I would even want to revise my aims and goals, I'd have to anticipate something different than I do now. What does "some of your beliefs may be wrong by objective standards" make me anticipate that would motivate me to change my goals? (This is the same as the question in the other comment: What penalty do I suffer by having the "wrong" moral sentiments?)

Replies from: Peterdjones
comment by Peterdjones · 2011-05-23T16:42:31.001Z · LW(p) · GW(p)

value" is a verb, and the noun form is just a nominalization of the verb,

I don't see the force to that argument. "Believe" is a verb and "belief" is a nominalisation. But beliefs can be objectively right or wrong -- if they belong to the appropriate subject area.

Ayn Rand tried to say there was such a thing as objectively good taste, even of foods, music,

It is possible for aesthetics (and various other things) to be un-objectifiable whilst morality (and various other things) is objectifiable.

But before I would even want to revise my aims and goals, I'd have to anticipate something different than I do now.

Why?

What does "some of your beliefs may be wrong by objective standards" make me anticipate that would motivate me to change my goals?

You should be motivated by a desire to get things right in general. The anticipation thing is just a part of that. It's not an ultimate. But morality is an ultimate because there is no more important value than a moral value.

(This is the same as the question in the other comment: What penalty do I suffer by having the "wrong" moral sentiments?)

If there is no personal gain from morality, that doesn't mean you shouldn't be moral. You should be moral by the definition of "moral" and "should". It's an analytical truth. It is for selfishness to justify itself in the face of morality, not vice versa.

Replies from: Amanojack
comment by Amanojack · 2011-05-23T18:06:04.280Z · LW(p) · GW(p)

First of all, I should disclose that I don't ultimately find any kind of objectivism coherent, including "objective reality." It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably. In the end, nothing else matters to me (nor, I expect, to anyone else, if they understand what I'm getting at here).

You should be motivated by a desire to get things right in general. The anticipation thing is just a part of that. It's not an ultimate

So you disagree with EY about making beliefs pay rent? Like, maybe some beliefs don't pay rent but are still important? I just don't see how that makes sense.

You should be moral by the definition of "moral" and "should".

This seems circular.

If there is no personal gain from morality, that doesn't mean you shouldn't be moral.

What if I say, "So what?"

Replies from: Peterdjones
comment by Peterdjones · 2011-05-24T15:05:43.276Z · LW(p) · GW(p)

First of all, I should disclose that I don't ultimately find any kind of objectivism coherent, including "objective reality." It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably

How do you know that?

So you disagree with EY about making beliefs pay rent?

If disagreeing means it is good to entertain useless beliefs, then no. If disagreeing means that instrumental utility is not the ultimate value, then yes.

You should be moral by the definition of "moral" and "should". This seems circular.

You say that like that's a bad thing. I said it was analytical, and analytical truths would be expected to sound tautologous or circular.

If there is no personal gain from morality, that doesn't mean you shouldn't be moral.

What if I say, "So what?"

So it's still true. Not caring is not refutation.

Replies from: Amanojack
comment by Amanojack · 2011-05-25T08:59:26.179Z · LW(p) · GW(p)

It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably

How do you know that?

Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.

If disagreeing means it is good to entertain useless beliefs, then no. If disagreeing means that instrumental utility is not the ultimate value, then yes.

Well, what use is your belief in "objective value"?

So it's still true. Not caring is not refutation.

Ultimately, that is to say at a deep level of analysis, I am non-cognitive to words like "true" and "refute." I would substitute "useful" and "show people why it is not useful," respectively.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-25T13:26:46.741Z · LW(p) · GW(p)

Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.

I meant the second part: "but when you really drill down there are only beliefs that predict my experience more reliably or less reliably" How do you know that?

Well, what use is your belief in "objective value"?

What objective value are your instrumental beliefs? You keep assuming useful-to-me is the ultimate value and it isn't: Morality is, by definition.

Ultimately, that is to say at a deep level of analysis, I am non-cognitive to words like "true" and "refute."

Then I have a bridge to sell you.

I would substitute "useful" and "show people why it is not useful," respectively.

And would it be true that it is non-useful? Since to assert P is to assert "P is true", truth is a rather hard thing to eliminate. One would have to adopt the silence of Diogenes.

Replies from: Amanojack
comment by Amanojack · 2011-05-25T18:50:19.218Z · LW(p) · GW(p)

Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.

I meant the second part: "but when you really drill down there are only beliefs that predict my experience more reliably or less reliably" How do you know that?

That's what I was responding to.

What objective value are your instrumental beliefs? You keep assuming useful-to-me is the ultimate value and it isn't: Morality is, by definition.

Zorg: And what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know.

And would it be true that it is non-useful? Since to assert P is to assert "P is true", truth is a rather hard thing to eliminate.

You just eliminated it: If to assert P is to assert "P is true," then to assert "P is true" is to assert P. We could go back and forth like this for hours.

But you still haven't defined objective value.

Dictionary says, "Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased."

How can a value be objective? EDIT: Especially since a value is a personal feeling. If you are defining "value" differently, how?

Replies from: Peterdjones
comment by Peterdjones · 2011-05-25T21:06:03.279Z · LW(p) · GW(p)

I meant the second part: "but when you really drill down there are only beliefs that predict my experience more reliably or less reliably" How do you know that?

That's what I was responding to.

It is not the case that the only thing beliefs can do is predict experience based on existing preferences. Beliefs can also set and modify preferences. I have given that counterargument several times.

Zorg: And what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know.

I think moral values are ultimate because I can't think of a valid argument of the form "I should do X because Y". Please give an example of a pan-galactic value that can be substituted for Y.

You just eliminated it: If to assert P is to assert "P is true," then to assert "P is true" is to assert P. We could go back and forth like this for hours.

Yeah, but it still comes back to truth. If I tell you it will increase your happiness to hit yourself on the head with a hammer, your response is going to have to amount to "no, that's not true".

Dictionary says [objective]: "Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased."

How can a value be objective?

By being (relatively) uninfluenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.

Especially since a value is a personal feeling.

You haven't remotely established that as an identity. It is true that some people some of the time arrive at values through feelings. Others arrive at them (or revise them) through facts and thinking.

you are defining "value" differently, how?

"Values can be defined as broad preferences concerning appropriate courses of action or outcomes"

Replies from: Amanojack, Amanojack
comment by Amanojack · 2011-05-25T22:31:49.317Z · LW(p) · GW(p)

I missed this:

If I tell you it will increase your happiness to hit yourself on the head with a hammer, your response is going to have to amount to "no, that's not true".

I'll just decide not to follow the advice, or I'll try it out and then after experiencing pain I will decide not to follow the advice again. I might tell you that, too, but I don't need to use the word "true" or any equivalent to do that. I can just say it didn't work.

Replies from: NancyLebovitz, Peterdjones
comment by NancyLebovitz · 2011-05-25T23:08:51.495Z · LW(p) · GW(p)

People have been known to follow really bad advice, sometimes to their detriment and suffering a lot of pain along the way.

Some people have followed excessively stringent diets to the point of malnutrition or death. (This isn't intended as a swipe at CR-- people have been known to go a lot farther than that.)

People have attempted (for years or decades) to shut down their sexual feelings because they think their God wants it.

comment by Peterdjones · 2011-05-25T23:15:04.033Z · LW(p) · GW(p)

I'll just decide not to follow the advice, or I'll try it out and then after experiencing pain I will decide not to follow the advice again. I might tell you that, too, but I don't need to use the word "true" or any equivalent to do that. I can just say it didn't work.

Any word can be eliminated in favour of a definition or paraphrase. Not coming out with an equivalent -- showing that you have dispensed with the concept -- is harder. Why didn't it work? You're going to have to paraphrase "Because it wasn't true" or refuse to answer.

Replies from: Amanojack
comment by Amanojack · 2011-05-26T00:29:29.275Z · LW(p) · GW(p)

The concept of truth is for utility, not utility for truth. To get them backwards is to merely be confused by the words themselves. It's impossible to show you've dispensed with any concept, except to show that it isn't useful for what you're doing. That is what I've done. I'm non-cognitive to God, truth, and objective value (except as recently defined). Usually they all sound like religion, though they all are or were at one time useful approximate means of expressing things in English.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-26T12:56:48.588Z · LW(p) · GW(p)

The concept of truth is for utility, not utility for truth.

Truth is useful for whatever you want to do with it. If people can collect stamps for the sake of collecting stamps, they can collect truths for the sake of collecting truths.

I'm non-cognitive to God, truth, and objective value (except as recently defined). Usually they all sound like religion

Sounding like religion would not render something incomprehensible...but it could easily provoke an "I don't like it" reaction, which is then dignified with the label "incoherent" or whatever.

comment by Amanojack · 2011-05-25T21:33:48.032Z · LW(p) · GW(p)

It is not the case that all beliefs can do is predict experience based on existing preferences. Beliefs can also set and modify preferences.

I agree, if you mean things like, "If I now believe that she is really a he, I don't want to take 'her' home anymore."

I think moral values are ultimate because I can't think of a valid argument of the form "I should do X because Y".

Neither can I. I just don't draw the same conclusion. There's a difference between disagreeing with something and not knowing what it means, and I seriously do not know what you mean. I'm not sure why you would think it is veiled disagreement, seeing as lukeprog's whole post was making this very same point about incoherence. (But incoherence also only has meaning in the sense of "incoherent to me" or someone else, so it's not some kind of damning word. It simply means the message is not getting through to me. That could be your fault, my fault, or English's fault, and I don't really care which it is, but it would be preferable for something to actually make it across the inferential gap.)

EDIT: Oops, posted too soon.

"Values can be defined as broad preferences concerning appropriate courses of action or outcomes"

So basically you are saying that preferences can change because of facts/beliefs, right? And I agree with that. To give a more mundane example, if I learn Safeway doesn't carry egg nog and I want egg nog, I may no longer want to go to Safeway. If I learn that egg nog is bad for my health, I may no longer want egg nog. If I believe health doesn't matter because the Singularity is near, I may want egg nog again. If I believe that egg nog is actually made of human brains, I may not want it anymore.

At bottom, I act to get enjoyment and/or avoid pain, that is, to win. What actions I believe will bring me enjoyment will indeed vary depending on my beliefs. But it is always ultimately that winning/happiness/enjoyment/fun/deliciousness/pleasure that I am after, and no change in belief can change that. I could take short-term pain for long-term gain, but that would be because I feel better doing that than not.

But it seems to me that just because what I want can be influenced by what could be called objective or factual beliefs doesn't make my want for deliciousness "uninfluenced by personal feelings."

In summary, value/preferences can either be defined to include (1) only personal feelings (though they may be universal or semi-universal), or to also include (2) beliefs about what would or wouldn't lead to such personal feelings. I can see how you mean that 2 could be objective, and then would want to call them thus "objective values." But not for 1, because personal feelings are, well, personal.

If so, then it seems I am back to my initial response to lukeprog and the ensuing brief discussion. In short, if it is only the belief in objective facts that is wrong, then I wouldn't want to call that morality, but more just self-help, or just what the whole rest of LW is. It is not that someone could be wrong about their preferences/values in sense 1, but in sense 2.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-26T00:57:52.488Z · LW(p) · GW(p)

There's a difference between disagreeing with something and not knowing what it means, and I do seriously not know what you mean. I'm not sure why you would think it is veiled disagreement, seeing as lukeprog's whole post was making this very same point about incoherence. (But incoherence also only has meaning in the sense of "incoherent to me" or someone else,

"incoherence" means several things. Some of them, such a self-contradiction are as objective as anything. You seem to find morality meaningless in some personal sense. Looking at dictionaries doesn't seem to work for you. Dictionaries tend to define the moral as the good.It is hard to believe that anyone can grow up not hearing the word "good" used a lot, unless they were raised by wolves. So that's why I see complaints of incoherence as being disguised disagreement.

At bottom, I act to get enjoyment and/or avoid pain, that is, to win.

If you say so. That doesn't make morality false, meaningless or subjective. It makes you an amoral hedonist.

But it seems to me that just because what I want can be influenced by what could be called objective or factual beliefs doesn't make my want for deliciousness "uninfluenced by personal feelings."

Perhaps not completely, but that still leaves some things as relatively more objective than others.

In summary, value/preferences can either be defined to include (1) only personal feelings (though they may be universal or semi-universal), or to also include (2) beliefs about what would or wouldn't lead to such personal feelings. I can see how you mean that 2 could be objective, and then would want to call them thus "objective values." But not for 1, because personal feelings are, well, personal.

Then your categories aren't exhaustive, because preferences can also be defined to include universalisable values alongside personal whims. You may be making the classic error of taking "subjective" to mean "believed by a subject".

Replies from: Amanojack
comment by Amanojack · 2011-05-26T01:15:49.938Z · LW(p) · GW(p)

Dictionaries tend to define the moral as the good. It is hard to believe that anyone can grow up not hearing the word "good" used a lot, unless they were raised by wolves

The problem isn't that I don't know what it means. The problem is that it means many different things and I don't know which of those you mean by it.

an amoral hedonist

I have moral sentiments (empathy, sense of justice, indignation, etc.), so I'm not amoral. And I am not particularly high time-preference, so I'm not a hedonist.

preferences can also be defined to include universalisable values alongside personal whims

If you mean preferences that everyone else shares, sure, but there's no stipulation in my definitions that other people can't share the preferences. In fact, I said, "(though they may be universal or semi-universal)."

You may be making the classic error of taking "subjective" to mean "believed by a subject"

It'd be a "classic error" to assume you meant one definition of subjective rather than another, when you haven't supplied one yourself? This is about the eight time in this discussion that I've thought that I can't imagine what you think language even is.

I doubt we have any disagreement, to be honest. I think we only view language very radically differently. (You could say we have a disagreement about language.)

Replies from: Peterdjones
comment by Peterdjones · 2011-05-26T13:43:37.411Z · LW(p) · GW(p)

Dictionaries tend to define the moral as the good. It is hard to believe that anyone can grow up not hearing the word "good" used a lot, unless they were raised by wolves

The problem isn't that I don't know what it means.

What "moral" means or what "good" means/?

The problem is that it means many different things and I don't know which of those you mean by it.

No, that isn't the problem. It has one basic meaning, but there are a lot of different theories about it. Elsewhere you say that utilitarianism renders objective morality meaningful. A theory of X cannot render X meaningful, but it can render X plausible.

I have moral sentiments (empathy, sense of justice, indignation, etc.), so I'm not amoral. And I am not particularly high time-preference, so I'm not a hedonist.

But you theorise that you only act on them (and that nobody ever acts but) to increase your pleasure.

If you mean preferences that everyone else shares, sure, but there's no stipulation in my definitions that other people can't share the preferences.

I don't see the point in stipulating that preferences can't be shared. People who believe they can be just have to find another word. Nothing is proven.

You may be making the classic error of taking "subjective" to mean "believed by a subject"

It'd be a "classic error" to assume you meant one definition of subjective rather than another, when you haven't supplied one yourself?

I've quoted the dictionary definition, and that's what I mean.

"existing in the mind; belonging to the thinking subject rather than to the object of thought ( opposed to objective). 2. pertaining to or characteristic of an individual; personal; individual: a subjective evaluation. 3. placing excessive emphasis on one's own moods, attitudes, opinions, etc.; unduly egocentric"

This is about the eighth time in this discussion that I've thought that I can't imagine what you think language even is.

I think language is public, I think (genuine) disagreements about meaning can be resolved with dictionaries, and I think you shouldn't assume someone is using idiosyncratic definitions unless they give you good reason.

comment by Peterdjones · 2011-05-24T13:35:44.252Z · LW(p) · GW(p)

As far as objective value, I simply don't understand what anyone means by the term.

Objective truth is what you should believe even if you don't. Objective values are the values you should have even if you have different values.

And I think lukeprog's point could be summed up as, "Trying to figure out how each discussant is defining their terms is not really 'doing philosophy'; it's just the groundwork necessary for people not to talk past each other."

Where the groundwork is about 90% of the job...

As far as making beliefs pay rent, a simpler way to put it is: If you say I should believe X but I can't figure out what anticipations X entails, I will just respond, "So what?"

That has been answered several times. You are assuming that instrumental value is ultimate value, and it isn't.

To unite the two themes: The ultimate definition would tell me why to care.

Imagine you are arguing with someone who doesn't "get" rationality. If they believe in instrumental values, you can persuade them that they should care about rationality because it will enable them to achieve their aims. If they don't, you can't. Even good arguments will fail to work on some people.

You should care about morality because it is morality. Morality defines (the ultimate kind of) "should".

"What I should do" =def "what is moral".

Not everyone does get that, which is why "don't care" is "made to care" by various sanctions.

Replies from: Amanojack
comment by Amanojack · 2011-05-25T08:35:28.730Z · LW(p) · GW(p)

As far as objective value, I simply don't understand what anyone means by the term.

Objective truth is what you should believe even if you don't.

"Should" for what purpose?

Where the groundwork is about 90% of the job...

I certainly agree there. The question is whether it is more useful to assign the label "philosophy" to groundwork+theory or just the theory. A third possibility is that doing enough groundwork will make it clear to all discussants that there are no (or almost no) actual theories in what is now called "philosophy," only groundwork, meaning we would all be in agreement and there is nothing to argue except definitions.

Imagine you are arguing with someone who doesn't "get" rationality. If they believe in instrumental values, you can persuade them that they should care about rationality because it will enable them to achieve their aims. If they don't, you can't.

I may not be able to convince them, but at least I would be trying to convince them on the grounds of helping them achieve their aims. It seems you're saying that, in the present argument, you are not trying to help me achieve my aims (correct me if I'm wrong). This is what makes me curious about why you think I would care. The reasons I do participate, by the way, are that I hold out the chance that you have a reason why I would care (which maybe you are not articulating in a way that makes sense to me yet), that you or others will come to see my view that it's all semantic confusion, and because I don't want to sound dismissive or obstinate in continuing to say, "So what?"

Replies from: Peterdjones
comment by Peterdjones · 2011-05-25T13:49:39.990Z · LW(p) · GW(p)

Objective truth is what you should believe even if you don't.

"Should" for what purpose?

Believing in truth is what rational people do.

Imagine you are arguing with someone who doesn't "get" rationality. If they believe in instrumental values, you can persuade them that they should care about rationality because it will enable them to achieve their aims. If they don't, you can't.

I may not be able to convince them, but at least I would be trying to convince them on the grounds of helping them achieve their aims.

Which is good because...?

It seems you're saying that, in the present argument, you are not trying to help me achieve my aims (correct me if I'm wrong).

Correct.

This is what makes me curious about why you think I would care.

I can argue that your personal aims are not the ultimate value, and I can suppose you might care about that just because it is true. That is how arguments work: one rational agent tries to persuade another that something is true. If one of the participants doesn't care about truth at all, the process probably isn't going to work.

The reasons I do participate, by the way, are that I hold out the chance that you have a reason why I would care (which maybe you are not articulating in a way that makes sense to me yet), that you or others will come to see my view that it's all semantic confusion, and because I don't want to sound dismissive or obstinate in continuing to say, "So what?"

I think that horse has bolted. Inasmuch as you don't care about truth per se, you have advertised yourself as being irrational.

Replies from: Amanojack
comment by Amanojack · 2011-05-25T19:19:43.206Z · LW(p) · GW(p)

"Should" for what purpose?

Believing in truth is what rational people do.

Winning is what rational people do. We can go back and forth like this.

Which is good because...?

It benefits me, because I enjoy helping people. See, I can say, "So what?" in response to "You're wrong." Then you say, "You're still wrong." And I walk away feeling none the worse. Usually when someone claims I am wrong I take it seriously, but only because I know how it could ever, possibly, potentially ever affect me negatively. In this case you are saying it is different, and I can safely walk away with no terror ever to befall me for "being wrong."

I can argue that your personal aims are not the ultimate value, and I can suppose you might care about that just because it is true. That is how arguments work: one rational agent tries to persuade another that something is true. If one of the participants doesn't care about truth at all, the process probably isn't going to work.

Sure, people usually argue whether something is "true or false" because such status makes a difference (at least potentially) to their pain or pleasure, happiness, utility, etc. As this is almost always the case, it is customarily unusual for someone to say they don't care about something being true or false. But in a situation where, ex hypothesi, the thing being discussed - very unusually - is claimed to not have any effect on such things, "true" and "false" become pointless labels. I only ever use such labels because they can help me enjoy life more. When they can't, I will happily discard them.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-25T20:45:54.051Z · LW(p) · GW(p)

Sure, people usually argue whether something is "true or false" because such status makes a difference (at least potentially) to their pain or pleasure, happiness, utility, etc.

So you say. I can think of two arguments against that: people acquire true beliefs that aren't immediately useful, and untrue beliefs can be pleasing.

Replies from: Amanojack, Will_Sawin
comment by Amanojack · 2011-05-25T21:19:16.921Z · LW(p) · GW(p)

I never said they had to be "immediately useful" (hardly anything ever is). Untrue beliefs might be pleasing, but when people are arguing truth and falsehood it is not in order to prove that the beliefs they hold are untrue so that they can enjoy believing them, so it's not an objection either.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-25T21:40:44.035Z · LW(p) · GW(p)

You still don't have a good argument to the effect that no one cares about truth per se.

Replies from: Amanojack
comment by Amanojack · 2011-05-25T22:20:59.532Z · LW(p) · GW(p)

A lot of people care about truth, even when (I suspect) they diminish their enjoyment needlessly by doing so, so no argument there. In the parent I'm just continuing to try to explain why my stance might sound weird. My point from farther above, though, is just that I don't/wouldn't care about "truth" in those rare and odd cases where it is already part of the premises that truth or falsehood will not affect me in any way.

comment by Will_Sawin · 2011-05-25T20:55:04.152Z · LW(p) · GW(p)

I think 'usually" is enough qualification, especially considering that he says 'makes a difference' and not 'completely determines"

comment by Peterdjones · 2011-05-24T14:02:31.193Z · LW(p) · GW(p)

For example, the thinking in the post anticipations would presumably be taken not to be philosophy, but it sounds a whole lot to me like a quick and dirty advocacy of anti-realism.

Hmm. It sounds to me like a kind of methodological twist on logical positivism...just don't bother with things that don't have empirical consequences.

comment by BobTheBob · 2011-05-22T21:23:01.270Z · LW(p) · GW(p)

I think Peterdjones's answer hits it on the head. I understand you've thrashed-out related issues elsewhere, but it seems to me your claim that the idea of an objective value judgment is incoherent would again require doing quite a bit of philosophy to justify.

Really I meant to be throwing the ball back to lukeprog to give us an idea of what the 'arguing about facts and anticipations' alternative is, if not just philosophy pretending not to be. I could have been more clear about this. Part of my complaint is the wanting to have it both ways. For example, the thinking in the post [anticipations](http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/) would presumably be taken not to be philosophy, but it sounds a whole lot to me like a quick and dirty advocacy of anti-realism. If LWers are serious about this idea, they really should look into its implications if they want to avoid inadvertent contradictions in their world-views. That means doing some philosophy.

comment by Peterdjones · 2011-05-22T22:08:15.021Z · LW(p) · GW(p)

You say that objective values are incoherent, but you offer no argument for it. Presenting philosophical claims without justification isn't something different to philosophy, or something better. It isn't good rationality either. Rationality is as rationality does.

Replies from: Amanojack
comment by Amanojack · 2011-05-23T15:19:53.792Z · LW(p) · GW(p)

By incoherent I simply mean "I don't know how to interpret the words." So far no one seems to want to help me do that, so I can only await a coherent definition of objective ethics and related terms. Then possibly an argument could start. (But this is all like deja vu from the recent metaethics threads.)

Replies from: Peterdjones
comment by Peterdjones · 2011-05-23T15:34:41.930Z · LW(p) · GW(p)

Can you interpret the words "morality is subjective"? How about the words "morality is not subjective"?

Replies from: Amanojack
comment by Amanojack · 2011-05-23T15:42:43.575Z · LW(p) · GW(p)

"Morality is subjective": Each person has their own moral sentiments.

"Morality is not subjective": Each person does not have their own moral sentiments. Or there is something more than each person's moral sentiments that is worth calling "moral." <--- But I ask, what is that "something more"?

Replies from: Peterdjones
comment by Peterdjones · 2011-05-23T15:56:16.826Z · LW(p) · GW(p)

OK. That is not what "subjective" means. What it means is that if something is subjective, an opinion is guaranteed to be correct or the last word on the matter just because it is the person's opinion. And "objective" therefore means that it is possible for someone to be wrong in their opinion.

Replies from: Amanojack
comment by Amanojack · 2011-05-23T16:12:33.853Z · LW(p) · GW(p)

I don't claim moral sentiments are correct, but simply that a person's moral sentiment is their moral sentiment. They feel some emotions, and that's all I know. You are seeming to say there is some way those emotions can be correct or incorrect, but in what sense? Or probably a clearer way to ask the question is, "What disadvantage can I anticipate if my emotions are incorrect?"

Replies from: Peterdjones
comment by Peterdjones · 2011-05-23T16:29:10.228Z · LW(p) · GW(p)

An emotion, such as a feeling of elation or disgust, is not correct or incorrect per se; but an emotion per se is no basis for a moral sentiment, because moral sentiment has to be about something. You could think gay marriage is wrong because homosexuality disgusts you, or you could feel serial-killing is good because it elates you, but that doesn't mean the conclusions you are coming to are right. It may be a cast iron fact that you have those particular sentiments, but that says nothing about the correctness of their content, any more than any opinion you entertain is automatically correct.

ETA The disadvantages you can expect if your emotions are incorrect include being in the wrong whilst feeling you are in the right. Much as if you are entertaining incorrect opinions.

Replies from: Amanojack
comment by Amanojack · 2011-05-23T16:42:07.400Z · LW(p) · GW(p)

What if I don't care about being wrong (if that's really the only consequence I experience)? What if I just want to win?

Replies from: Peterdjones
comment by Peterdjones · 2011-05-23T16:53:09.939Z · LW(p) · GW(p)

Then you are, or are likely to be, morally in the wrong. That is of course possible. You can choose to do wrong. But it doesn't constitute any kind of argument. Someone can elect to ignore the roundness of the world for some perverse reason, but that doesn't make "The world is round" false or meaningless or subjective.

Replies from: Amanojack
comment by Amanojack · 2011-05-23T18:12:57.474Z · LW(p) · GW(p)

You can choose to do wrong. But it doesn't constitute any kind of argument.

Indeed it is not an argument. Yet I can still say, "So what?" I am not going to worry about something that has no effect on my happiness. If there is some way it would have an effect, then I'd care about it.

Someone can elect to ignore the roundness of the world for some perverse reason, but that doesn't make "!he world is round" false or meaningless or subjective.

The difference is, believing "The world is round" affects whether I win or not, whereas believing "I'm morally in the wrong" does not.

Replies from: None, Peterdjones, Peterdjones
comment by [deleted] · 2011-05-23T19:11:06.002Z · LW(p) · GW(p)

The difference is, believing "The world is round" affects whether I win or not, whereas believing "I'm morally in the wrong" does not.

That is apparently true in your hypothetical, but it's not true in the real world. Just as the roundness of the world has consequences, the wrongness of an action has consequences. For example, if you kill someone, then your fate is going to depend (probabilistically) on whether you were in the right (e.g. he attacked and you were defending your life) or in the wrong (e.g. you murdered him when he caught you burgling his house). The more in the right you were, then, ceteris paribus, the better your chances are.

Replies from: Amanojack, Peterdjones
comment by Amanojack · 2011-05-25T07:37:37.200Z · LW(p) · GW(p)

For example, if you kill someone, then your fate is going to depend (probabilistically) on whether you were in the right (e.g. he attacked and you were defending your life) or in the wrong (e.g. you murdered him when he caught you burgling his house).

You're interpreting "I'm morally in the wrong" to mean something like, "Other people will react badly to my actions," in which case I fully agree with you that it would affect my winning. Peterdjones apparently does not mean it that way, though.

Replies from: None
comment by [deleted] · 2011-05-25T10:12:21.858Z · LW(p) · GW(p)

You're interpreting "I'm morally in the wrong" to mean something like, "Other people will react badly to my actions," in which case I fully agree with you that it would affect my winning.

Actually I am not. I am interpreting "I'm morally wrong" to mean something like, "I made an error of arithmetic in an area where other people depend on me."

An error of arithmetic is an error of arithmetic regardless of whether any other people catch it, and regardless of whether any other people react badly to it. It is not, however, causally disconnected from their reaction, because, even though an error of arithmetic is what it is regardless of people's reaction to it, nevertheless people will probably react badly to it if you've made it in an area where other people depend on you. For example, if you made an error of arithmetic in taking a test, it is probably the case that the test-grader did not make the same error of arithmetic and so it is probably the case that he will react badly to your error. Nevertheless, your error of arithmetic is an error and is not merely getting-a-different-answer-from-the-grader. Even in the improbable case where you luck out and the test grader makes exactly the same error as you and so you get full marks, nevertheless, you did still make that error.

Even if everyone except you wakes up tomorrow and believes that 3+4=6, whereas you still remember that 3+4=7, nevertheless in many contexts you had better not switch to what the majority believe. For example, if you are designing something that will stand up, like a building or a bridge, you had better get your math right, you had better correctly add 3+4=7 in the course of designing the edifice if that sum is ever called on calculating whether the structure will stand up.

If humanity divides into two factions, one faction of which believes that 3+4=6 and the other of which believes that 3+4=7, then the latter faction, the one that adds correctly, will in all likelihood over time prevail on account of being right. This is true even if the latter group starts out in the minority. Just imagine what sort of tricks you could pull on people who believe that 3+4=6. Because of the truth of 3+4=7, eventually people who are aware of this truth will succeed and those who believe that 3+4=6 will fail, and over time the vast majority of society will once again come to accept that 3+4=7.

And similarly with morality.

Replies from: Alicorn, Amanojack
comment by Alicorn · 2011-05-25T20:19:43.049Z · LW(p) · GW(p)

Just imagine what sort of tricks you could pull on people who believe that 3+4=6.

Nothing's jumping out at me that would seriously impact a group's effectiveness from day to day. I rarely find myself needing to add three and four in particular, and even more rarely in high-stakes situations. What did you have in mind?

Replies from: None
comment by [deleted] · 2011-05-25T21:40:39.730Z · LW(p) · GW(p)

Suppose you think that 3+4=6.

I offer you the following deal: give me $3 today and $4 tomorrow, and I will give you a 50 cent profit the day after tomorrow, by returning to you $6.50. You can take as much advantage of this as you want. In fact, if you like, you can give me $3 this second, $4 in one second, and in the following second I will give you back all your money plus 50 cents profit - that is, I will give you $6.50 in two seconds.

Since you think that 3+4=6, you will jump at this amazing deal.
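
A minimal sketch of the money pump just described, assuming the mistaken adder accepts any deal that looks profitable by their own faulty arithmetic; the function names, starting amounts, and number of rounds are illustrative assumptions, not part of the original comment:

```python
# Hypothetical illustration of the money pump described above.

def faulty_add(a, b):
    """The mistaken faction's arithmetic: they believe 3 + 4 = 6."""
    if (a, b) in {(3, 4), (4, 3)}:
        return 6
    return a + b

def run_pump(rounds=10, payout=6.50):
    mark_wealth = 100.0   # the mistaken adder's starting cash
    pumper_wealth = 0.0   # the correct adder's starting cash
    for _ in range(rounds):
        perceived_cost = faulty_add(3, 4)   # the mark thinks the deal costs $6
        if payout <= perceived_cost:        # the mark only accepts deals that look profitable
            break
        mark_wealth += payout - (3 + 4)     # actually pays $7, gets $6.50 back: loses $0.50
        pumper_wealth += (3 + 4) - payout   # the other side gains $0.50 per round
    return mark_wealth, pumper_wealth

print(run_pump())  # (95.0, 5.0): after 10 rounds the mistaken adder is down $5
```

The sketch just makes concrete the point argued above: the loss accrues whether or not anyone in the transaction agrees on what the correct sum is.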

Replies from: Alicorn, Amanojack
comment by Alicorn · 2011-05-25T21:59:33.165Z · LW(p) · GW(p)

I find that most people who believe absurd things still have functioning filters for "something is fishy about this". I talked to a person who believed that the world was going to end in 2012, and I offered to give them a dollar right then in exchange for a hundred after the world didn't end, but of course they didn't take it: something was fishy about that.

Also, dollars are divisible: someone who believes that 3+4=6 may not believe that 300+400=600.

Replies from: None
comment by [deleted] · 2011-05-25T22:46:03.334Z · LW(p) · GW(p)

If he isn't willing to take your trade, then his alleged belief that the world will end in 2012 is weak at best. In contrast, if you offer to give me $6.50 in exchange for $3 plus $3, then I will take your offer, because I really do believe that 3+3=6.

On the matter of divisibility, you are essentially proposing that someone with faulty arithmetic can effectively repair the gap by translating arithmetic problems away from the gap (e.g. by realizing that 3 dollars is 300 pennies and doing arithmetic on the pennies). But in order for them to do this consistently they need to know where the gap is, and if they know that, then it's not a genuine gap. If they realize that their belief that 3+4=6 is faulty, then they don't really believe it. In contrast, if they don't realize that their belief that 3+4=6 is faulty, then they won't consistently translate arithmetic problems away from the gap, and so my task becomes a simple matter of finding areas where they don't translate problems away from the gap, but instead fall in.

Replies from: Alicorn
comment by Alicorn · 2011-05-25T22:56:11.769Z · LW(p) · GW(p)

Are you saying that you would not be even a little suspicious and inclined to back off if someone said they'd give you $6.50 in exchange for $3+$3? Not because your belief in arithmetic is shaky, but because your trust that people will give you fifty cents for no obvious reason is nonexistent and there is probably something going on?

I'm not denying that in a thought experiment, agents that are wrong about arithmetic can be money-pumped. I'm skeptical that in reality, human beings that are wrong about arithmetic can be money-pumped on an interesting scale.

Replies from: None
comment by [deleted] · 2011-05-25T23:35:50.703Z · LW(p) · GW(p)

Are you saying that you would not be even a little suspicious and inclined to back off if someone said they'd give you $6.50 in exchange for $3+$3? Not because your belief in arithmetic is shaky, but because your trust that people will give you fifty cents for no obvious reason is nonexistent and there is probably something going on?

In my hypothetical, we can suppose that they are perfectly aware of the existence of the other group. That is, the people who think that 3+4=7 are aware of the people who think that 3+4=6, and vice versa. This will provide them with all the explanation they need for the offer. They will think, "this person is one of those people who think that 3+4=7", and that will explain to them the deal. They will see that the others are trying to profit off them, but they will believe that the attempt will fail, because after all, 3+4=6.

As a matter of fact, in my hypothetical the people who believe that 3+4=6 would be just as likely to offer those who believe that 3+4=7 a deal in an attempt to money-pump them. Since they believe that 3+4=6, and are aware of the belief of the others, they might offer the others the following deal: "give us $6.50, and then the next day we will give you $3 and the day after $4." Since they believe that 3+4=6, they will think they are ripping the others off.

I'm not denying that in a thought experiment, agents that are wrong about arithmetic can be money-pumped. I'm skeptical that in reality, human beings that are wrong about arithmetic can be money-pumped on an interesting scale.

The thought experiment wasn't intended to be applied to humans as they really are. It was intended to explain humans as they really are by imagining a competition between two kinds of humans - a group that is like us, and a group that is not like us. In the hypothetical scenario, the group like us wins.

And I think you completely missed my point, by the way. My point was that arithmetic is not merely a matter of agreement. The truth of a sum is not merely a matter of the majority of humanity agreeing on it. If more than half of humans believed that 3+4=6, this would not make 3+4=6 be true. Arithmetic truth is independent of majority opinion (call the view that arithmetic truth is a matter of consensus within a human group "arithmetic relativism" or "the consensus theory of arithmetic truth"). I argued for this as follows: suppose that half of humanity - nay, more than half - believed that 3+4=6, and a minority believed that 3+4=7. I argued that the minority with the latter belief would have the advantage. But if consensus defined arithmetic truth, that should not be the case. Therefore consensus does not define arithmetic truth.

My point is this: that arithmetic relativism is false. In your response, you actually assumed this point, because you've been assuming all along that 3+4=6 is false, even though in my hypothetical scenario a majority of humanity believed it is true.

So you've actually assumed my conclusion but questioned the argument that I used to argue for the conclusion.

And this, in turn, was to illustrate a more general point about consensus theories and relativism. The context was a discussion of morality. I had been interpreted as advocating what amounts to a consensus theory of morality, and I was trying to explain why my specific claims do not entail a consensus theory of morality, but are also compatible with a theory of morality as independent of consensus.

comment by Amanojack · 2011-05-25T22:34:57.654Z · LW(p) · GW(p)

I agree with this, if that makes any difference.

comment by Amanojack · 2011-05-25T18:33:21.816Z · LW(p) · GW(p)

In sum, you seem to be saying that morality involves arithmetic, and being wrong about arithmetic can hurt me, so being wrong about morality can hurt me.

Replies from: None
comment by [deleted] · 2011-05-25T19:11:02.642Z · LW(p) · GW(p)

There's no particular connection between morality and arithmetic that I'm aware of. I brought up arithmetic to illustrate a point. My hope was that arithmetic is less problematic, less apt to lead us down philosophical blind alleys, so that by using it to illustrate a point I wasn't opening up yet another can of worms.

Replies from: Amanojack
comment by Amanojack · 2011-05-25T19:38:09.260Z · LW(p) · GW(p)

Then you basically seem to be saying I should signal a certain morality if I want to get on well in society. Well I do agree.

comment by Peterdjones · 2011-05-23T19:18:51.332Z · LW(p) · GW(p)

Whether someone is judged right and wrong by others has consequences, but the people doing the judging might be wrong. It is still an error to make morality justify itself in terms of instrumental utility, since there are plenty of examples of things that are instrumentally right but ethically wrong, like improved gas chambers.

Replies from: None
comment by [deleted] · 2011-05-23T20:13:00.191Z · LW(p) · GW(p)

Whether someone is judged right and wrong by others has consequences, but the people doing the judging might be wrong.

Actually being in the right increases your probability of being judged to be in the right. Yes, the people doing the judging may be wrong, and that is why I made the statement probabilistic. This can be made blindingly obvious with an example. Go to a random country and start gunning down random people in the street. The people there will, with probability so close to 1 as makes no real difference, judge you to be in the wrong, because you of course will be in the wrong.

There is a reason why people's judgment is not far off from right. It's the same reason that people's ability to do basic arithmetic when it comes to money is not far off from right. Someone who fails to understand that $10 is twice $5 (or rather the equivalent in the local currency) is going to be robbed blind and his chances of reproduction are slim to none. Similarly, someone whose judgment of right and wrong is seriously defective is in serious trouble. If someone witnesses a criminal lunatic gun down random people in the street and then walks up to him and says, "nice day", he's a serious candidate for a Darwin Award. Correct recognition of evil is a basic life skill, and any human who does not have it will be cut out of the gene pool. And so, if you go to a random country and start killing people randomly, you will be neutralized by the locals quickly. That's a prediction. Moral thought has predictive power.

It is still an error to make morality justify itself in terms of instrumental utility, since there are plenty of examples of things that are instrumentally right but ethically wrong, like improved gas chambers.

The only reason anyone can get away with the mass murder that you allude to is that they have overwhelming power on their side. And even they did it in secret, as I recall learning, which suggests that powerful as they were, they were not so powerful that they felt safe murdering millions openly.

Morality is how a human society governs itself in which no single person or organized group has overwhelming power over the rest of society. It is the spontaneous self-regulation of humanity. Its scope is therefore delimited by the absence of a person or organization with overwhelming power. Even though just about every place on Earth has a state, since it is not a totalitarian state there are many areas of life in which the state does not interfere, and which are therefore effectively free of state influence. In these areas of life humanity spontaneously self-regulates, and the name of the system of spontaneous self-regulation is morality.

Replies from: AdeleneDawner, Peterdjones
comment by AdeleneDawner · 2011-05-24T01:21:46.478Z · LW(p) · GW(p)

Similarly, someone whose judgment of right and wrong is seriously defective is in serious trouble. If someone witnesses a criminal lunatic gun down random people in the street and then walks up to him and says, "nice day", he's a serious candidate for a Darwin Award. Correct recognition of evil is a basic life skill, and any human who does not have it will be cut out of the gene pool.

It sounds to me like you're describing the ability to recognize danger, not evil, there.

Say that your hypothetical criminal lunatic manages to avoid the police, and goes about his life. Later that week, he's at a buffet restaurant, acting normally. Is he still evil? Assuming nobody recognizes him from the shooting, do you expect the other people using the buffet to react unusually to him in any way?

Replies from: None
comment by [deleted] · 2011-05-24T02:54:39.242Z · LW(p) · GW(p)

It sounds to me like you're describing the ability to recognize danger, not evil, there.

It's not either/or. There is no such thing as a bare sense of danger. For example, if you are about to drive your car off a cliff, hopefully you notice in time and stop. In that case, you've sensed danger - but you also sensed the edge of a cliff, probably with your eyes. Or if you are about to drink antifreeze, hopefully you notice in time and stop. In that case, you've sensed danger - but you've also sensed antifreeze, probably with your nose.

And so on. It's not either/or. You don't either sense danger or sense some specific thing which happens to be dangerous. Rather, you sense something that happens to be dangerous, and because you know it's dangerous, you sense danger.

Say that your hypothetical criminal lunatic manages to avoid the police, and goes about his life. Later that week, he's at a buffet restaurant, acting normally. Is he still evil?

Chances are higher than average that if he was a criminal lunatic a few days ago, he is still a criminal lunatic today.

Assuming nobody recognizes him from the shooting, do you expect the other people using the buffet to react unusually to him in any way?

Obviously not, because if you assume that people fail to perceive something, then it follows that they will behave in a way that is consistent with their failure to perceive it. Similarly, if you fail to notice that the antifreeze that you're drinking is anything other than fruit punch, then you can be expected to drink it just as if it were fruit punch.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2011-05-24T03:40:07.415Z · LW(p) · GW(p)

My point was that in the shooting case, the perception of danger is sufficient to explain bystanders' behavior. They may perceive other things, but that seems mostly irrelevant.

You said:

Correct recognition of evil is a basic life skill, and any human who does not have it will be cut out of the gene pool.

This claim appears to be incompatible with your expectation that people will not notice your hypothetical murderer when they encounter him acting according to social norms after committing a murder, given that he's supposedly still evil.

Replies from: None
comment by [deleted] · 2011-05-24T04:11:39.040Z · LW(p) · GW(p)

My point was that in the shooting case, the perception of danger is sufficient to explain bystanders' behavior.

People perceive danger because they perceive evil, and evil is dangerous.

They may perceive other things, but that seems mostly irrelevant.

It is not irrelevant that they perceive a specific thing (such as evil) which is dangerous. Take away the perception of the specific thing, and they have no basis upon which to perceive danger. Only Spiderman directly perceives danger, without perceiving some specific thing which is dangerous. And he's fictional.

Correct recognition of evil is a basic life skill, and any human who does not have it will be cut out of the gene pool.

This claim appears to be incompatible with your expectation that people will not notice your hypothetical murderer when they encounter him acting according to social norms after committing a murder, given that he's supposedly still evil.

I was referring to the standard, common ability to recognize evil. I was saying that someone who does not have that ability will be cut out of the gene pool (not definitely - probabilistically, his chances of surviving and reproducing are reduced, and over the generations the effect of this disadvantage compounds).

People who fail to recognize that the guy is that same guy from before are not thereby missing the standard human ability to recognize evil.

comment by Peterdjones · 2011-05-23T20:46:40.663Z · LW(p) · GW(p)

If someone witnesses a criminal lunatic gun down random people in the street and then walks up to him and says, "nice day", he's a serious candidate for a Darwin Award.

Except when the evil guys take over. Then you are in trouble if you oppose them.

The only reason anyone can get away with the mass murder that you allude to is that they have overwhelming power on their side.

That doesn't affect my point. If there are actual or conceptual circumstances where instrumental good diverges from moral good, the two cannot be equated.

Morality is how a human society governs itself in which no single person or organized group has overwhelming power over the rest of society.

Why would it be wrong if they do? Your theory of morality seems to be in need of another theory of morality to justify it.

Replies from: None
comment by [deleted] · 2011-05-23T23:19:29.041Z · LW(p) · GW(p)

Except when the evil guys take over. Then you are in trouble if you oppose them.

Which is why the effective scope of morality is limited by concentrated power, as I said.

That doesn't affect my point. If there are actual or conceptual circumstances where instrumental good diverges from moral good, the two cannot be equated.

I did not equate moral good with instrumental good in the first place.

Why would it be wrong if they do?

I didn't say it would be wrong. I was talking about making predictions. The usefulness of morality in helping you to predict outcomes is limited by concentrated power.

Your theory of morality seems to be in need of another theory of morality to justify it.

On the contrary, my theory of morality is confirmed by the evidence. You yourself supplied some of the evidence. You pointed out that a concentration of power creates an exception to the prediction that someone who guns down random people will be neutralized. But this exception fits with my theory of morality, since my theory of morality is that it is the spontaneous self-regulation of humanity. Concentrated power interferes with self-regulation.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-24T12:29:11.226Z · LW(p) · GW(p)

You say:

I did not equate moral good with instrumental good in the first place.

...but you also say...

The usefulness of morality in helping you to predict outcomes

...which seems to imply that you are still thinking of morality as something that has to pay its way instrumentally, by making useful predictions.

On the contrary, my theory of morality is confirmed by the evidence [...] But this exception fits with my theory of morality, since my theory of morality is that it is the spontaneous self-regulation of humanity. Concentrated power interferes with self-regulation.

It's a conceptual truth that power interferes with spontaneous self-regulation, but that isn't the point. The point is not whether you have a theory that makes predictions, but whether it is a theory of morality.

It is dubious to say of any society that the way it is organised is ipso facto moral. You have forestalled the relativistic problem by saying that societies must self-organise for equality and justice, not any old way, which takes it as read that equality and justice are Good Things. But an ethical theory must explain why they are good, not rest on them as a given.

Replies from: None
comment by [deleted] · 2011-05-24T12:58:23.172Z · LW(p) · GW(p)

...which seems to imply that you are still thinking of morality as something that has to pay its way instrumentally, by making useful predictions.

"Has to"? I don't remember saying "has to". I remember saying "does", or words to that effect. I was disputing the following claim:

The difference is, believing "The world is round" affects whether I win or not, whereas believing "I'm morally in the wrong" does not.

This is factually false, considered as a claim about the real world.

It is dubious to say of any society that the way it is organised is ipso facto moral. You have forestalled the relativistic problem by saying that societies must self-organise for equality and justice, not any old way, which takes it as read that equality and justice are Good Things. But an ethical theory must explain why they are good, not rest on them as a given.

I am presenting the hypothesis that, under certain constraints, there is no way for humanity to organize itself but morally or close to morally and that it does organize itself morally or close to morally. The most important constraint is that the organization is spontaneous, that is to say, that it does not rely on a central power forcing everyone to follow the same rules invented by that same central power. Another constraint is the absence of war, though I think this constraint is already implicit in the idea of "spontaneous order" that I am making use of, since war destroys and prevents order.

Because humans organize themselves morally, it is possible to make predictions. However, because of the "no central power" constraint, the scope of those predictions is limited to areas outside the control of the central power.

Fortunately for those of us who seek to make predictions on the basis of morality, and also fortunately for people in general, even though the planet is covered with centralized states, much of life still remains largely outside of their control.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-24T16:11:23.816Z · LW(p) · GW(p)

I am presenting the hypothesis that, under certain constraints, there is no way for humanity to organize itself but morally or close to morally and that it does organize itself morally or close to morally.

Is that a stipulative definition ("morality" =def "spontaneous organisation"), or is there some independent standard of morality on which it is based?

The most important constraint is that the organization is spontaneous, that is to say, that it does not rely on a central power forcing everyone to follow the same rules invented by that same central power.

What about non-centralised power? What if one fairly large group -- the gentry, men, citizens, some racial group -- has power over another in a decentralised way?

And what counts as a society? Can an Athenian slave-owner state that all citizens in their society are equal, and that slaves are not members of their society?

ETA: Actually, it's worse than that. Not only are there examples of non-centralised power, there are cases where centralised power is on the side of angels and spontaneous self-organisation on the other side; for instance, the Civil Rights struggle, where the federal government backed equality, and the opposition was from the grassroots.

Replies from: None
comment by [deleted] · 2011-05-24T18:19:44.058Z · LW(p) · GW(p)

ETA: Actually, it's worse than that. Not only are there examples of non-centralised power, there are cases where centralised power is on the side of angels and spontaneous self-organisation on the other side; for instance, the Civil Rights struggle, where the federal government backed equality, and the opposition was from the grassroots.

The Civil Rights struggle was national government versus state government, not government versus people. The Jim Crow laws were laws created by state legislatures, not spontaneous laws created by the people.

There is, by the way, such a thing as spontaneous law created by the people even under the state. The book Order Without Law is about this. The "order" it refers to is the spontaneous law - that is, the spontaneous self-government of the people acting privately, without help from the state. This spontaneous self-government ignores and in some cases contradicts the state's official, legislated law.

Jim Crow was an example of official state law, and not an example of spontaneous order.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-24T19:32:42.346Z · LW(p) · GW(p)

The Civil Rights struggle was national government versus state government, not government versus people. The Jim Crow laws were laws created by state legislatures, not spontaneous laws created by the people.

Plenty of things that happened weren't sanctioned by state legislatures, such as discrimination by private lawyers, hassling of voters during registration drives, and the assassination of MLK.

There is, by the way, such a thing as spontaneous law created by the people even under the state.

But law isn't morality. There is such a thing as laws that apply only to certain people, and which support privilege and the status quo rather than equality and justice.

Replies from: None
comment by [deleted] · 2011-05-24T20:05:51.466Z · LW(p) · GW(p)

Plenty of things that happened weren't sanctioned by state legislatures, such as discrimination by private lawyers, hassling of voters during registration drives, and the assassination of MLK.

Legislation distorts society and the distortion ripples outward. As for the assassination, that was a single act. Order is a statistical regularity.

But law isn't morality.

I didn't say it was. I pointed out an example of spontaneous order. It is my thesis that spontaneous order tends to be moral. Much order is spontaneous, so much order is moral, so you can make predictions on the basis of what is moral. That should not be confused with the claim that all order is morality, or that all law is morality - which is the claim you are disputing and a claim I did not make.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-24T21:38:45.135Z · LW(p) · GW(p)

Legislation distorts society

From its primordial state of equality...? I can see how a society that starts equal might self-organise to stay that way. But I don't think they start equal that often.

comment by Peterdjones · 2011-05-24T13:50:30.639Z · LW(p) · GW(p)

Indeed it is not an argument. Yet I can still say, "So what?" I am not going to worry about something that has no effect on my happiness. If there is some way it would have an effect, then I'd care about it.

The fact that you are amoral does not mean there is anything wrong with morality, and is not an argument against it. You might as well be saying "there is a perfectly good rational argument that the world is round, but I prefer to be irrational".

The difference is, believing "The world is round" affects whether I win or not, whereas believing "I'm morally in the wrong" does not.

That doesn't constitute an argument unless you can explain why your winning is the only thing that should matter.

Replies from: Amanojack
comment by Amanojack · 2011-05-25T08:49:20.947Z · LW(p) · GW(p)

Yeah, I said it's not an argument. Yet again I can only ask, "So what?" (And this doesn't make me amoral in the sense of not having moral sentiments. If you tell me it is wrong to kill a dog for no reason, I will agree because I will interpret that as, "We both would be disgusted at the prospect of killing a dog for no reason." But you seem to be saying there is something more.)

That doesn't constitute an argument unless you can explain why your winning is the only thing that should matter.

The wordings "affect my winning" and "matter" mean the same thing to me. I take "The world is round" seriously because it matters for my actions. I do not see how "I'm morally in the wrong"* matters for my actions. (Nor how "I'm pan-galactically in the wrong" matters.)

*EDIT: in the sense that you seem to be using it (quite possibly because I don't know what that sense even is!).

Replies from: Peterdjones
comment by Peterdjones · 2011-05-25T13:38:56.161Z · LW(p) · GW(p)

Yeah, I said it's not an argument. Yet again I can only ask, "So what?"

So being wrong and not caring that you are in the wrong is not the same as being right.

(And this doesn't make me amoral in the sense of not having moral sentiments. If you tell me it is wrong to kill a dog for no reason, I will agree because I will interpret that as, "We both would be disgusted at the prospect of killing a dog for no reason." But you seem to be saying there is something more.)

Yes. I am saying that moral sentiments can be wrong, and that that can be realised through reason, and that getting morality right matters more than anything.

The wordings "affect my winning" and "matter" mean the same thing to me.

But they don't mean the same thing. Morality matters more than anything else by definition. You don't prove anything by adopting an idiosyncratic private language.

I take "The world is round" seriously because it matters for my actions. I do not see how "I'm morally in the wrong"* matters for my actions. (Nor how "I'm pan-galactically in the wrong" matters.)

The question is whether mattering for your actions is morally justifiable.

Replies from: Amanojack
comment by Amanojack · 2011-05-25T18:59:38.518Z · LW(p) · GW(p)

So being wrong and not caring that you are in the wrong is not the same as being right.

Yet I still don't care, and by your own admission I suffer not in the slightest from my lack of caring.

I am saying that moral sentiments can be wrong, and that that can be realised through reason, and that getting morality right matters more than anything.

Zorg says that getting pangalacticism right matters more than anything. He cannot tell us why it matters, but boy it really does matter.

Morality matters more than anything else by definition.

Which would be? If you refer me to the dictionary again, I think we're done here.

comment by Peterdjones · 2011-05-23T18:44:29.322Z · LW(p) · GW(p)

The fact that you are not going to worry about morality, does not make morality a) false b) meaningless or c) subjective. Can I take it you are no longer arguing for any of claims a) b) or c) ?

The difference is, believing "The world is round" affects whether I win or not, whereas believing "I'm morally in the wrong" does not.

You have not succeeded in showing that winning is the most important thing.

Replies from: Amanojack
comment by Amanojack · 2011-05-25T07:31:37.967Z · LW(p) · GW(p)

The fact that you are not going to worry about morality, does not make morality a) false b) meaningless or c) subjective. Can I take it you are no longer arguing for any of claims a) b) or c) ?

I've never argued (a), I'm still arguing (actually just informing you) that the words "objective morality" are meaningless to me, and I'm still arguing (c) but only in the sense that it is equivalent to (b): in other words, I can only await some argument that morality is objective. (But first I'd need a definition!)

You have not succeeded in showing that winning is the most important thing.

I'm using the word winning as a synonym for "getting what I want," and I understand the most important thing to mean "what I care about most." And I mean "want" and "care about" in a way that makes it tautological. Keep in mind I want other people to be happy, not suffer, etc. Nothing either of us has argued so far indicates we would necessarily have different moral sentiments about anything.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-25T14:15:11.265Z · LW(p) · GW(p)

The fact that you are not going to worry about morality, does not make morality a) false b) meaningless or c) subjective. Can I take it you are no longer arguing for any of claims a) b) or c) ?

I've never argued (a), I'm still arguing (actually just informing you) that the words "objective morality" are meaningless to me

You are not actually being all that informative, since there remains a distinct suspicion that when you say some X is meaningless-to-you, that is a proxy for I-don't-agree-with-it. I notice throughout these discussions that you never reference accepted dictionary definitions as a basis for meaningfulness, but instead always offer some kind of idiosyncratic personal testimony.

and I'm still arguing (c) but only in the sense that it is equivalent to (b): in other words, I can only await some argument that morality is objective. (But first I'd need a definition!)

What is wrong with dictionary definitions?

You have not succeeded in showing that winning is the most important thing.

I'm using the word winning as a synonym for "getting what I want," and I understand the most important thing to mean "what I care about most."

That doesn't affect anything. You still have no proof for the revised version.

And I mean "want" and "care about" in a way that makes it tautological. Keep in mind I want other people to be happy

Other people out there in the non-existent Objective World?

, not suffer, etc. Nothing either of us has argued so far indicates we would necessarily have different moral sentiments about anything.

I don't think moral anti-realists are generally immoral people. I do think it is an intellectual mistake, whether or not you care about that.

Replies from: Amanojack
comment by Amanojack · 2011-05-25T19:36:31.542Z · LW(p) · GW(p)

You are not actually being all that informative, since there remains a distinct suspicion that when you say some X is meaningless-to-you, that is a proxy for I-don't-agree-with-it.

Zorg said the same thing about his pan-galactic ethics.

I notice throughout these discussions that you never reference accepted dictionary definitions as a basis for meaningfulness, but instead always offer some kind of idiosyncratic personal testimony.

Did you even read the post we're commenting on?

That doesn't affect anything. You still have no proof for the revised version.

Wait, you want proof that getting what I want is what I care about most?

Other people out there in the non-existent Objective World?

Read what I wrote again.

I don't think moral anti-realists are generally immoral people.

Read.

comment by nshepperd · 2011-05-23T02:56:38.286Z · LW(p) · GW(p)

"Changing your aims" is an action, presumably available for guiding with philosophy.

comment by lukeprog · 2011-05-24T04:58:31.244Z · LW(p) · GW(p)

Upvoted for thoughtfulness and thoroughness.

(in this post, you distinguish stipulation and definition - do you have in mind a distinction I'm glossing over?)

I'm using 'definition' in the common sense: "the formal statement of the meaning or significance of a word, phrase, etc." A stipulative definition is a kind of definition "in which a new or currently-existing term is given a specific meaning for the purposes of argument or discussion in a given context."

A conceptual analysis of a term using necessary and sufficient conditions is another type of definition, in the common sense of 'definition' given above. Normally, a conceptual analysis seeks to arrive at a "formal statement of the meaning or significance of a word, phrase, etc." in terms of necessary and sufficient conditions.

Jackson is observing that what Tom and Jack should be doing is saying that rightness is that thing (whatever exactly it is) which our folk concepts roughly converge on, and taking up the task of refining our understanding from there - no defining involved.

Using the dictionary sense of the term 'define', I would speak (in my language) of conceptual analysis as a particular way of defining a term, since the end result of a conceptual analysis is meant to be a "formal statement of the meaning or significance of a word, phrase, etc."

In your opening section you produce an example meant to show conceptual analysis is silly. Looks to me more like a silly attempt at an example of conceptual analysis.

I opened with a debate that everybody knew was silly, and tried to show that it was analogous to popular forms of conceptual analysis. I didn't want to start with a popular example of conceptual analysis because philosophy-familiar people will have been trained not to find those examples silly. I gave at least three examples of actual philosophical analysis in my post (Schroeder on desire, Gettier on knowledge, Jackson on morality).

And I do think my opening offers an accurate example of conceptual analysis. Albert and Barry's arguments about the computer microphone and hypothetical aliens are meant to probe their intuitive concepts of 'sound' and what set of necessary and sufficient conditions they might converge upon. That's standard conceptual analysis method.

The reason this process looks silly to us (when using a non-standard example like 'sound') is that it is so unproductive. Why think Albert and Barry have the same concept in mind? Words mean slightly different things in different cultures, subcultures, and small communities. We develop different intuitions about their meaning based on divergent life experiences. Our intuitions differ from each other's due to the specifics of unconscious associative learning and attribute substitution heuristics. What is the point of bashing our intuitions about the meaning of terms against each other for thousands of pages, in the hopes that we'll converge on a precise set of necessary and sufficient conditions? Even if we can get Albert and Barry to agree, what happens when Susan wants to use the same term, but has slightly differing intuitions about its meaning? And, let's say we arrive at a messy set of 6 necessary and sufficient conditions for the intuitive meaning of the term. Is that going to be as useful for communication as one we consciously chose because it carved up thingspace well? I doubt it. The IAU's definition of 'planet' is more useful than the messy 'folk' definition of 'planet'. Folk intuitions about 'planet' evolved over thousands of years and different people have different intuitions which may not always converge. In 2006, the IAU used modern astronomical knowledge to carve up thingspace in a more useful and informed way than our intuitions do.

Vague, intuitively-defined concepts are useful enough for daily conversation in many cases, and wherever they break down due to divergent intuitions and uses, we can just switch to stipulation/tabooing.

If you don't have the patience to do philosophy, or you don't think it's of any value, by all means do something else - argue about facts and anticipations, whatever precisely that may involve. Just don't think that in doing this latter thing you'll address the question philosophy is interested in, or that you've said anything at all so far to show philosophy isn't worth doing.

Yes. I'm going to argue about facts and anticipations. I've tried to show (a bit), in this post and in this comment, why doing (certain kinds of) conceptual analysis isn't worth it. I'm curious to hear your answers to my many-questions paragraph about the use of conceptual analysis, above.

I've skipped responding to many parts of your comment because I wanted to 'get on the same page' about a few things first. Please re-raise any issues you'd like a response on.

Replies from: BobTheBob, lukeprog
comment by BobTheBob · 2011-05-26T04:22:04.680Z · LW(p) · GW(p)

You are surely right that there is no point in arguing over definitions in at least one sense - especially the definition of "definition". Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. I hope the following engages constructively with your comments.

Suppose

  • we have two people, Albert and Barry
  • we have one thing, a car, X, of determinate interior volume
  • we have one sentence, S: "X is a subcompact".
  • Albert affirms S, Barry denies S.

Scenario (1): Albert and Barry agree on the standard definition of 'subcompact' - a car is a subcompact just in case 2 407 L < car volume < 2 803 L, but they disagree as to the volume of X. Clearly a factual disagreement.

Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of 'subcompact' (a visit to Wikipedia would resolve the matter). This is a disagreement about standard definitions, and isn't anything people should engage in for long, I agree.

Scenario (3): Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn't be classified as subcompact - i.e., X isn't really a subcompact, notwithstanding the received definition. This doesn't have to be a silly position. It might be that if you graphed numbers of models of car against volume, using various different volume increments, you would find cars really do fall into natural - if vague - groups, and that the natural cutoff for subcompacts is different from the received definition. And this might really matter - a parking-challenged jurisdiction might offer a fee discount for subcompact owners. I would call this a disagreement about the concept of 'subcompact car'. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).

Argument in scenarios 1 and 2 is futile - there is an acknowledged objective answer, and a way to get it - the way to resolve the matter is to measure or to look it up. Arguments as in scenario 3, though, can be useful - especially with less arbitrary concepts than in the example. The goal in such cases is to clarify - to rationalize - concepts. Even if you don't arrive at an uncontroversial end point, you often learn a lot about the concepts ('good', 'knowledge', 'desires', etc.) in the process. Your example of the re-definition of 'planet' fits this model, I think.

This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is not a received, widely accepted analysis or strict definition, just as in meaning something by a word, we don't typically have in mind some strict definition. On the contrary, typically, intuitions about what falls under the concept are agreed by almost everyone, one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which often everyone agrees refute the analysis. This is how I see it, anyway. I'd be interested to know if this seems wrong.

I opened with a debate that everybody knew was silly, and tried to show that it was analogous to popular forms of conceptual analysis. I didn't want to start with a popular example of conceptual analysis because philosophy-familiar people will have been trained not to find those examples silly. I gave at least three examples of actual philosophical analysis in my post (Schroeder on desire, Gettier on knowledge, Jackson on morality).

You may think it's obvious, but I don't see you've shown any of these 3 examples is silly. I don't see that Schroeder's project is silly (I haven't read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples -even far-fetched- helps illuminate the concept - helps us think about what a desire -and hence in part a rational agent- is.

As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiedly believe true propositions, and yet -we intuitively agree- do not know them. The key point is that effectively everyone shares the intuition - that's why the paper was so successful, and this is often how these debates go. Part of what's interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit - conceptual analysis - is elusive.

I objected to your example because I didn't see how anyone could have an intuition based on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments - not all published arguments are top-drawer stuff (but can Cog Sci, e.g., make this boast?). One target of much abuse is John Searle's Chinese Room argument. His argument is multiply flawed, as far as I'm concerned - could get into that another time. But I still think it's interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.

Words mean slightly different things in different cultures, subcultures, and small communities. We develop different intuitions about their meaning based on divergent life experiences. Our intuitions differ from each other's due to the specifics of unconscious associative learning and attribute substitution heuristics. What is the point of bashing our intuitions about the meaning of terms against each other for thousands of pages, in the hopes that we'll converge on a precise set of necessary and sufficient conditions? Even if we can get Albert and Barry to agree, what happens when Susan wants to use the same term, but has slightly differing intuitions about its meaning?

This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must in an important sense mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language - whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of 'planet' demonstrates the same kind of process of clarification of a concept, albeit informed by empirical data. The point of the bashing is that it really does result in progress - we really do come to a better understanding of things.

Replies from: lukeprog
comment by lukeprog · 2011-05-26T06:07:00.640Z · LW(p) · GW(p)

As I see it, your central point is that conceptual analysis is useful because it results in a particular kind of process: the clarification of our intuitive concepts. Because our intuitive concepts are so muddled and not as clear-cut and useful as a stipulated definition such as the IAU's definition for 'planet', I fail to see why clarifying our intuitive concepts is a good use of all that brain power. Such work might theoretically have some value for the psychology of concepts and for linguistics, and yet I suspect neither science would miss philosophy if philosophy went away. Indeed, scientific psychology is often said to have 'debunked' conceptual analysis because concepts are not processed in our brains in terms of necessary and sufficient conditions.

But I'm not sure I'm reading you correctly. Why do you think it's useful to devote all that brainpower to clarifying our intuitive concepts of things?

Replies from: BobTheBob, Peterdjones
comment by BobTheBob · 2011-05-27T21:19:17.564Z · LW(p) · GW(p)

As I see it, your central point is that conceptual analysis is useful because it results in a particular kind of process: the clarification of our intuitive concepts. Because our intuitive concepts are so muddled and not as clear-cut and useful as a stipulated definition such as the IAU's definition for 'planet', I fail to see why clarifying our intuitive concepts is a good use of all that brain power.

I think that where we differ is on 'intuitive concepts' -what I would want to call just 'concepts'. I don't see that stipulative definitions replace them. Scenario (3), and even the IAU's definition, illustrate this. It is coherent for an astronomer to argue that the IAU's definition is mistaken. This implies that she has a more basic concept -which she would strive to make explicit in arguing her case- different than the IAU's. For her to succeed in making her case -which is imaginable- people would have to agree with her, in which case we would have at least partially to share her concept. The IAU's definition tries to make explicit our shared concept -and to some extent legislates, admittedly- but it is a different sort of animal than what we typically use in making judgements.

Philosophy doesn't impact non-philosophical activities often, but when it does the impact is often quite big. Some examples: the influence of Mach on Einstein, of Rousseau and others on the French and American revolutions, Mill on the emancipation of women and freedom of speech, Adam Smith's influence on economic thinking.

I consider, though, that the clarification is an end in itself. This site proves - what's obvious anyway - that philosophical questions naturally have a grip on thinking people. People usually suppose the answer to any given philosophical question to be self-evident, but equally we typically disagree about what the obvious answer is. Philosophy is about elucidating those disagreements.

Keeping people busy with activities which don't turn the planet into more non-biodegradable consumer durables is fine by me. More productivity would not necessarily be a good thing (...to end with a sweeping undefended assertion).

comment by Peterdjones · 2011-05-27T23:37:44.237Z · LW(p) · GW(p)

Because our intuitive concepts are so muddled and not as clear-cut and useful as a stipulated definition such as the IAU's definition for 'planet', I fail to see why clarifying our intuitive concepts is a good use of all that brain power.

OTOH, there is a class of fallacies (the No True Scotsman argument, tendentious redefinition, etc.), which are based on getting stipulative definitions wrong. Getting them right means formalisation of intuition or common usage or something like that.

comment by BobTheBob · 2011-05-26T04:19:18.380Z · LW(p) · GW(p)

You are surely right that there is no point in arguing over definitions in at least one sense - esp the definition of "definition". Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. Hopefully the following will illuminate rather than obfuscate. Suppose

  • we have two people, Albert and Barry
  • we have one thing, a car, X, of determinate interior volume
  • we have one sentence, S: "X is a subcompact".
  • Albert affirms S, Barry denies S.

Scenario (1): Albert and Barry agree on the standard definition of 'subcompact' - a car is a subcompact just in case 2 407 L < car volume < 2 803 L, but they disagree as to the volume of X. Clearly a factual disagreement.

Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of 'subcompact' (a visit to Wikipedia would resolve the matter). This a disagreement about standard definitions, and isn't anything people should engage in for long, I agree.

Scenario (3) Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn't be classified as subcompact -ie, X isn't really subcompact, notwithstanding the received definition. This doesn't have to be a silly position. It might be that if you graphed numbers of models of car against volume, using various different volume increments, you would find cars really do fall into natural -if vague- groups, and that the natural cutoff for subcompacts is different than the received definition. And this might really matter - a parking-challenged jurisdiction might offer a fee discount for subcompact owners. I would call this a disagreement about the concept of 'subcompact car'. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).

Argument in scenarios 1 and 2 is futile - there is an acknowledged objective answer, and a way to get it - the way to resolve the matter is to measure or to look-up. Arguments as in scenario 3, though, can be useful -especially with less arbitrary concepts than in the example. The goal in such cases is to clarify -to rationalize- concepts. Even if you don't arrive at an uncontroversial end point, you often learn a lot about the concepts ('good', knowledge', 'desires', etc) in the process. Your example of the re-definition of 'planet' fits this model, I think.

This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is not a received, widely accepted analysis or strict definition, just as in meaning something by a word, we don't typically have in mind some strict definition. On the contrary, typically, intuitions about what falls under the concept are agreed by almost everyone, one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which often everyone agrees refute the analysis. This is how I see it, anyway. I would be interested to hear if this seems wrong.

I opened with a debate that everybody knew was silly, and tried to show that it was analagous to popular forms of conceptual analysis. I didn't want to start with a popular example of conceptual analysis because philosophy-familiar people will have been trained not to find those examples silly. I gave at least three examples of actual philosophical analysis in my post (Schroeder on desire, Gettier on knowledge, Jackson on morality).

You may think it's obvious, but I don't see you've shown any of these 3 examples is silly. I don't see that Schroeder's project is silly (I haven't read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples -even far-fetched- helps illuminate the concept - helps us think about what a desire -and hence in part a rational agent- is.

As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiedly believe true propositions, and yet -we intuitively agree- do not know them. The key point is that effectively everyone shares the intuition - that's why the paper was so successful, and this is often how these debates go. Part of what's interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit - conceptual analysis - is elusive.

I objected to your example because I didn't see how anyone could have an intuition base on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments - not all published arguments are top-drawer stuff (but can Cog Sci, eg, make this boast?). One target of much abuse is John Searle's Chinese Room argument. His argument is multiply flawed, as far as I'm concerned -could get into that another time. But I still think it's interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.

Words mean slightly different things in different cultures, subcultures, and small communities. We develop different intuitions about their meaning based on divergent life experiences. Our intuitions differ from each other's due to the specifics of unconscious associative learning and attribution substitution heuristics. What is the point of bashing our intuitions about the meaning of terms against each other for thousands of pages, in the hopes that we'll converge on a precise set of necessary and sufficient conditions? Even if we can get Albert and Barry to agree, what happens when Susan wants to use the same term, but has slightly differing intuitions about its meaning?

This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must in an important sense mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language -whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of 'planet' demonstrates the same kind of process of clarification of a concept, albeit informed by empirical data. The point of the bashing is that is that it really does result in progress - we really do come to a better understand of things.

comment by BobTheBob · 2011-05-26T04:13:42.632Z · LW(p) · GW(p)

You are surely right that there is no point in arguing over definitions in at least one sense - esp the definition of "definition". Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. Hopefully the following will illuminate rather than obfuscate. Suppose

  • we have two people, Albert and Barry
  • we have one thing, a car, X, of determinate interior volume
  • we have one sentence, S: "X is a subcompact".
  • Albert affirms S, Barry denies S.

Scenario (1): Albert and Barry agree on the standard definition of 'subcompact' - a car is a subcompact just in case 2 407 L < car volume < 2 803 L, but they disagree as to the volume of X. Clearly a factual disagreement.

Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of 'subcompact' (a visit to Wikipedia would resolve the matter). This a disagreement about standard definitions, and isn't anything people should engage in for long, I agree.

Scenario (3) Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn't be classified as subcompact -ie, X isn't really subcompact, notwithstanding the received definition. This doesn't have to be a silly position. It might be that if you graphed numbers of models of car against volume, using various different volume increments, you would find cars really do fall into natural -if vague- groups, and that the natural cutoff for subcompacts is different than the received definition. And this might really matter - a parking-challenged jurisdiction might offer a fee discount for subcompact owners. I would call this a disagreement about the concept of 'subcompact car'. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).

Argument in scenarios 1 and 2 is futile - there is an acknowledged objective answer, and a way to get it - the way to resolve the matter is to measure or to look-up. Arguments as in scenario 3, though, can be useful -especially with less arbitrary concepts than in the example. The goal in such cases is to clarify -to rationalize- concepts. Even if you don't arrive at an uncontroversial end point, you often learn a lot about the concepts ('good', knowledge', 'desires', etc) in the process. Your example of the re-definition of 'planet' fits this model, I think.

This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is not a received, widely accepted analysis or strict definition, just as in meaning something by a word, we don't typically have in mind some strict definition. On the contrary, typically, intuitions about what falls under the concept are agreed by almost everyone, one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which often everyone agrees refute the analysis.

I opened with a debate that everybody knew was silly, and tried to show that it was analagous to popular forms of conceptual analysis. I didn't want to start with a popular example of conceptual analysis because philosophy-familiar people will have been trained not to find those examples silly. I gave at least three examples of actual philosophical analysis in my post (Schroeder on desire, Gettier on knowledge, Jackson on morality).

You may think it's obvious, but I don't see you've shown any of these 3 examples is silly. I don't see that Schroeder's project is silly (I haven't read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples -even far-fetched- helps illuminate the concept - helps us think about what a desire -and hence in part a rational agent- is.

As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiedly believe true propositions, and yet -we intuitively agree- do not know them. The key point is that effectively everyone shares the intuition - that's why the paper was so successful, and this is often how these debates go. Part of what's interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit - conceptual analysis - is elusive.

I objected to your example because I didn't see how anyone could have an intuition base on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments - not all published arguments are top-drawer stuff (but can Cog Sci, eg, make this boast?). One target of much abuse is John Searle's Chinese Room argument. His argument is multiply flawed, as far as I'm concerned -could get into that another time. But I still think it's interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.

Words mean slightly different things in different cultures, subcultures, and small communities. We develop different intuitions about their meaning based on divergent life experiences. Our intuitions differ from each other's due to the specifics of unconscious associative learning and attribution substitution heuristics. What is the point of bashing our intuitions about the meaning of terms against each other for thousands of pages, in the hopes that we'll converge on a precise set of necessary and sufficient conditions? Even if we can get Albert and Barry to agree, what happens when Susan wants to use the same term, but has slightly differing intuitions about its meaning?

This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must in an important sense mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language -whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of 'planet' demonstrates the same kind of process of clarification of a concept, albeit informed by empirical data. The point of the bashing is that is that it really does result in progress - we really do come to a better understand of things.

comment by BobTheBob · 2011-05-26T04:13:02.398Z · LW(p) · GW(p)

You are surely right that there is no point in arguing over definitions in at least one sense - esp the definition of "definition". Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. Hopefully the following will illuminate rather than obfuscate. Suppose

  • we have two people, Albert and Barry
  • we have one thing, a car, X, of determinate interior volume
  • we have one sentence, S: "X is a subcompact".
  • Albert affirms S, Barry denies S.

Scenario (1): Albert and Barry agree on the standard definition of 'subcompact' - a car is a subcompact just in case 2 407 L < car volume < 2 803 L, but they disagree as to the volume of X. Clearly a factual disagreement.

Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of 'subcompact' (a visit to Wikipedia would resolve the matter). This a disagreement about standard definitions, and isn't anything people should engage in for long, I agree.

Scenario (3) Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn't be classified as subcompact -ie, X isn't really subcompact, notwithstanding the received definition. This doesn't have to be a silly position. It might be that if you graphed numbers of models of car against volume, using various different volume increments, you would find cars really do fall into natural -if vague- groups, and that the natural cutoff for subcompacts is different than the received definition. And this might really matter - a parking-challenged jurisdiction might offer a fee discount for subcompact owners. I would call this a disagreement about the concept of 'subcompact car'. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).

Argument in scenarios 1 and 2 is futile - there is an acknowledged objective answer, and a way to get it - the way to resolve the matter is to measure or to look-up. Arguments as in scenario 3, though, can be useful -especially with less arbitrary concepts than in the example. The goal in such cases is to clarify -to rationalize- concepts. Even if you don't arrive at an uncontroversial end point, you often learn a lot about the concepts ('good', knowledge', 'desires', etc) in the process. Your example of the re-definition of 'planet' fits this model, I think.

This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is not a received, widely accepted analysis or strict definition, just as in meaning something by a word, we don't typically have in mind some strict definition. On the contrary, typically, intuitions about what falls under the concept are agreed by almost everyone, one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which often everyone agrees refute the analysis.

I opened with a debate that everybody knew was silly, and tried to show that it was analagous to popular forms of conceptual analysis. I didn't want to start with a popular example of conceptual analysis because philosophy-familiar people will have been trained not to find those examples silly. I gave at least three examples of actual philosophical analysis in my post (Schroeder on desire, Gettier on knowledge, Jackson on morality).

You may think it's obvious, but I don't see you've shown any of these 3 examples is silly. I don't see that Schroeder's project is silly (I haven't read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples -even far-fetched- helps illuminate the concept - helps us think about what a desire -and hence in part a rational agent- is.

As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiedly believe true propositions, and yet -we intuitively agree- do not know them. The key point is that effectively everyone shares the intuition - that's why the paper was so successful, and this is often how these debates go. Part of what's interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit - conceptual analysis - is elusive.

I objected to your example because I didn't see how anyone could have an intuition base on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments - not all published arguments are top-drawer stuff (but can Cog Sci, eg, make this boast?). One target of much abuse is John Searle's Chinese Room argument. His argument is multiply flawed, as far as I'm concerned -could get into that another time. But I still think it's interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.

Words mean slightly different things in different cultures, subcultures, and small communities. We develop different intuitions about their meaning based on divergent life experiences. Our intuitions differ from each other's due to the specifics of unconscious associative learning and attribution substitution heuristics. What is the point of bashing our intuitions about the meaning of terms against each other for thousands of pages, in the hopes that we'll converge on a precise set of necessary and sufficient conditions? Even if we can get Albert and Barry to agree, what happens when Susan wants to use the same term, but has slightly differing intuitions about its meaning?

This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must in an important sense mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language -whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of 'planet' demonstrates the same kind of process of clarification of a concept, albeit informed by empirical data. The point of the bashing is that is that it really does result in progress - we really do come to a better understand of things.

comment by BobTheBob · 2011-05-26T04:12:11.752Z · LW(p) · GW(p)

You are surely right that there is no point in arguing over definitions in at least one sense - esp the definition of "definition". Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. Hopefully the following will illuminate rather than obfuscate. Suppose

  • we have two people, Albert and Barry
  • we have one thing, a car, X, of determinate interior volume
  • we have one sentence, S: "X is a subcompact".
  • Albert affirms S, Barry denies S.

Scenario (1): Albert and Barry agree on the standard definition of 'subcompact' - a car is a subcompact just in case 2 407 L < car volume < 2 803 L, but they disagree as to the volume of X. Clearly a factual disagreement.

Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of 'subcompact' (a visit to Wikipedia would resolve the matter). This a disagreement about standard definitions, and isn't anything people should engage in for long, I agree.


comment by BobTheBob · 2011-05-26T04:10:47.097Z · LW(p) · GW(p)

You are surely right that there is no point in arguing over definitions in at least one sense - esp the definition of "definition". Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. I hope the following engages constructively with what you're saying. Suppose

  • we have two people, Albert and Barry
  • we have one thing, a car, X, of determinate interior volume
  • we have one sentence, S: "X is a subcompact"

Albert affirms S, Barry denies S.

Scenario (1): Albert and Barry agree on the standard definition of 'subcompact' - a car is a subcompact just in case 2 407 L < car volume < 2 803 L, but they disagree as to the volume of X. Clearly a factual disagreement.

Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of 'subcompact' (a visit to Wikipedia would resolve the matter). This is a disagreement about standard definitions, and isn't anything people should engage in for long, I agree.

Scenario (3): Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn't be classified as subcompact - i.e., X isn't really subcompact, notwithstanding the received definition. This doesn't have to be a silly position. It might be that if you graphed the number of models of car against volume, using various different volume increments, you would find cars really do fall into natural - if vague - groups, and that the natural cutoff for subcompacts is different from the received definition. And this might really matter - a parking-challenged jurisdiction might offer a fee discount to subcompact owners. I would call this a disagreement about the concept of 'subcompact car'. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).
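
(As a toy illustration of the graphing idea above - the volumes below are invented for the sake of the sketch, not real car data:)

```python
# Count car models in volume bins of different widths; natural clusters may
# emerge whose boundaries need not match the received 2407-2803 L definition.
from collections import Counter

volumes_litres = [2350, 2380, 2395, 2410, 2440, 2460, 2700, 2720,
                  2750, 2790, 2810, 2840, 3050, 3080, 3120]  # made-up values

for bin_width in (50, 100, 200):
    bins = Counter((v // bin_width) * bin_width for v in volumes_litres)
    print(f"bin width {bin_width} L:")
    for start in sorted(bins):
        print(f"  {start}-{start + bin_width} L: {'#' * bins[start]}")
```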

Argument in scenarios 1 and 2 is futile - there is an acknowledged objective answer, and a way to get it - the way to resolve the matter is to measure or to look it up. Arguments as in scenario 3, though, can be useful - especially with less arbitrary concepts than in the example. The goal in such cases is to clarify - to rationalize - concepts. Even if you don't arrive at an uncontroversial end point, you often learn a lot about the concepts ('good', 'knowledge', 'desires', etc.) in the process. Your example of the re-definition of 'planet' fits this model, I think.

This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is not a received, widely accepted analysis or strict definition, just as in meaning something by a word we don't typically have in mind some strict definition. On the contrary, typically, intuitions about what falls under the concept are shared by almost everyone; one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which often everyone agrees refute the analysis.

I opened with a debate that everybody knew was silly, and tried to show that it was analogous to popular forms of conceptual analysis. I didn't want to start with a popular example of conceptual analysis because philosophy-familiar people will have been trained not to find those examples silly. I gave at least three examples of actual philosophical analysis in my post (Schroeder on desire, Gettier on knowledge, Jackson on morality).

You may think it's obvious, but I don't see that you've shown any of these three examples is silly. I don't see that Schroeder's project is silly (I haven't read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples - even far-fetched ones - helps illuminate the concept - helps us think about what a desire, and hence in part a rational agent, is.

As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiedly believe true propositions, and yet -we intuitively agree- do not know them. The key point is that effectively everyone shares the intuition - that's why the paper was so successful, and this is often how these debates go. Part of what's interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit - conceptual analysis - is elusive.

I objected to your example because I didn't see how anyone could have an intuition based on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments - not all published arguments are top-drawer stuff (but can Cog Sci, e.g., make this boast?). One target of much abuse is John Searle's Chinese Room argument. His argument is multiply flawed, as far as I'm concerned - could get into that another time. But I still think it's interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.

Words mean slightly different things in different cultures, subcultures, and small communities. We develop different intuitions about their meaning based on divergent life experiences. Our intuitions differ from each other's due to the specifics of unconscious associative learning and attribution substitution heuristics. What is the point of bashing our intuitions about the meaning of terms against each other for thousands of pages, in the hopes that we'll converge on a precise set of necessary and sufficient conditions? Even if we can get Albert and Barry to agree, what happens when Susan wants to use the same term, but has slightly differing intuitions about its meaning?

This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must in an important sense mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language - whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of 'planet' demonstrates the same kind of process of clarification of a concept, albeit informed by empirical data. The point of the bashing is that it really does result in progress - we really do come to a better understanding of things.

comment by lukeprog · 2011-05-24T19:13:33.616Z · LW(p) · GW(p)

To point people to some additional references on conceptual analysis in philosophy: Audi's (1983, p. 90) "rough characterization" of conceptual analysis is, I think, standard: "Let us simply construe it as an attempt to provide an illuminating set of necessary and sufficient conditions for the (correct) application of a concept."

Or, Ramsey's (1992) take on conceptual analysis: "philosophers propose and reject definitions for a given abstract concept by thinking hard about intuitive instances of the concept and trying to determine what their essential properties might be."

Sandin (2006) gives an example:

Enter Freddie, philosopher, who has set out to analyse the concept of knowledge. Freddie sits back in his armchair and thinks hard about knowledge and the ‘‘what-we-would-say-when’’ of the term knowledge. He tentatively proposes and either rejects or accepts necessary and sufficient conditions for (his) correct use of the term knowledge. After a while, he feels he has succeeded, writes down his analysis and publishes it. End of part 1. Part 2: Enter a second philosopher, Eddie. Eddie reads Freddie’s paper about knowledge. Eddie’s room is also furnished with an appropriate armchair, in which he sits back and tries to concoct a counterexample to Freddie’s proposed analysis. He feels he has succeeded, writes down his counterexample and publishes it. End of part 2.

This is precisely what Albert and Barry are doing with regard to 'sound'.


Audi (1983). The Applications of Conceptual Analysis. Metaphilosophy, 14: 87-106.

Ramsey (1992). Prototypes and Conceptual Analysis. Topoi, 11: 59-70.

Sandin (2006). Has Psychology Debunked Conceptual Analysis? Metaphilosophy, 37: 26-33.

comment by Eugine_Nier · 2011-05-22T19:07:22.097Z · LW(p) · GW(p)

Eliezer does have a post in which he talks about doing what you call conceptual analysis more-or-less as you describe and why it's worthwhile. Unfortunately, since that's just one somewhat obscure post whereas he talks about tabooing words in many of his posts, when LWrongers encounter conceptual analysis, their cached thought is to say "taboo your words" and dismiss the whole analysis as useless.

Replies from: wedrifid, Will_Sawin, Amanojack
comment by wedrifid · 2011-05-22T19:09:07.524Z · LW(p) · GW(p)

The 'taboo X' reply does seem overused. It is something that is sometimes best to just ignore when you don't think it aids in conveying the point you were making.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-05-22T19:18:10.178Z · LW(p) · GW(p)

It is something that is sometimes best to just ignore when you don't think it aids in conveying the point you were making.

When I try that, I tend to get down-votes and replies complaining that I'm not responding to their arguments.

Replies from: wedrifid
comment by wedrifid · 2011-05-22T19:26:20.696Z · LW(p) · GW(p)

When I try that, I tend to get down-votes and replies complaining that I'm not responding to their arguments.

I don't know the specific details of the instances in question. One thing I am sure about, however, is that people can't downvote comments that you don't make. Sometimes a thread is just a lost cause. Once things get polarized it often makes no difference at all what you say. Which is not to say I am always wise enough to steer clear of arguments. Merely that I am wise enough to notice when I do make that mistake. ;)

comment by Will_Sawin · 2011-05-22T20:35:42.300Z · LW(p) · GW(p)

I do not think that he is describing conceptual analysis. Starting with a word vs. starting with a set of objects makes all the difference.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-05-22T21:27:01.462Z · LW(p) · GW(p)

In the example he does start with a word, namely 'art', then uses our intuition to get a set of examples. This is more-or-less how conceptual analysis works.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-05-23T00:04:01.763Z · LW(p) · GW(p)

But he's not analyzing "art", he's analyzing the set of examples, and that is all the difference.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-05-23T01:32:57.614Z · LW(p) · GW(p)

But he's not analyzing "art"

I disagree. Suppose after proposing a definition of art based on the listed examples, someone produced another example that clearly satisfied our intuitions of what constituted art but didn't satisfy the definition. Would Eliezer:

a) say "sorry despite our intuitions that example isn't art by definition", or

b) conclude that the example was art and there was a problem with the definition?

I'm guessing (b).

Replies from: Will_Sawin
comment by Will_Sawin · 2011-05-23T20:11:31.365Z · LW(p) · GW(p)

He's not trying to define art in accord with our collective intuitions; he's trying to find the simplest boundary around a list of examples based on an individual's intuitions.

I would argue that the list of examples in the article is abbreviated for simplicity. If there is no single clear simple boundary between the two sets, one can always ask for more examples. But one asks an individual and not all of humanity.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-05-23T21:41:14.636Z · LW(p) · GW(p)

He's not trying to define art in accord with our collective intuitions; he's trying to find the simplest boundary around a list of examples based on an individual's intuitions.

I would argue he's trying to find the simplest coherent extrapolation of our intuitions.

Replies from: bcoburn
comment by bcoburn · 2011-05-23T21:59:30.748Z · LW(p) · GW(p)

Why do we even care about what specifically Eliezer Yudkowsky was trying to do in that post? Isn't "is it more helpful to try to find the simplest boundary around a list or the simplest coherent extrapolation of intuitions?" a much better question?

Focus on what matters, work on actually solving problems instead of trying to just win arguments.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-05-24T00:00:56.508Z · LW(p) · GW(p)

The answer to your question is "it depends on the situation". There are some situations in which our intuitions contain some useful, hidden information which we can extract with this method. There are some situations in which our intuitions differ and it makes sense to consider a bunch of separate lists.

But, regardless, it is simply the case that when Eliezer says

"Perhaps you come to me with a long list of the things that you call "art" and "not art""

and

"It feels intuitive to me to draw this boundary, but I don't know why - can you find me an intension that matches this extension? Can you give me a simple description of this boundary?"

he is not talking about "our intuitions", but a single list provided by a single person.

(It is also the case that I would rather talk about that than whatever useless thing I would instead be doing with my time.)

comment by Amanojack · 2011-05-22T20:24:31.455Z · LW(p) · GW(p)

Eliezer's point in that post was that there are more and less natural ways to "carve reality at the joints." That however much we might say that a definition is just a matter of preference, there are useful definitions and less useful ones. The conceptual analysis lukeprog is talking about does call for the rationalist taboo, in my opinion, but simply arguing about which definition is more useful as Eliezer does (if we limit conceptual analysis to that) does not.

comment by PhilGoetz · 2011-05-18T04:14:23.952Z · LW(p) · GW(p)

Analysis [had] one of two reputations. On the one hand, there was sterile cataloging of pointless folk wisdom - such as articles analyzing the concept VEHICLE, wondering whether something could be a vehicle without wheels. This seemed like trivial lexicography.

This work is useful. Understanding how people conceptualize and categorize is the starting point for epistemology. If Wittgenstein hadn't asked what qualified as a game, we might still be trying to define everything in terms of necessary and sufficient conditions.

Replies from: lukeprog, Will_Sawin
comment by lukeprog · 2011-05-24T05:02:02.823Z · LW(p) · GW(p)

I largely disagree, for these reasons.

comment by Will_Sawin · 2011-05-19T04:00:34.645Z · LW(p) · GW(p)

Wasn't the whole point of Wittgenstein's observation that the question of whether something can be a vehicle without wheels is pretty much useless?

comment by Vladimir_Nesov · 2011-05-16T17:28:06.340Z · LW(p) · GW(p)

(I'll reiterate some standard points, maybe someone will find them useful.)

The explicit connection you make between figuring out what is right and fixing people's arguments for them is a step in the right direction. Acting in this way is basically the reason it's useful to examine the physical reasons behind your own decisions or beliefs, even though such reasons don't have any normative power (that your brain tends to act a certain way is not a very good argument for acting that way). Understanding these reasons can point you to a step where the reasoning algorithm was clearly incorrect and can be improved in a known way, thus giving you an improved reasoning algorithm that produces better decisions or beliefs (while the algorithm, both original and improved, remains normatively irrelevant and far from completely understood).

In other words, given that you have tools for making normative decisions that sometimes work, you should seek out as many opportunities for usefully applying them as you can find. If they don't tell you what you should do, perhaps they can tell you how you should be thinking about what you should do. In particular, you should seek opportunities for applying them to their own operation, so that they start working better.

Of course, you'll need tools for making normative decisions about the appropriate methods of improvement for a person's reasoning, and here we hit a wall (on the way to a more rigorous method), because we typically only have our own intuitions to go on. Also, the way you'd like to improve another person's reasoning can be different from the way that person would like their reasoning improved, which makes the ideas of "Alex-right" or "human-right" even more difficult to designate than just "right" (and perhaps much less useful).

Replies from: Nisan
comment by Nisan · 2011-05-16T17:35:38.411Z · LW(p) · GW(p)

the way you'd like to improve another person's reasoning can be different from the way that person would like their reasoning improved

I appreciate that this is a theoretical problem. Have you seen any evidence that this is or is not a problem in our particular world?

Replies from: lessdazed
comment by lessdazed · 2011-05-17T15:42:39.134Z · LW(p) · GW(p)

People tend to prefer "just being told the answer", whereas forcing them to work through problem sets teaches them better.

~~~~~

People dislike articulating answers to rhetorical questions about what seems obvious, because doing so would force them to admit to being surprised by the eventual conclusion, a state that can be emotionally uncomfortable. Yet that discomfort is linked with embedding the conclusion in memory, and it also forces them to face the reality that neighboring beliefs need updating in light of the surprising conclusion, precisely because it was a surprise to them.

The above sentence is steeped in my theory behind a phenomenon that you may have better competing theories for, that people dislike rhetorical questions. Note that other theories are obvious but not entirely competitive with mine.

META: I have divided my post with tildes because what seemed in my own mind a minute ago to be two roughly equivalent answers to Nisan's question has unraveled into responses of different quality. This is surprising to me, and if there is anything to learn from it, I only found it out by trying my fingertips at typing an answer to the question. The tildes also represent that I empathize with anyone downvoting this comment, because everything below the tildes is too wordy and low quality; my first response (above the tildes) I think is really insightful.

META-META: I've been bemused by my inability to predict how others perceive my comments, but I've recently noticed a pattern: meta comments like this one are likely to get a uniformly positive or negative response (I'm still typing this out and sticking my neck out [in the safety of pseudonymity], as they are often well received), and I'd appreciate advice on how I could or should have written this post differently for it to be better, if it is flawed as I suspect it is. One thing I am trying out for the first time are the META and META-META tags. Is there a better (or more standardized) way to do this?

Replies from: Barry_Cotter
comment by Barry_Cotter · 2011-05-18T20:00:00.007Z · LW(p) · GW(p)

The first sentence seems banal, the second interesting. I suspect this is like the 'take five minutes' technique: you thought better because you thought longer. The second paragraph after the tildes seems unnecessary to me.

Replies from: lessdazed
comment by lessdazed · 2011-05-19T00:00:18.533Z · LW(p) · GW(p)

Thanks.

comment by Amanojack · 2011-05-17T07:17:09.963Z · LW(p) · GW(p)

Upvoted for lucidity, but Empathetic Metaethics sounds more like the whole rest of LessWrong than metaethics specifically.

If there are supposed to be any additional connotations to Empathetic Metaethics it would make me very wary. I am wary of the connotation that I need someone to help me decide whether my feelings align with the Truth. I always assumed this site is called LessWrong because it generally tries to avoid driving readers to any particular conclusion, but simply away from misguided ones, so they can make their own decisions unencumbered by bias and confusion.

Austere-san may come off as a little callous, but Empathetic-san comes off as a meddler. I'd still rather just be a friendly Mr. Austere supplemented with other LW concepts, especially from the Human's Guide to Words sequence. After all, if it is just confusion and bias getting in the way, all there is to do is to sweep those errors away. Any additional offer of "help" in deciding what it is "right" for me to feel would tingle my Spidey sense pretty hard.

Replies from: lukeprog, None, lessdazed
comment by lukeprog · 2011-05-17T13:20:11.129Z · LW(p) · GW(p)

We are trying to be 'less wrong' because human brains are so far from ideal at epistemology and at instrumental rationality ('agency'). But it's a standard LW perspective to assert that there is a territory, and some maps of (parts of) it are right and others are wrong. And since we are humans, it helps to retrain our emotions: "Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts."

Replies from: Amanojack
comment by Amanojack · 2011-05-18T06:56:53.080Z · LW(p) · GW(p)

And since we are humans, it helps to retrain our emotions: "Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts."

I'd rather call this "self-help" than "meta-ethics." Why self-help? Because...

But it's a standard LW perspective to assert that there is a territory, and some maps of (parts of) it are right and others are wrong.

...even if my emotions are "wrong," why should I care? In this case, the answer can only be that it will help me derive more satisfaction out of life if I get it "right", which seems to fall squarely under the purview of self-help.

Of course we can draw the lines between meta-ethics and self-help in various ways, but there is so much baggage in the label "ethics" that I'd prefer to get away from it as soon as possible.

comment by [deleted] · 2011-05-18T00:54:15.405Z · LW(p) · GW(p)

I always assumed this site [...] tries to avoid driving readers to any particular conclusion, but simply away from misguided ones[.]

As a larger point, separate from the context of lukeprog's particular post:

What you assumed above will not always be possible. If models M0...Mn are all misguided, and M(n+1) isn't, driving readers away from misguided models necessarily drives them to one particular conclusion, M(n+1).

comment by lessdazed · 2011-05-17T15:23:16.168Z · LW(p) · GW(p)

I am wary of the connotation that I need someone to help me decide whether my feelings align with the Truth.

I'm not sure what this means. Could you elaborate?

What I imagine you to mean seems similar to the sentiment expressed in the first comment to this blog post. That comment seems to me to be so horrifically misguided that I had a strong physiological response to reading it. Basically the commenter thought that since he doesn't experience himself as following rules of formulating thoughts and sentences, he doesn't follow them. This is a confusion of the map and territory that stuck in my memory for some reason, and your comment reminded me of it because you seem to be expressing a very strong faith in the accuracy of how things seem to you.

Feel free to just explain yourself without feeling obligated to read a random blog post or telling me how I am misreading you, which would be a side issue.

Replies from: Amanojack
comment by Amanojack · 2011-05-18T07:18:56.869Z · LW(p) · GW(p)

I think my response to lukeprog above answers this in a way, but it's more just a question of what we mean by "help me decide." I'm not against people helping me be less wrong about the actual content of the territory. I'm just against people helping me decide how to emotionally respond to it, provided we are both already not wrong about the territory itself.

If I am happy because I have plenty of food (in the map), but I actually don't (in the territory), I'd certainly like to be informed of that. It's just that I can handle the transition from happy to "oh shit!" all by myself, thank you very much.

In other words, my suspicion of anyone calling themselves an Empathetic Metaethicist is that they're going to try to slide in their own approved brand of ethics through the back door. This is also a worry I have about CEV. Hopefully future posts will alleviate this concern.

Replies from: lessdazed
comment by lessdazed · 2011-05-19T02:49:25.328Z · LW(p) · GW(p)

If you mean that in service of my goal of satisfying my actual desires, there is more of a danger of being misled when getting input from others as to whether my emotions are a good match for reality than when getting input as to whether reality matches my perception of it, I tentatively agree.

If you mean that getting input from others as to whether my emotions are a good match for reality has a greater cost than benefit, I disagree, assuming basic advice filters similar to those used when getting input as to whether reality matches my perception of it. As per above, there will, all else equal, be a lower expected payoff for me getting advice in this area, even though the advantages are similar.

If you mean that there is a fundamental difference in kind between matching perception to reality and emotions to perceptions that makes getting input an act that is beneficial in the former case and corrosive in the latter, I disagree.

I have low confidence regarding what emotions are most appropriate for various crises and non-crises, and suspect what I think of as ideal are at best local peaks with little chance of being optimal. In addition, what I think of as optimal emotional responses are likely to be too resistant to exceptions. E.g., if one is trapped in a mine shaft the emotional response suitable for typical cases of being trapped is likely to consume too much oxygen.

I'm generally open to ideas regarding what my emotions should be in different situations, and how I can act to change my emotions.

comment by steven0461 · 2011-05-16T20:22:30.476Z · LW(p) · GW(p)

A lot of the issue with things like conceptual analysis, I think, is that people do them badly, and then others have to step in and waste even more words to correct them. If the worst three quarters of philosophers suddenly stopped philosophizing, the field would probably progress faster.

Replies from: lessdazed
comment by lessdazed · 2011-05-17T15:27:13.934Z · LW(p) · GW(p)

Agreed as literally stated, and also agree with your implication: this is especially true for philosophy in addition to other fields in which this is also true.

"other fields in which this is also true" is intentionally ambiguous, half implying that this is basically true for all other fields and half implying it's only true for a small subset, as I'm undecided as to which is the case.

comment by Perplexed · 2011-05-16T14:31:09.132Z · LW(p) · GW(p)

As one example, consider some commonly used definitions for 'morally good':

  • that which produces the most pleasure for the most people
  • that which is in accord with the divine will
  • ...

Those aren't definitions of 'morally good'. They are theories of the morally good. I seriously doubt that there are any real philosophers that are confused about the distinction.

Replies from: lukeprog, Peterdjones, Peterdjones
comment by lukeprog · 2011-05-16T17:36:47.892Z · LW(p) · GW(p)

Right, but part of each of these theories is that using one set of definitions for moral terms is better than using another set of definitions, often for reasons similar to the network-style conceptual analysis proposed by Jackson.

Replies from: Perplexed, Peterdjones
comment by Perplexed · 2011-05-17T04:54:13.939Z · LW(p) · GW(p)

If you are saying that meta-ethical definitions can never be perfectly neutral wrt a choice between ethical theories, then I have to agree. Every ethical theory comes dressed in a flattering meta-ethical evening gown that reveals the nice stuff but craftily hides the ugly bits.

But that doesn't mean that we shouldn't at least strive for neutrality. Personally, I would prefer to have the definition of "morally good" include consequential goods, deontological goods, and virtue goods. If the correct moral theory can explain this trinity in terms of one fundamental kind of good, plus two derived goods, well that is great. But that work is part of normative ethics, not meta-ethics. And it certainly is not accomplished by imposing a definition.

Replies from: lukeprog, Peterdjones
comment by lukeprog · 2011-05-24T04:59:57.693Z · LW(p) · GW(p)

I'm doing a better job of explaining myself over here.

comment by Peterdjones · 2011-05-18T14:50:57.174Z · LW(p) · GW(p)

Personally, I would prefer to have the definition of "morally good" include consequential goods, deontological goods, and virtue goods.

All of those already include the pre-theoretic notion of "good".

Replies from: Perplexed
comment by Perplexed · 2011-05-18T17:43:19.109Z · LW(p) · GW(p)

Correct. Which is why I think it is a mistake if they are not accounted for in the post-theoretic notion.

comment by Peterdjones · 2011-05-18T15:01:05.147Z · LW(p) · GW(p)

Right, but part of each of these theories is that using one set of definitions for moral terms is better than using another set of definitions, often for reasons similar to the network-style conceptual analysis proposed by Jackson.

But then confusion about definitions is actually confusion about theories.

comment by Peterdjones · 2011-05-18T15:55:33.325Z · LW(p) · GW(p)

The idea that people by default have no idea at all what moral language means is hard to credit, whether claimed of people in general, or claimed by individuals of themselves. Everyone, after all, is brought up from an early age with a great deal of moral exhortation, to do Good things and refrain from Naughty things. Perhaps not everybody gets very far along the Kohlberg scale, but no one is starting from scratch. People may not be able to articulate a clear definition, or not the kind of definition one would expect from a theory, but that does not mean one needs a theory of metaethics to give a meaning to "moral".

Replies from: Perplexed
comment by Perplexed · 2011-05-18T17:40:19.731Z · LW(p) · GW(p)

that does not mean one needs a theory of metaethics to give a meaning to "moral".

No. One only needs a theory of metaethics to prevent philosophers from giving it a disastrously wrong meaning.

comment by Peterdjones · 2011-05-16T19:25:43.050Z · LW(p) · GW(p)

Those aren't definitions of 'morally good'. They are theories of the morally good

exactly what I wanted to say!

comment by TheAncientGeek · 2013-11-19T12:29:41.099Z · LW(p) · GW(p)

Eliezer advises against reading mainstream philosophy because he thinks it will "teach very bad habits of thought that will lead people to be unable to do real work".

Alternative hypothesis: it will teach good habits of thought that will allow people to recognise bad amateur philosophy.

Replies from: dxu
comment by dxu · 2014-11-17T04:56:38.973Z · LW(p) · GW(p)

It is unlikely that you will gain these "good habits of thought" allowing you to recognize "bad amateur philosophy" from reading mainstream philosophy when much of mainstream philosophy consists of what (I assume) you're calling "bad amateur philosophy".

Replies from: wedrifid, TheAncientGeek
comment by wedrifid · 2014-11-20T02:13:36.829Z · LW(p) · GW(p)

when much of mainstream philosophy consists of what (I assume) you're calling "bad amateur philosophy".

No, much of it is bad professional philosophy. It's like bad amateur philosophy except that students are forced to pretend it matters.

comment by TheAncientGeek · 2014-11-18T02:01:59.207Z · LW(p) · GW(p)

No. I'm calling the Sequences bad amateur philosophy.

Replies from: dxu
comment by dxu · 2014-11-18T02:12:27.225Z · LW(p) · GW(p)

If that's the case, I'd like to hear your reasoning behind this statement.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-11-18T11:52:53.779Z · LW(p) · GW(p)
  1. A significant number of postings don't argue towards a discernible point.

  2. A significant number of postings don't argue their point cogently.

  3. Lack of awareness of standard counterarguments, and alternative theories.

  4. Lack of appropriate response to objections.

None of this has anything to do with which answers are right or wrong. It is a form of the fallacy of grey to argue that since no philosophy comes up with definite answers, then it's all equally a failure. Philosophy isn't trying to be science, so it isn't broken science.

Regarding (1): A quick way of confirming this point might be to attempt to summarize the Less Wrong theory of ethics.

Regarding (2): Particularly the ones written as dialogues. I share Massimo Pigliucci's frustration:

"I am very sympathetic both to Bayesian analysis (I have used it in my own research) and to its implications for philosophy of science (though there are some interesting objections that can be raised to it as a model of science tout court — see for example the chapter in Bayesianism here). Which is why the title of Yudkowsky’s column surprised the hell out of me! Alas, as I said, he provides no argument in that post for his suggestion that Bayesianism favors a many-worlds interpretation of quantum mechanics, or for the further claim that somehow this goes against scientific practice because the currently favored interpretation is the Copenhagen one. But then I noticed that the post was a follow up to two more, one entitled “If many-worlds had come first,” the other “The failures of Eld science.” Oh crap, now I had to go back and read those before figuring out what Yudkowsky was up to. (And before you ask, yes, those posts too linked to previous ones, but by then I had had enough.) Except that that didn’t help either. Both posts are rather bizarre, if somewhat amusing, fictional dialogues, one of which doesn’t even mention the word “Bayes” (the other refers to it tangentially a couple of times), and that certainly constitute no sustained argument at all. (Indeed, “The failures of Eld science” sounds a lot like the sort of narrative you find in Atlas Shrugged, and you know that’s not a compliment coming from me.)"

Regarding (3) and (4): There's an example here. A poster makes a very pertinent objection to the main post. No one responds, and the main post is to this day bandied around as establishing the point. Things don't work like that. If someone returns your serve, you're supposed to hit back, not walk off the court and claim the prize.

A knowledge of philosophy doesn't give you a basis of facts to build on, but it does load your brain with a network of argument and counterargument, and can prevent you wasting time by mounting elaborate defences of claims to which there are well known objections.

Replies from: Vaniver, dxu
comment by Vaniver · 2014-11-18T16:52:40.457Z · LW(p) · GW(p)

A knowledge of philosophy doesn't give you a basis of facts to build on, but it does load your brain with a network of argument and counterargument, and can prevent you wasting time by mounting elaborate defences of claims to which there are well known objections.

It seems to me that there are two views of philosophy that are useful here: one of them I'll term perspective, or a particular way of viewing the world, and the other one is comparative perspectives. That term is deliberately modeled after comparative religion because I think the analogy is useful; typically, one develops the practice of one's own religion and the understanding of other religions.

It seems to me that the Sequences are a useful guide for crystallizing the 'LW perspective' in readers, but are not a useful guide for placing the 'LW perspective' in the history of perspectives. (For that, one's better off turning to lukeprog, who has a formal education in philosophy.) Perhaps there are standard criticisms other perspectives make of this perspective, but whether or not that matters depends on whether you want to argue about this perspective or inhabit this perspective. If the latter, a criticism is not particularly interesting, but a patch is interesting.

That is to say, I think comparative perspectives (i.e. studying philosophy formally) has value, but it's a narrow kind of value and like most things the labor involved should be specialized. I also think that the best guide to philosophy X for laymen and the best guide to philosophy X for philosophers will look different, and Eliezer's choice to optimize for laymen was wise overall.

Replies from: Toggle, TheAncientGeek
comment by Toggle · 2014-11-18T18:44:55.210Z · LW(p) · GW(p)

Most of the content in the sequences isn't new as such, but it did draw from many different sources, most of which were largely confined to academia. In synthesis, the product is pretty original. To the best of my knowledge, the LessWrong perspective/community has antecedents but not an obvious historical counterpart.

In that light, I'd expect the catalyzing agent for such a perspective to be the least effective such agent that could successfully accomplish the task. (Or: to be randomly selected from the space of all possible effective agents, which is quite similar in practice.) We are the tool-users not because hominids are optimized for tool use, but because we were the first ones to do so with enough skill to experience a takeoff of civilization. So it's pretty reasonable to expect the sequences to be a little wibbly.

To continue your religious metaphor, Paul wrote in atrocious Greek, had confusingly strong opinions about manbeds, and made it into scripture because he was instrumental in building the early church communities. Augustine persuasively developed a coherent metaphysic for the religion that reconciled it with the mainstream Neoplatonism of the day, helping to clear the way for a transition from persecuted minority to dominant memeplex, but is considered a 'doctor of the church' rather than an author of scripture because he was operating within and refining a more established culture.

The sequences were demonstrably effective in crystallizing a community, but are probably a lot less effective in communicating outside that community. TAG's objections may be especially relevant if LessWrong is to transition from a 'creche' online environment and engage in dialogue with cultural power brokers - a goal of the MIRI branch, at a minimum.

Replies from: Vaniver, TheAncientGeek
comment by Vaniver · 2014-11-18T19:12:09.022Z · LW(p) · GW(p)

I wish I had more than one upvote to give this comment; entirely agreed.

Replies from: Toggle
comment by Toggle · 2014-11-18T20:38:13.439Z · LW(p) · GW(p)

Thank you! The compliment works just as well.

comment by TheAncientGeek · 2014-11-18T18:56:37.808Z · LW(p) · GW(p)

...and it's not too important what the community is crystallized around? Believing in things you can't justify or explain is something that an atheist community can safely borrow from religion?

Replies from: Vaniver
comment by Vaniver · 2014-11-18T19:29:01.083Z · LW(p) · GW(p)

and it's not too important what the community is crystallized around?

Of course it's important. What gives you another impression?

Believing in things you can't justify or explain is something that an atheist community can safely borrow from religion?

It's not clear to me where you're getting this. To be clear, I think that the LW perspective has different definitions of "believe," "justify," and "explain" from traditional philosophy, but I don't think that it gets its versions from religion. I also think that atheism is a consequence of LW's epistemology, not a foundation of it. (As a side note, the parts of religion that don't collapse when brought into a robust epistemology are solid enough to build on, and there's little to be gained by turning your nose up at their source.)

In this particular conversation, the religion analogy is used primarily in a social and historical sense. People believe things; people communicate and coordinate on beliefs. How has that communication and coordination happened in the past, and what can we learn from that?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-11-18T20:13:18.467Z · LW(p) · GW(p)

We can learn that "all for the cause, whatever it is" is a failure of rationality.

To be clear, I think that the LW perspective has different definitions of "believe," "justify," and "explain" from traditional philosophy,

I think the LW perspective has the same definitions... but possibly different theories from the various theories of traditional philosophy. (It also looks like LW has a different definition of "definition", which really confuses things.)

the parts of religion that don't collapse when brought into a robust epistemology

Religious epistemology - dogmatism+vagueness - is just the problem

Replies from: Vaniver
comment by Vaniver · 2014-11-19T14:55:09.540Z · LW(p) · GW(p)

We can learn that "all for the cause, whatever it is" is a failure of rationality.

Entirely agreed.

Religious epistemology - dogmatism+vagueness - is just the problem

I don't see the dogmatism you're noticing--yes, Eliezer has strong opinions on issues I don't think he should have strong opinions on, but those strong opinions are only weakly transmitted to others and you'll find robust disagreement. Similarly, the vagueness I've noticed tends to be necessary vagueness, in the sense of "X is an open problem, but here's my best guess at how X will be solved. You'll notice that it's fuzzy here, there, and there, which is why I think the problem is still open."

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-11-19T15:43:47.263Z · LW(p) · GW(p)

So what actually is the LessWrongian theory of ethics?

And, assuming you don't know....why are there people who believe it, for some value of believe?

Replies from: Vaniver
comment by Vaniver · 2014-11-19T20:15:04.679Z · LW(p) · GW(p)

So what actually is the LessWrongian theory of ethics?

In order to answer this question, I'm switching to the anthropology of moral belief and practice (as lukeprog puts it here).

I don't think there's a single agreed-upon theory. The OP is part of lukeprog's sequence where he put forward a theory of meta-ethics he calls pluralistic moral reductionism, which he says here is not even an empathetic theory of meta-ethics, let alone applied ethics. Eliezer's sequence on meta-ethics suffers from the flaw that it's written 'in character,' and was not well-received. If you look at survey results, you see that the broadest statements we can make are things like "overall people here lean towards consequentialism."

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-11-23T18:46:21.305Z · LW(p) · GW(p)

Ok. You can't summarize it unambiguously either. So why do people believe it?

Replies from: dxu
comment by dxu · 2014-11-23T21:10:53.956Z · LW(p) · GW(p)

So why do people believe it? (emphasis mine)

From Vaniver's comment:

I don't think there's a single agreed-upon theory.

What "it" are you speaking of?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-11-24T02:25:06.668Z · LW(p) · GW(p)

The LessWrongian theory of ethics. If you don't believe there is such a singular entity, you could just say so... I'm hardly going to disagree.

Replies from: dxu
comment by dxu · 2014-11-24T05:02:29.590Z · LW(p) · GW(p)

I doubt you'll find anyone here seriously saying that we've found a definitive theory of metaethics. That is our eventual goal, yes, but right now, there are at best several competing theories. No absolutely correct theory has even been proposed, much less endorsed by the majority of LW. So the answer to your question ("Why do people believe it?") is, as far as I can tell, "They don't." My question, however, is why you think this is something really bad, as opposed to something just slightly bad.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-11-24T10:45:58.946Z · LW(p) · GW(p)

If you look upthread, you'll see that what I think is really bad is advising people not to study mainstream philosophy.

I also think it bad to call philosophy diseased for not being able to solve problems you can't solve either.

And it might be an idea to add a warning to the metaethics sequences: "Before reading these million words, please note that they don't go anywhere".

comment by TheAncientGeek · 2014-11-18T18:46:12.753Z · LW(p) · GW(p)

"Crystalising" you team clarifying, or defending.

Communicating the content of a claim is of limited use unless you can make it persuasive. That, in turn, requires defending it against alternatives. So the functions you are trying to separate are actually very interconnected.

(Another disanalogy between philosophy and religion is that philosophy is less holistic, working more at the claim level)

Replies from: Vaniver
comment by Vaniver · 2014-11-18T19:10:08.716Z · LW(p) · GW(p)

"Crystalising" you team clarifying, or defending.

I mean clarifying. I use that term because some people look at the Sequences and say "but that's all just common sense!". In some ways it is, but in other ways a major contribution of the Sequences is to not just let people recognize that sort of common sense but reproduce it.

I understand that clarification and defense are closely linked, and am trying to separate intentionality more than I am methodology.

Another disanalogy between philosophy and religion is that philosophy is less holistic, working more at the claim level

I consider 'stoicism' to be a 'philosophy,' but I notice that Stoics are not particularly interested in debating the finer points of abstractions, and might even consider doing so dangerous to their serenity relative to other activities. A particularly Stoic activity is negative visualization- the practice of imagining something precious being destroyed, to lessen one's anxiety about its impermanence through deliberate acceptance, and to increase one's appreciation of its continued existence.

One could see this as an unconnected claim put forth by Stoics that can be evaluated on its own merits (we could give a grant to a psychologist to test whether or not negative visualization actually works), but it seems to me that it is obvious that in the universe where negative visualization works, Stoics would notice and either copy the practice from its inventors or invent it themselves, because Stoicism is fundamentally about reducing anxiety and achieving serenity, and this seems amenable to a holistic characterization. (The psychologist might find that negative visualization works differently for Stoics than non-Stoics, and might actually only be a good idea for Stoics.)

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-11-18T20:23:29.246Z · LW(p) · GW(p)

Your example of "a philosophy" is pretty much a religion by current standards. By philosophy I meant the sort of thing typified by current anglophone philosophy.

Replies from: Toggle, Vaniver
comment by Toggle · 2014-11-18T20:36:36.902Z · LW(p) · GW(p)

That may be the disjunction. Current anglophone philosophy is basically the construction of an abstract system of thought, valued for internal rigor and elegance but largely an intellectual exercise. Ancient Greek philosophies were eudaimonic- instrumental constructions designed to promote happiness. Their schools of thought, literal schools where one could go, were social communities oriented around that goal. The sequences are much more similar to the latter ('rationalists win' + meetups), although probably better phrased as utilitarian rather than eudaimonic. Yudkowsky and Sartre are basically not even playing the same game.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-11-18T20:50:54.502Z · LW(p) · GW(p)

I'm delighted to hear that Clippy and Newcomb's box are real-world, happiness-promoting issues!

Replies from: Nornagest
comment by Nornagest · 2014-11-18T21:30:42.103Z · LW(p) · GW(p)

Clippy is pretty speculative, but analogies to Newcomb's problem come up in real-world decision-making all the time; it's a dramatization of a certain class of problem arising from decision-making between agents with models of each other's probable behavior (read: people that know each other), much like how the Prisoner's Dilemma is a dramatization of a certain type of coordination problem. It doesn't have to literally involve near-omniscient aliens handing out money in opaque boxes.

Replies from: Lumifer
comment by Lumifer · 2014-11-18T21:39:39.777Z · LW(p) · GW(p)

it's a dramatization of a certain class of problem arising from decision-making between agents with models of each other's probable behavior

Does it? It seems to me that once Omega stops being omniscient and becomes, basically, your peer in the universe, there is no argument not to two-box in Newcomb's problem.

Replies from: MarkusRamikin, TheOtherDave, Nornagest
comment by MarkusRamikin · 2014-11-18T22:03:15.874Z · LW(p) · GW(p)

Seems to me like you only transformed one side of the equation, so to speak. Real-life Newcomb-like problems don't involve Omega, but they also don't (mainly) involve highly contrived thought-experiment-like choices regarding which we are not prepared to model each other.

Replies from: Lumifer
comment by Lumifer · 2014-11-18T22:23:21.511Z · LW(p) · GW(p)

That seems to me to expand the Newcomb's Problem greatly -- in particular, into the area where you know you'll meet Omega and can prepare by modifying your internal state. I don't want to argue definitions, but my understanding of the Newcomb's Problem is much narrower. To quote Wikipedia,

By the time the game begins, and the player is called upon to choose which boxes to take, the prediction has already been made, and the contents of box B have already been determined.

and that's clearly not the situation of Joe and Kate.

Replies from: dxu
comment by dxu · 2014-11-19T02:23:30.654Z · LW(p) · GW(p)

Perhaps, but it is my understanding that an agent who is programmed to avoid reflective inconsistency would find the two situations equivalent. Is there something I'm missing here?

Replies from: Lumifer
comment by Lumifer · 2014-11-19T02:41:09.818Z · LW(p) · GW(p)

I don't know what "an agent who is programmed to avoid reflective inconsistency" would do. I am not one and I think no human is.

Replies from: dxu
comment by dxu · 2014-11-19T02:59:02.564Z · LW(p) · GW(p)

Reflective inconsistency isn't that hard to grasp, though, even for a human. All it's really saying is that a normatively rational agent should consider the questions "What should I do in this situation?" and "What would I want to pre-commit to do in this situation?" equivalent. If that's the case, then there is no qualitative difference between Newcomb's Problem and the situation regarding Joe and Kate, at least to a perfectly rational agent. I do agree with you that humans are not perfectly rational. However, don't you agree that we should still try to be as rational as possible, given our hardware? If so, we should strive to fit our own behavior to the normative standard--and unless I'm misunderstanding something, that means avoiding reflective inconsistency.

Replies from: Lumifer
comment by Lumifer · 2014-11-19T03:01:32.244Z · LW(p) · GW(p)

All it's really saying is that a normatively rational agent should consider the questions "What should I do in this situation?" and "What would I want to pre-commit to do in this situation?" equivalent.

I don't consider them equivalent.

Replies from: dxu
comment by dxu · 2014-11-19T03:06:08.150Z · LW(p) · GW(p)

Fair enough. I'm not exactly qualified to talk about this sort of thing, but I'd still be interested to hear why you think the answers to these two ought to be different. (There's no guarantee I'll reply, though!)

Replies from: Lumifer
comment by Lumifer · 2014-11-19T03:15:37.926Z · LW(p) · GW(p)

Because reality operates in continuous time. In the time interval between now and the moment when I have to make a choice, new information might come in, things might change. Precommitment is loss of flexibility and while there are situations when you get benefits compensating for that loss, in the general case there is no reason to pre-commit.

Replies from: wedrifid
comment by wedrifid · 2014-11-19T14:00:27.356Z · LW(p) · GW(p)

Precommitment is loss of flexibility and while there are situations when you get benefits compensating for that loss, in the general case there is no reason to pre-commit.

Curiously, this particular claim is true only because Lumifer's primary claim is false. An ideal CDT agent released at time T with the capability to self modify (or otherwise precommit) will as rapidly as possible (at T + e) make a general precommitment to the entire class of things that can be regretted in advance only for the purpose of influencing decisions made after (T + e) (but continue with two-boxing type thinking for the purpose of boxes filled before T + e).

Replies from: Lumifer
comment by Lumifer · 2014-11-19T15:31:41.830Z · LW(p) · GW(p)

Curiously, this particular claim is true only because Lumifer's primary claim is false. An ideal CDT agent ...

Curiously enough, I made no claims about ideal CDT agents.

Replies from: wedrifid
comment by wedrifid · 2014-11-20T02:01:16.046Z · LW(p) · GW(p)

Curiously enough, I made no claims about ideal CDT agents.

True. CDT is merely a steel-man of your position that you actively endorsed in order to claim prestigious affiliation.

The comparison is actually rather more generous than what I would have made myself. CDT has no arbitrary discontinuity between p=1 and p=(1-e), for example.

That said, the grandparent's point applies just as well regardless of whether we consider CDT, EDT, the corrupted Lumifer variant of CDT or most other naive but not fundamentally insane decision algorithms. In the general case there is a damn good reason to make an abstract precommitment as soon as possible. UDT is an exception only because such precommitment would be redundant.

comment by TheOtherDave · 2014-11-18T23:09:17.971Z · LW(p) · GW(p)

What, on your view, is the argument for not two-boxing with an omniscient Omega?
How does that argument change with a non-omniscient but skilled predictor?

Replies from: Lumifer
comment by Lumifer · 2014-11-19T02:24:11.463Z · LW(p) · GW(p)

If Omega is omniscient the two actions (one- and two-boxing) each have a certain outcome with the probability of 1. So you just pick the better outcome. If Omega is just a skilled predictor, there is no certain outcome so you two-box.

Replies from: dxu, wedrifid
comment by dxu · 2014-11-19T02:30:13.025Z · LW(p) · GW(p)

You are facing a modified version of Newcomb's Problem, which is identical to standard Newcomb except that Omega now has 99% predictive accuracy instead of ~100%. Do you one-box or two-box?

Replies from: Lumifer
comment by Lumifer · 2014-11-19T02:39:14.217Z · LW(p) · GW(p)

Two-box. From my point of view it's all or nothing (and it has to be not ~100%, but exactly 100%).

Replies from: dxu
comment by dxu · 2014-11-19T02:52:18.400Z · LW(p) · GW(p)

You get $1000 with 99% probability and $1001000 with 1% probability, for a final expected value of $11000. A one-boxer gets $1000000 with 99% probability and $0 with 1% probability, with a final expected value of $990000. Even with probabilistic uncertainties, you would still have been comparatively better off one-boxing. And this isn't just limited to high probabilities; in theory, any predictive power better than chance can create Newcomb-like situations, given sufficiently lopsided payoffs.

In practice, this tends to go away with lower predictive accuracies because the relative rewards aren't high enough to justify one-boxing. Nevertheless, I have little to no trouble believing that a skilled human predictor can reach accuracies of >80%, in which case these Newcomb-like tendencies are indeed present.
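For concreteness, a minimal Python sketch of that comparison, with the standard payoffs ($1000 in Box A, $1000000 in Box B) assumed:

    # Expected value of one-boxing vs. two-boxing against a predictor with
    # accuracy p (a sketch; payoffs are the standard Newcomb amounts).
    A, B = 1_000, 1_000_000  # Box A (always filled), Box B (filled iff one-boxing predicted)

    def ev_one_box(p):
        # Box B is full with probability p (prediction correct).
        return p * B

    def ev_two_box(p):
        # Box B is empty with probability p, full with probability 1 - p.
        return p * A + (1 - p) * (A + B)

    for p in (0.5, 0.8, 0.99, 1.0):
        print(p, ev_one_box(p), ev_two_box(p))

    # One-boxing has the higher expectation whenever p*B > A + (1-p)*B,
    # i.e. whenever p > (A + B) / (2 * B) = 0.5005 for these payoffs.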

Replies from: Lumifer
comment by Lumifer · 2014-11-19T02:55:50.686Z · LW(p) · GW(p)

You get $1000 with 99% probability and $1001000 with 1% probability, for a final expected value of $11000. A one-boxer gets $1000000 with 99% probability and $0 with 1% probability, with a final expected value of $990000.

No, I don't think so.

Let's do things in temporal order.

Step 1: Omega makes a prediction and puts money into boxes.

What's the prediction and what's in the boxes?

Replies from: dxu
comment by dxu · 2014-11-19T03:02:55.194Z · LW(p) · GW(p)

Assuming you are a two-boxer, there is a 99% chance that there is nothing in Box B (and $1000 in Box A, as always), along with a 1% chance that Box B contains $1000000. If we're going with the most likely scenario, there is nothing in Box B.

Replies from: Lumifer
comment by Lumifer · 2014-11-19T03:12:03.747Z · LW(p) · GW(p)

In the classic Newcomb's Problem Omega moves first before I can do anything. Step 1 happens before I made any choices.

If Omega is a good predictor, he'll predict my decision, but there is nothing I can do about it. I don't make a choice to be a "two-boxer" or a "one-boxer".

I can make a choice only after step 1, once the boxes are set up and unchangeable. And after step 1 everything is fixed so you should two-box.

Replies from: MarkusRamikin, nshepperd, TheOtherDave, dxu
comment by MarkusRamikin · 2014-11-20T09:53:48.404Z · LW(p) · GW(p)

In the classic Newcomb's Problem Omega moves first before I can do anything. Step 1 happens before I made any choices.

This is true both for the 99% and 100% accurate predictor, isn't it? Yet you say you one-box with the 100% one.

I can make a choice only after step 1, once the boxes are set up and unchangeable. And after step 1 everything is fixed so you should two-box.

Please answer me this:

What does 99% accuracy mean to you exactly, in this scenario? If you know that Omega can predict you with 99% accuracy, what reality does this correspond to for you? What do you expect to happen different, compared to if he could predict you with, say, 50% accuracy (purely chance guesses)?

Actually, let's make it more specific: suppose you do this same problem 1000 times, with a 99% Omega, what amount of money do you expect to end up with if you two-box? And what if you one-box?

The reason I am asking is that it appears to me like, the moment Omega stops being perfectly 100% accurate, you really stop believing he can predict you at all. It's like, if you're given a Newcomblike problem that involves "Omega can predict you with 99% accuracy", you don't actually accept this information (and are therefore solving a different problem).

It's unsafe to guess at another's thoughts, and I could be wrong. But I simply fail to see, based on the things you've said, how the "99% accuracy" information informs your model of the situation at all.
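One way to make the 1000-trials question concrete is a toy simulation (a sketch only; it simply assumes that the stated 99% accuracy applies to whichever fixed disposition the participant has):

    import random

    def play(disposition, accuracy=0.99, trials=1000, seed=0):
        # Toy simulation: Omega predicts the participant's fixed disposition
        # correctly with probability `accuracy`; the participant then acts on it.
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            correct = rng.random() < accuracy
            predicted_one_box = (disposition == "one-box") == correct
            box_b = 1_000_000 if predicted_one_box else 0
            total += box_b if disposition == "one-box" else box_b + 1_000
        return total

    print(play("one-box"))   # roughly 990 x $1000000
    print(play("two-box"))   # roughly 10 x $1000000 plus 1000 x $1000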

Replies from: Lumifer
comment by Lumifer · 2014-11-20T15:59:57.430Z · LW(p) · GW(p)

This is true both for the 99% and 100% accurate predictor, isn't it? Yet you say you one-box with the 100% one.

Yes, because 100% is achievable only through magic. Omniscience makes Omega a god and you can't trick an omniscient god.

That's why there is a discontinuity between P=1 and P=1-e -- we leave the normal world and enter the realm of magic.

What does 99% accuracy mean to you exactly, in this scenario? If you know that Omega can predict you with 99% accuracy, what reality does this correspond to for you?

In the frequentist framework this means that if you were to fork the universe and make 100 exact copies of it, in 99 copies Omega would be correct and in one of them he would be wrong.

In the Bayesian framework probabilities are degrees of belief and the local convention is to think of them as betting odds, so this means I should be indifferent which side to take of a 1 to 99 bet on the correctness of Omega's decision.

suppose you do this same problem 1000 times, with a 99% Omega, what amount of money do you expect to end up with if you two-box? And what if you one-box?

The question is badly phrased because it ignores the temporal order and so causality.

If you become omniscient for a moment and pick 1000 people who are guaranteed to two-box and 1000 people who are guaranteed to one-box, the one-box people will, of course, get more money from a 99% Omega. But it's not a matter of their choice, you picked them this way.

the moment Omega stops being perfectly 100% accurate, you really stop believing he can predict you at all

Not at all. I mentioned this before and I'll repeat it again: there is no link between Omega's prediction and the choice of a standard participant in the Newcomb's Problem. The standard participant does not have any advance information about Omega with his boxes and so cannot pre-commit to anything. He only gets to do something after the boxes become immutable.

At the core, I think, the issue is of causality and I'm not comfortable with the acausal manoeuvres that LW is so fond of.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2014-11-20T17:07:17.277Z · LW(p) · GW(p)

I asked what it means to you. Not sure why I got an explanation of bayesian vs frequentist probability.

there is no link between Omega's prediction and the choice of a standard participant in the Newcomb's Problem. The standard participant does not have any advance information about Omega with his boxes and so cannot pre-commit to anything.

You seem to believe precommitment is the only thing that makes your choice knowable to Omega in advance. But Omega got his track record of 99% accurate predictions somehow. Whatever algorithms are ultimately responsible for your choice, they - or rather their causal ancestors - exist in the world observed by Omega at the time he's filling his boxes. Unless you believe in some kind of acausal choice-making, you are just as "committed" if you'd never heard of Newcomb's problem. However, from within the algorithms, you may not know what choice you're bound to make until you're done computing. Just as a deterministic chess-playing program is still choosing a move, even if the choice in a given position is bound to be, say, Nf4-e6.

Indeed, your willingness (or lack thereof) to believe that, whatever the output of your thinking, Omega is 99% likely to have predicted it, is probably going to be a factor in Omega's original decision.

Replies from: Lumifer
comment by Lumifer · 2014-11-20T17:14:43.820Z · LW(p) · GW(p)

I asked what it means to you.

To me personally? Pretty much nothing, an abstract exercise with numbers. As I said before (though the post was heavily downvoted and probably invisible by now), I don't expect to meet Omega and his boxes in my future, so I don't care much, certainly not enough to pre-commit.

Or are you asking what 1% probability means to me? I suspect I have a pretty conventional perception of it.

You seem to believe precommitment is the only thing that makes your choice knowable to Omega in advance.

No, that's not the issue. We are repeating the whole line of argument I went through with dxu and TheOtherDave during the last couple of days -- see e.g. this and browse up and down this subthread. Keep in mind that some of my posts there were downvoted into invisibility so you may need to click on buttons to open parts of the subthread.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2014-11-20T19:12:24.009Z · LW(p) · GW(p)

Sigh. I wasn't asking if you care. I meant more something like this:

But NASA told Mr. Ullian that the probability of failure was more like 1 in 10^5. [...] “That means you could fly the shuttle every day for an average of 300 years between accidents — every day, one flight, for 300 years — which is obviously crazy!”

Feynman doesn't believe the number, but this is what it means to him: if he were to take the number seriously, this is the reality he thinks it would correspond to. That's what I meant when I asked "what does this number mean to you". What reality the "99% accuracy" (hypothetically) translates to for you when you consider the problem. What work it's doing in your model of it, never mind if it's a toy model.

Suppose you - or if you prefer, any not-precommitted participant - faces Omega, who presents his already-filled boxes, and the participant chooses to either one-box or two-box. Does the 99% accuracy mean you expect to afterwards find that Omega predicted that choice in 99 out of 100 cases on average? If so, can you draw up expected values for either choice? If not, how else do you understand that number?

the whole line of argument I went through with dxu and TheOtherDave during the last couple of days

OK, I re-read it and I think I see it.

the optimal choice for me after Stage 1 is to two-box

I think the issue lies in this "after" word. If determinism, then you don't get to first have a knowable-to-Omega disposition to either one-box or two-box, and then magically make an independent choice after Omega fills the boxes. The choice was already unavoidably part of the Universe before Stage 1, in the form of its causal ancestors, which are evidence for Omega to pick up to make his 99% accurate prediction. (So the choice affected Omega just fine, which is why I am not very fond of the word "acausal"). The standard intuition that places decisionmaking in some sort of a causal void removed from the rest of the Universe doesn't work too well when predictability is involved.

Replies from: Lumifer
comment by Lumifer · 2014-11-20T19:49:49.597Z · LW(p) · GW(p)

If determinism

Yep, that's another way to look at causality issue. I asked upthread if the correctness of the one-boxing "solution" implies lack of free will and, in fact, depends on the lack of free will. I did not get a satisfying answer (instead I got condescending nonsense about my corrupt variation of an ideal CDT agent).

If "the choice was already unavoidably part of the Universe before Stage" then it is not a "choice" as I understand the word. In this case the whole problem disappears since if the choice to one-box or two-box is predetermined, what are we talking about, anyway?

As is often the case, Christianity already had to deal with this philosophical issue of a predetermined choice -- see e.g. Calvinism.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2014-11-20T20:15:03.216Z · LW(p) · GW(p)

Still wouldn't mind getting a proper answer to my question...

And well, yeah, if you believe in a nondeterministic, acausal free will, then we may have an unbridgeable disagreement. But even then... suppose we put the issue of determinism and free will completely aside for now. Blackbox it.

Imagine - put on your "take it seriously" glasses for a moment if I can have your indulgence - that a sufficiently advanced alien actually comes to Earth and in many, many trials establishes a 99% track record of predicting people's n-boxing choices (to keep it simple, it's 99% for one-boxers and also 99% for two-boxers).

Imagine also that, for whatever reason, you didn't precommit (maybe sufficiently reliable precommitment mechanisms are costly and inconvenient, inflation ate into the value of the prize, and the chance of being chosen by Omega for participation is tiny. Or just akrasia, I don't care). And then you get chosen for participation and accept (hey, free money).

What now? Do you have a 99% expectation that, after your choice, Omega will have predicted it correctly? Does that let you calculate expected values? If so, what are they? If not, in what way are you different from the historical participants who amounted to the 99% track record Omega's built so far (= participants already known to have found themselves predicted 99% of the time)?

Or are you saying that an Omega like that can't exist in the first place. In which case how is that different - other than in degree - from whenever humans predict other humans with better than chance accuracy?

Replies from: Lumifer
comment by Lumifer · 2014-11-20T20:38:51.942Z · LW(p) · GW(p)

then we may have an unbridgeable disagreement

But let me ask that question again, then. Does the correctness of one-boxing require determinism, aka lack of free will?

Does that let you calculate expected values?

Let's get a bit more precise here. There are two ways you can use this term. One is with respect to the future, to express the probability of something that hasn't happened yet. The other is with respect to lack of knowledge, to express that something already happened, but you just don't know what it is.

The meanings conveyed by these two ways are very different. In particular, when looking at Omega's two boxes, there is no "expected value" in the first sense. Whatever happened already happened. The true state of nature is that one distribution of money between the boxes has the probability of 1 -- it happened -- and the other distribution has the probability of 0 -- it did not happen. I don't know which one of them happened, so people talk about expected values in the sense of uncertainty of their beliefs, but that's quite a different thing.

So after Stage 1 in reality there are no expected values of the content of the boxes -- the boxes are already set and immutable. It's only my knowledge that's lacking. And in this particular setting it so happens that I can make my knowledge not matter at all -- by taking both boxes.

Your approach also seems to have the following problem. Essentially, Omega views all people as divided into two classes: one-boxers and two-boxers. If belonging to such a class is unchangeable (see predestination), the problem disappears since you can do nothing about it. However, if you can change which class you belong to (e.g. before the game starts), you can change it after Stage 1 as well. So the optimal solution looks to be to get yourself into the one-boxing class before the game, but then, once Stage 1 happens, switch to the two-boxing class. And if you can't pull off this trick, well, why do you think you can change classes at all?

Replies from: MarkusRamikin, nshepperd
comment by MarkusRamikin · 2014-11-20T21:00:28.379Z · LW(p) · GW(p)

Does the correctness of one-boxing require determinism, aka lack of free will?

I don't think so, which is the gist of my last post - I think all it requires is taking Omega's track record seriously. I suppose this means I prefer EDT to CDT - it seems insane to me to ignore evidence, past performance showing that 99% of everyone who's two-boxed so far got out with much less money.

Essentially, Omega views all people as divided into two classes: one-boxers and two-boxers.

No more than a typical coin is either a header or a tailer. Omega can simply predict with high accuracy if it's gonna be heads or tails on the next, specific occasion... or if it's gonna be one or two boxes, already accounting for any tricks. Imagine you have a tell, like in poker, at least when facing someone as observant as Omega.

All right, I'm done here. Trying to get a direct answer to my question stopped feeling worthwhile.

comment by nshepperd · 2014-11-20T22:30:41.534Z · LW(p) · GW(p)

Does the correctness of one-boxing require determinism, aka lack of free will?

The fact that you think these things are the same thing is the problem. Determinism does not imply lack of "choice", not in any sense that matters.

To be absolutely clear:

No, one-boxing does not require lack of free will.

But it should also be obvious that for omega to predict you requires you to be predictable. Determinism provides this for the 100% accurate case. This is not any kind of contradiction.

If belonging to such class is unchangeable (see predestination), the problem disappears since you can do nothing about it.

No "changing" is required. You can't "change" the future any more than you can "change" the past. You simply determine it. Whichever choice you decide to make is the choice you were always going to make, and determines the class you are, and always were in.

Replies from: Lumifer
comment by Lumifer · 2014-11-21T01:03:44.043Z · LW(p) · GW(p)

Whichever choice you decide to make is the choice you were always going to make, and determines the class you are, and always were in.

Yes, I understand that. I call that lack of choice and absence of free will. Your terminology may differ.

Replies from: TheOtherDave, dxu
comment by TheOtherDave · 2014-11-21T01:31:21.576Z · LW(p) · GW(p)

Just so I'm clear: when you call that a lack of choice, do you mean to distinguish it from anything? That is, is there anything in the world you would call the presence of choice? Does the word "choice," for you, have a real referent?

Replies from: Lumifer
comment by Lumifer · 2014-11-21T01:51:34.423Z · LW(p) · GW(p)

Does the word "choice," for you, have a real referent?

Sure. I walk into an ice cream parlour; which flavour am I going to choose? Can you predict? Can anyone predict with complete certainty? If not, I'll make a choice.

Replies from: nshepperd, TheOtherDave
comment by nshepperd · 2014-11-21T02:35:42.997Z · LW(p) · GW(p)

This definition of choice is empty. If I can't predict which flavour you will buy based on knowing what flavours you like or what you want, you aren't choosing in any meaningful sense at all. You're just arbitrarily, capriciously, picking a flavour at random. Your "choice" doesn't even contribute to your own benefit.

Replies from: Lumifer
comment by Lumifer · 2014-11-21T02:39:14.010Z · LW(p) · GW(p)

Your "choice" doesn't even contribute to your own benefit.

You keep thinking that and I'll enjoy the delicious ice cream that I chose.

Replies from: nshepperd
comment by nshepperd · 2014-11-21T02:54:46.795Z · LW(p) · GW(p)

If it's delicious, then any observer who knows what you consider delicious could have predicted what you chose. (Unless there are a few flavours that you deem exactly equally delicious, in which case it makes no difference, and you are choosing at random between them.)

Replies from: Lumifer
comment by Lumifer · 2014-11-21T03:11:12.455Z · LW(p) · GW(p)

in which case it makes no difference, and you are choosing at random between them

Oh, no, it does make a difference for my flavour preferences are not stable and depend on a variety of things like my mood, the season, the last food I ate, etc. etc.

Replies from: nshepperd
comment by nshepperd · 2014-11-21T03:12:51.812Z · LW(p) · GW(p)

And all of those things are known by a sufficiently informed observer...

Replies from: Lumifer
comment by Lumifer · 2014-11-21T03:14:32.332Z · LW(p) · GW(p)

And all of those things are known by a sufficiently informed observer...

Show me one.

Replies from: nshepperd
comment by nshepperd · 2014-11-21T03:33:52.702Z · LW(p) · GW(p)

No need. It only needs to be possible for

Can anyone predict with complete certainty?

to be true!

Replies from: Lumifer
comment by Lumifer · 2014-11-21T03:35:16.243Z · LW(p) · GW(p)

So how do you know what's possible? Do you have data, by any chance? Pray tell!

Replies from: nshepperd
comment by nshepperd · 2014-11-21T03:43:23.832Z · LW(p) · GW(p)

Are you going to assert that your preferences are stored outside your brain, beyond the reach of causality? Perhaps in some kind of platonic realm?

Mood - check, that shows up in facial expressions, at least.

Season - check, all you have to do is look out the window, or look at the calendar.

Last food you ate - check, I can follow you around for a day, or just scan your stomach.

This line of argument really seems futile. Is it so hard to believe that your mind is made of parts, just like everything else in the universe?

Replies from: Lumifer
comment by Lumifer · 2014-11-21T03:45:07.995Z · LW(p) · GW(p)

So, show me.

comment by TheOtherDave · 2014-11-21T02:07:21.007Z · LW(p) · GW(p)

OK. Thanks for clarifying.

comment by dxu · 2014-11-21T01:11:28.859Z · LW(p) · GW(p)

Whichever choice you decide to make is the choice you were always going to make, and determines the class you are, and always were in.

Yes, I understand that.

So then just decide to one-box. You aren't something outside of physics; you are part of physics and your decision is as much a part of physics as anything else. Your decision to one-box or two-box is determined by physics, true, but that's not an excuse for not choosing! That's like saying, "The future is already set in stone; if I get hit by a car in the street, that's what was always going to happen. Therefore I'm going to stop looking both ways when I cross the street. After all, if I get hit, that's what physics said was going to happen, right?"

Replies from: Lumifer
comment by Lumifer · 2014-11-21T01:48:27.896Z · LW(p) · GW(p)

So then just decide to one-box

Errr... can I? nshepperd says

Whichever choice you decide to make is the choice you were always going to make

so I don't see anything I can do. Predestination is a bitch.

Your decision to one-box or two-box is determined by physics, true, but that's not an excuse for not choosing!

It's not an excuse, it's a reason. Que sera, sera -- what will be will be. I don't understand what is that "choosing" you speak of :-/

if I get hit by a car in the street, that's what was always going to happen

Yes, that's what you are telling me. It's just physics, right?

Therefore I'm going to stop looking both ways when I cross the street

Um, not my decision again. It was predetermined whether I would look both ways or not.

Replies from: nshepperd, dxu
comment by nshepperd · 2014-11-21T02:29:29.310Z · LW(p) · GW(p)

Choosing is deliberation, deliberation is choosing. Just consider the alternatives (one-box, two-box) and do the one that results in you having more money.

Whichever choice you decide to make is the choice you were always going to make

The keyword here is decide. Just because you were always going to make that choice doesn't mean you didn't decide. You weighed up the costs and benefits of each option, didn't you?

It really isn't hard. Just think about it, then take one box.

Replies from: EHeller, Lumifer
comment by EHeller · 2014-11-21T02:44:42.166Z · LW(p) · GW(p)

Choosing is deliberation, deliberation is choosing. Just consider the alternatives (one-box, two-box) and do the one that results in you having more money.

Clearly that's two-boxing. Omega already made his choice, so if he thought I'd two-box, I'll get:

- One box: nothing
- Two box: the small reward

If Omega thought I'd one-box:

- One box: big reward
- Two box: big reward + small reward

Two-boxing results in more money no matter how Omega thought I'd choose.

Replies from: nshepperd, Lumifer, Jiro
comment by nshepperd · 2014-11-21T03:02:40.239Z · LW(p) · GW(p)

Missing the Point: now a major motion picture.

comment by Lumifer · 2014-11-21T02:47:24.832Z · LW(p) · GW(p)

Is that the drumbeat of nshepperd's head against the desk that I hear..? :-D

comment by Jiro · 2014-11-21T16:02:10.992Z · LW(p) · GW(p)

What if I try to predict what Omega does, and do the opposite?

That would mean that either 1) there are some strategies I am incapable of executing, or 2) Omega can't in principle predict what I do, since it is indirectly predicting itself.

Alternatively, what if instead of me trying to predict Omega, we run this with transparent boxes and I base my decision on what I see in the boxes, doing the opposite of what Omega predicted? Again, Omega is indirectly predicting itself.

Replies from: nshepperd, hairyfigment
comment by nshepperd · 2014-11-22T01:53:30.692Z · LW(p) · GW(p)

I don't see how this is relevant, but yes, in principle it's impossible to predict the universe perfectly, because the universe plus your brain is bigger than your brain alone. The exception is if you live in a bubble universe that is bigger than the rest of the universe, and whose interaction with the rest of the universe is limited precisely to your chosen manipulation of the connecting bridge; basically, if you are AIXI, then you may be able to perfectly predict the universe conditional on your actions.

This has pretty much no impact on actual Newcomb's though, since we can just define such problems away by making Omega do the obvious thing to prevent such shenanigans ("trolls get no money"). For the purpose of the thought experiment, action-conditional predictions are fine.

IOW, this is not a problem with Newcomb's. By the way, this has been discussed previously.

Replies from: EHeller
comment by EHeller · 2014-11-22T02:29:05.461Z · LW(p) · GW(p)

You've now destroyed the usefulness of Newcomb as a potentially interesting analogy to the real world. In real world games, my opponent is trying to infer my strategy and I'm trying to infer theirs.

If Newcomb is only about a weird world where Omega can try and predict the player's actions, but the player is not allowed to predict Omega's, then it's sort of a silly problem. It's lost most of its generality because you've explicitly disallowed the majority of strategies.

If you allow the player to pursue his own strategy, then it's still a silly problem, because the question ends up being inconsistent (because if Omega plays Omega, nothing can happen).

Replies from: nshepperd
comment by nshepperd · 2014-11-22T02:59:43.138Z · LW(p) · GW(p)

In real-world games, we spend most of our time trying to make action-conditional predictions: "If I play Foo, then my opponent will play Bar". There's no attempt to circularly predict yourself with unconditional predictions. The sensible formulation of Newcomb's matches that.

(For example, transparent boxes: Omega predicts "if I fill both boxes, then player will ___" and fills the boxes based on that prediction. Or a few other variations on that.)
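A rough Python sketch of that filling rule (the `predict` function is a stand-in for Omega's model of the player and is purely hypothetical):

    # Sketch of an action-conditional filling rule for transparent Newcomb.
    def fill_box_b(predict, policy):
        # Omega asks only the conditional question:
        # "if I fill both boxes, will this player take just Box B?"
        predicted_action = predict(policy, "box B full")
        return "full" if predicted_action == "one-box" else "empty"

    # With a perfect predictor, prediction is just running the player's policy.
    perfect_predictor = lambda policy, observation: policy(observation)
    always_one_box = lambda observation: "one-box"
    print(fill_box_b(perfect_predictor, always_one_box))  # "full"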

Replies from: EHeller
comment by EHeller · 2014-11-22T04:17:13.984Z · LW(p) · GW(p)

In many (probably most?) games we consider the opponent's strategy, not simply their next move. Making moves in an attempt to confuse your opponent's estimation of your own strategy is a common tactic in many games.

Your "modified Newcomb" doesn't allow the chooser to have a strategy- they aren't allowed to say "if I predict Omega did X, I'll do Y." Its a weird sort of game where my opponent takes my strategy into account, but something keeps me from considering my opponents.

comment by hairyfigment · 2014-11-21T19:32:10.193Z · LW(p) · GW(p)

Can't Omega follow the strategy of 'Trolls get no money,' which by assumption is worse for you? I feel like this would result in some false positives, but perhaps not - and the scenario says nothing about the people who don't get to play in any case.

Replies from: Jiro
comment by Jiro · 2014-11-21T20:48:53.856Z · LW(p) · GW(p)

Can't Omega follow the strategy of 'Trolls get no money,'

No, because that's fighting the hypothetical. Assume that he doesn't do that.

Replies from: wedrifid, dxu
comment by wedrifid · 2014-11-22T03:41:18.554Z · LW(p) · GW(p)

No, because that's fighting the hypothetical. Assume that he doesn't do that.

It is actually approximately the opposite of fighting the hypothetical. It is managing the people who are trying to fight the hypothetical. Precise wording of the details of the specification can be used to preempt such replies, but for casual definitions that assume good faith, explicit clauses for the distracting edge cases sometimes need to be added.

Replies from: Jiro
comment by Jiro · 2014-11-23T03:43:48.228Z · LW(p) · GW(p)

It is fighting the hypothetical because you are not the only one providing hypotheticals. I am too; I'm providing a hypothetical where the player's strategy makes this the least convenient possible world for people who claim that having such an Omega is a self-consistent concept. Saying "no, you can't use that strategy" is fighting the hypothetical.

Moreover, the strategy "pick the opposite of what I predict Omega does" is a member of a class of strategies that have the same problem; it's just an example of such a strategy that is particularly clear-cut, and the fact that it is clear-cut and blatantly demonstrates the problem with the scenario is the very aspect that leads you to call it trolling Omega. "You can't troll Omega" becomes equivalent to "you can't pick a strategy that makes the flaw in the scenario too obvious".

Replies from: nshepperd, wedrifid
comment by nshepperd · 2014-11-23T12:10:31.637Z · LW(p) · GW(p)

If your goal is to show that Omega is "impossible" or "inconsistent", then having Omega adopt the strategy "leave both boxes empty for people who try to predict me / do any other funny stuff" is a perfectly legitimate counterargument. It shows that Omega is in fact consistent if he adopts such strategy. You have no right to just ignore that counterargument.

Indeed, Omega requires a strategy for when he finds that you are too hard to predict. The only reason such a strategy is not provided beforehand in the default problem description is because we are not (in the context of developing decision theory) talking about situations where you are powerful enough to predict Omega, so such a specification would be redundant. The assumption, for the purpose of illuminating problems with classical decision theory, is that Omega has vastly more computational resources than you do, so that the difficult decision tree that presents the problem will obtain.

By the way, it is extremely normal for there to be strategies you are "incapable of executing". For example, I am currently unable to execute the strategy "predict what you will say next, and counter it first", because I can't predict you. Computation is a resource like any other.

Replies from: Jiro, EHeller, dxu
comment by Jiro · 2014-11-23T16:50:26.196Z · LW(p) · GW(p)

If your goal is to show that Omega is "impossible" or "inconsistent", then having Omega adopt the strategy "leave both boxes empty for people who try to predict me / do any other funny stuff" is a perfectly legitimate counterargument.

If you are suggesting that Omega read my mind and think "does this human intend to outsmart me, Omega", then sure he can do that. But that only takes care of the specific version of the strategy where the player has conscious intent.

If you're suggesting "Omega figures out whether my strategy is functionally equivalent to trying to outsmart me", you're basically claiming that Omega can solve the halting problem by analyzing the situation to determine if it's an instance of the halting problem, and outputting an appropriate answer if that is the case. That doesn't work.

Indeed, Omega requires a strategy for when he finds that you are too hard to predict.

That still requires that he determine that I am too hard to predict, which either means solving the halting problem or running on a timer. Running on a timer is a legitimate answer, except again it means that there are some strategies I cannot execute.

The assumption, for the purpose of illuminating problems with classical decision theory, is that Omega has vastly more computational resources than you do, so that the difficult decision tree that presents the problem will obtain.

I thought the assumption is that I am a perfect reasoner and can execute any strategy.

Replies from: dxu, nshepperd
comment by dxu · 2014-11-23T21:13:32.201Z · LW(p) · GW(p)

I thought the assumption is that I am a perfect reasoner and can execute any strategy.

Why would this be the assumption?

comment by nshepperd · 2014-11-23T22:33:22.879Z · LW(p) · GW(p)

Running on a timer is a legitimate answer

There's your answer.

except again it means that there are some strategies I cannot execute.

I don't see how omega running his simulation on a timer makes any difference for this, but either way this is normal and expected. Problem resolved.

I thought the assumption is that I am a perfect reasoner and can execute any strategy.

Not at all. Though it may be convenient to postulate arbitrarily large computing power (as long as Omega's power is increased to match) so that we can consider brute force algorithms instead of having to also worry about how to make it efficient.

(Actually, if you look at the decision tree for Newcomb's, the intended options for your strategy are clearly supposed to be "unconditionally one-box" and "unconditionally two-box", with potentially a mixed strategy allowed. Which is why you are provided with no information whatsoever that would allow you to predict Omega. And indeed the decision tree explicitly states that your state of knowledge is identical whether Omega fills or doesn't fill the box.)

Replies from: Jiro
comment by Jiro · 2014-11-24T01:54:24.063Z · LW(p) · GW(p)

I don't see how omega running his simulation on a timer makes any difference for this,

It's me who has to run on a timer. If I am only permitted to execute 1000 instructions to decide what my answer is, I may not be able to simulate Omega.

Though it may be convenient to postulate arbitrarily large computing power

Yes, I am assuming that I am capable of executing arbitrarily many instructions when computing my strategy.

the intended options for your strategy are clearly supposed to be "unconditionally one-box" and "unconditionally two-box", with potentially a mixed strategy allowed. Which is why you are provided wth no information whatsoever that would allow you to predict omega

I know what problem Omega is trying to solve. If I am a perfect reasoner, and I know that Omega is, I should be able to predict Omega without actually having knowledge of Omega's internals.

Actually, if you look at the decision tree for Newcomb's, the intended options for your strategy are clearly supposed to be "unconditionally one-box" and "unconditionally two-box",

Deciding which branch of the decision tree to pick is something I do using a process that has, as a step, simulating Omega. It is tempting to say "it doesn't matter what process you use to choose a branch of the decision tree, each branch has a value that can be compared independently of why you chose the branch", but that's not correct. In the original problem, if I just compare the branches without considering Omega's predictions, I should always two-box. If I consider Omega's predictions, that cuts off some branches in a way which changes the relative ranking of the choices. If I consider my predictions of Omega's predictions, that cuts off more branches, in a way which prevents the choices from even having a ranking.

Replies from: nshepperd, wedrifid, wedrifid
comment by nshepperd · 2014-11-24T03:37:38.604Z · LW(p) · GW(p)

Yes, I am assuming that I am capable of executing arbitrarily many instructions when computing my strategy.

But apparently you want to ignore the part when I said Omega has to have his own computing power increased to match. The fact that Omega is vastly more intelligent and computationally powerful than you is a fundamental premise of the problem. This is what stops you from magically "predicting him".

Look, in Newcomb's problem you are not supposed to be a "perfect reasoner" with infinite computing time or whatever. You are just a human. Omega is the superintelligence. So, any argument you make that is premised on being a perfect reasoner is automatically irrelevant and inapplicable. Do you have a point that is not based on this misunderstanding of the thought experiment? What is your point, even?

Replies from: Jiro
comment by Jiro · 2014-11-24T15:24:21.779Z · LW(p) · GW(p)

But apparently you want to ignore the part when I said Omega has to have his own computing power increased to match.

It's already arbitrarily large. You want that expanded to match something arbitrarily large?

Look, in Newcomb's problem you are not supposed to be a "perfect reasoner"

Asking "which box should you pick" implies that you can follow a chain of reasoning which outputs an answer about which box to pick.

It sounds like your decision making strategy fails to produce a useful result.

My decision making strategy is "figure out what Omega did and do the opposite". It only fails to produce a useful result if Omega fails to produce a useful result (perhaps by trying to predict me and not halting). And Omega goes first, so we never get to the point where I try my decision strategy and don't halt.

(And if you're going to respond with "then Omega knows in advance that your decision strategy doesn't halt", how's he going to know that?)

Furthermore, there's always the transparent boxes situation. Instead of explicitly simulating Omega, I implicitly simulate Omega by looking in the transparent boxes and determining what Omega's choice was.

What is your point, even?

That Omega cannot be a perfect predictor because being one no matter what strategy the human uses would imply being able to solve the halting problem.

Replies from: nshepperd
comment by nshepperd · 2014-11-25T03:05:00.819Z · LW(p) · GW(p)

It's already arbitrary large. You want that expanded to match arbitrarily large?

When I say "arbitrarily large" I do not mean infinite. You have some fixed computing power, X (which you can interpret as "memory size" or "number of computations you can do before the sun explodes the next day" or whatever). The premise of newcomb's is that Omega has some fixed computing power Q * X, where Q is really really extremely large. You can increase X as much as you like, as long as Omega is still Q times smarter.

Asking "which box should you pick" implies that you can follow a chain of reasoning which outputs an answer about which box to pick.

Which does not even remotely imply being a perfect reasoner. An ordinary human is capable of doing this just fine.

My decision making strategy is "figure out what Omega did and do the opposite". It only fails to produce a useful result if Omega fails to produce a useful result (perhaps by trying to predict me and not halting).

Two points: first, if Omega's memory is Q times larger than yours, you can't fit a simulation of him in your head, so predicting by simulation is not going to work. Second, if Omega has Q times as much computing time as you, you can try to predict him (by any method) for X steps, at which point the sun explodes. Naturally, Omega simulates you for X steps, notices that you didn't give a result before the sun explodes, so leaves both boxes empty and flies away to safety.

That Omega cannot be a perfect predictor because being one no matter what strategy the human uses would imply being able to solve the halting problem.

Only under the artificial irrelevant-to-the-thought-experiment conditions that require him to care whether you'll one-box or two-box after standing in front of the boxes for millions of years thinking about it. Whether or not the sun explodes, or Omega himself imposes a time limit, a realistic Omega only simulates for X steps, then stops. No halting-problem-solving involved.

In other words, if "Omega isn't a perfect predictor" means that he can't simulate a physical system for an infinite number of steps in finite time then I agree but don't give a shit. Such a thing is entirely unneccessary. In the thought experiment, if you are a human, you die of aging after less than 100 years. And any strategy that involves you thinking in front of the boxes until you die of aging (or starvation, for that matter) is clearly flawed anyway.

Furthermore, there's always the transparent boxes situation. Instead of explicitly simulating Omega, I implicitly simulate Omega by looking in the transparent boxes and determining what Omega's choice was.

This example is less stupid since it is not based on trying to circularly predict yourself. But in this case Omega just makes action-conditional predictions and fills the boxes however he likes.

comment by wedrifid · 2014-11-24T02:09:27.838Z · LW(p) · GW(p)

If I consider my predictions of Omega's predictions, that cuts off more branches, in a way which prevents the choices from even having a ranking.

It sounds like your decision making strategy fails to produce a useful result. That is unfortunate for anyone who happens to attempt to employ it. You might consider changing it to something that works.

"Ha! What if I don't choose One box OR Two boxes! I can choose No Boxes out of indecision instead!" isn't a particularly useful objection.

comment by wedrifid · 2014-11-24T02:04:41.190Z · LW(p) · GW(p)

It's me who has to run on a timer.

No, Nshepperd is right. Omega imposing computation limits on itself solves the problem (such as it is). You can waste as much time as you like. Omega is gone and so doesn't care whether you pick any boxes before the end of time. This is a standard solution for considering cooperation between bounded rational agents with shared source code.

When attempting to achieve mutual cooperation (essentially what Newcomblike problems are all about) making yourself difficult to analyse only helps against terribly naive intelligences. ie. It's a solved problem and essentially useless for all serious decision theory discussion about cooperation problems.

comment by EHeller · 2014-11-24T01:43:35.491Z · LW(p) · GW(p)

If your goal is to show that Omega is "impossible" or "inconsistent", then having Omega adopt the strategy "leave both boxes empty for people who try to predict me / do any other funny stuff" is a perfectly legitimate counterargument. It shows that Omega is in fact consistent if he adopts such strategy. You have no right to just ignore that counterargument.

This contradicts the accuracy stated at the beginning. Omega can't leave both boxes empty for people who try to adopt a mixed strategy AND also maintain his 99.whatever accuracy on one-boxers.

And even if Omega has way more computational power than I do, I can still generate a random number. I can flip a coin that's 60/40 one-box, two-box. The most accurate Omega can be, then, is to assume I one-box.

Replies from: nshepperd
comment by nshepperd · 2014-11-24T03:10:56.934Z · LW(p) · GW(p)

This contradicts the accuracy stated at the beginning. Omega can't leave both boxes empty for people who try to adopt a mixed strategy AND also maintain his 99.whatever accuracy on one-boxers.

He can maintain his 99% accuracy on deterministic one-boxers, which is all that matters for the hypothetical.

Alternatively, if we want to explicitly include mixed strategies as an available option, the general answer is that Omega fills the box with probability = the probability that your mixed strategy one-boxes.
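A quick check of that rule (a sketch; q is the probability your mixed strategy one-boxes, and Omega independently fills Box B with that same probability):

    # Expected payout when Omega fills Box B with probability q, where q is the
    # probability that your mixed strategy one-boxes (standard payoffs assumed).
    A, B = 1_000, 1_000_000

    def ev_mixed(q):
        # You get Box B's contents either way; you get Box A only when you two-box.
        return (1 - q) * A + q * B

    for q in (0.0, 0.5, 1.0):
        print(q, ev_mixed(q))  # 1000.0, 500500.0, 1000000.0: pure one-boxing wins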

comment by dxu · 2014-11-23T16:30:14.063Z · LW(p) · GW(p)

All of this is very true, and I agree with it wholeheartedly. However, I think Jiro's second scenario is more interesting, because then predicting Omega is not needed; you can see what Omega's prediction was just by looking in (the now transparent) Box B.

As I argued in this comment, however, the scenario as it currently stands is not well-specified; we need some idea of what sort of rule Omega is using to fill the boxes based on his prediction. I have not yet come up with a rule that would allow Omega to be consistent in such a scenario, though, and I'm not sure whether consistency in this situation is even possible for Omega. Any comments?

Replies from: wedrifid
comment by wedrifid · 2014-11-24T01:54:18.605Z · LW(p) · GW(p)

As I argued in this comment, however, the scenario as it currently is is not well-specified; we need some idea of what sort of rule Omega is using to fill the boxes based on his prediction.

Previous discussions of Transparent Newcomb's problem have been well specified. I seem to recall doing so in footnotes so as to avoid distraction.

I have not yet come up with a rule that would allow Omega to be consistent in such a scenario, though, and I'm not sure if consistency in this situation would even be possible for Omega. Any comments?

The problem (such as it is) is that there is ambiguity between the possible coherent specifications, not a complete lack. As your comment points out there are (merely) two possible situations for the player to be in and Omega is able to counter-factually predict the response to either of them, with said responses limited to a boolean. That's not a lot of permutations. You could specify all 4 exhaustively if you are lazy.

IF (Two box when empty AND One box when full) THEN X
IF ...

Any difficulty here is in choosing the set of rewards that most usefully illustrate the interesting aspects of the problem.

Replies from: dxu
comment by dxu · 2014-11-24T05:13:32.535Z · LW(p) · GW(p)

Any difficulty here is in choosing the set of rewards that most usefully illustrate the interesting aspects of the problem.

I'd say that about hits the nail on the head. The permutations certainly are exhaustively specifiable. The problem is that I'm not sure how to specify some of the branches. Here's all four possibilities (written in pseudo-code following your example):

  1. IF (Two box when empty And Two box when full) THEN X
  2. IF (One box when empty And One box when full) THEN X
  3. IF (Two box when empty And One box when full) THEN X
  4. IF (One box when empty And Two box when full) THEN X

The rewards for 1 and 2 seem obvious; I'm having trouble, however, imagining what the rewards for 3 and 4 should be. The original Newcomb's Problem had a simple point to demonstrate, namely that logical connections should be respected along with causal connections. This point was made simple by the fact that there's two choices, but only one situation. When discussing transparent Newcomb, though, it's hard to see how this point maps to the latter two situations in a useful and/or interesting way.
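One way to write the exhaustive specification down (a sketch; the entries for cases 3 and 4 are left as placeholders precisely because that is the open question):

    # Sketch: an exhaustive rule for transparent Newcomb, keyed by the player's
    # full policy (action when Box B is seen empty, action when it is seen full).
    FILLING_RULE = {
        ("two-box", "two-box"): "empty",        # case 1
        ("one-box", "one-box"): "full",         # case 2
        ("two-box", "one-box"): "unspecified",  # case 3
        ("one-box", "two-box"): "unspecified",  # case 4 (the 'perverse' policy)
    }

    def omega_fills(action_when_empty, action_when_full):
        return FILLING_RULE[(action_when_empty, action_when_full)]

    print(omega_fills("one-box", "one-box"))  # "full"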

Replies from: wedrifid, CCC
comment by wedrifid · 2014-11-24T07:30:07.607Z · LW(p) · GW(p)

When discussing transparent Newcomb, though, it's hard to see how this point maps to the latter two situations in a useful and/or interesting way.

Option 3 is of the most interest to me when discussing the Transparent variant. Many otherwise adamant One Boxers will advocate (what is in effect) 3 when first encountering the question. Since I advocate strategy 2 there is a more interesting theoretical disagreement, i.e. from my perspective I get to argue with (literally) less wrong people, with a correspondingly higher chance that I'm the one who is confused.

The difference between 2 and 3 becomes more obviously relevant when noise is introduced (eg. 99% accuracy Omega). I choose to take literally nothing in some situations. Some think that is crazy...

In the simplest formulation the payoff for three is undetermined. But not undetermined in the sense that Omega's proposal is made incoherent. Arbitrary as in Omega can do whatever the heck it wants and still construct a coherent narrative. I'd personally call that an obviously worse decision but for simplicity prefer to define 3 as a defect (Big Box Empty outcome).

As for 4... A payoff of both boxes empty (or both boxes full but contaminated with anthrax spores) seems fitting. But simply leaving the large box empty is sufficient for decision theoretic purposes.

Out of interest, and because your other comments on the subject seem well informed, what do you choose when you encounter Transparent Newcomb and find the big box empty?

Replies from: dxu
comment by dxu · 2014-11-24T19:53:58.698Z · LW(p) · GW(p)

what do you choose when you encounter Transparent Newcomb and find the big box empty?

This is a question that I find confusing due to conflicting intuitions. Fortunately, since I endorse reflective consistency, I can replace that question with the following one, which is equivalent in my decision framework, and which I find significantly less confusing:

"What would you want to precommit to doing, if you encountered transparent Newcomb and found the big box (a.k.a. Box B) empty?"

My answer to this question would be dependent upon Omega's rule for rewarding players. If Omega only fills Box B if the player employs the strategy outlined in 2, then I would want to precommit to unconditional one-boxing--and since I would want to precommit to doing so, I would in fact do so. If Omega is willing to reward the player by filling Box B even if the player employs the strategy outlined in 3, then I would see nothing wrong with two-boxing, since I would have wanted to precommit to that strategy in advance. Personally, I find the former scenario--the one where Omega only rewards people who employ strategy 2--to be more in line with the original Newcomb's Problem, for some intuitive reason that I can't quite articulate.

What's interesting, though, is that some people two-box even upon hearing that Omega only rewards the strategy outlined in 2--upon hearing, in other words, that they are in the first scenario described in the above paragraph. I would imagine that their reasoning process goes something like this: "Omega has left Box B empty. Therefore he has predicted that I'm going to two-box. It is extremely unlikely a priori that Omega is wrong in his predictions, and besides, I stand to gain nothing from one-boxing now. Therefore, I should two-box, both because it nets me more money and because Omega predicted that I would do so."

I disagree with this line of reasoning, however, because it is very similar to the line of reasoning that leads to self-fulfilling prophecies. As a rule, I don't do things just because somebody said I would do them, even if that somebody has a reputation for being extremely accurate, because then that becomes the only reason it happened in the first place. As with most situations involving acausal reasoning, however, I can only place so much confidence in me being correct, as opposed to me being so confused I don't even realize I'm wrong.

comment by CCC · 2014-11-24T09:13:43.452Z · LW(p) · GW(p)

It would seem to me that Omega's actions would be as follows:

  • IF (Two box when empty And Two box when full) THEN Empty
  • IF (One box when empty And One box when full) THEN Full
  • IF (Two box when empty And One box when full) THEN Empty or Full
  • IF (One box when empty And Two box when full) THEN Refuse to present boxes

Cases 1 and 2 are straightforward. Case 3 works for the problem, no matter which set of boxes Omega chooses to leave.

In order for Omega to maintain its high prediction accuracy, though, it is necessary - if Omega predicts that a given player will choose option 4 - that Omega simply refuse to present the transparent boxes to this player. Or, at least, that the number of players who follow the other three options should vastly outnumber the fourth-option players.

Replies from: dxu
comment by dxu · 2014-11-24T19:57:55.936Z · LW(p) · GW(p)

This is an interesting response because 4 is basically what Jiro was advocating earlier in the thread, and you're basically suggesting that Omega wouldn't even present the opportunity to people who would try to do that. Would you agree with this interpretation of your comment?

Replies from: CCC
comment by CCC · 2014-11-25T08:41:34.435Z · LW(p) · GW(p)

Yes, I would.

If we take the assumption, for the moment, that the people who would take option 4 form at least 10% of the population in general (this may be a little low), and we further take the idea that Omega has a track record of success in 99% or more of previous trials (as is often specified in Newcomb-like problems), then it is clear that whatever algorithm Omega is using to decide who to present the boxes to is biased, and biased heavily, against offering the boxes to such a person.

Consider:

P(P) = the probability that Omega will present the boxes to a given person.

P(M|P) = the probability that Omega will fill the boxes correctly (empty for a two-boxer, full for a one-boxer), given that the boxes are presented.

P(M'|P) = the probability that Omega will fail to fill the boxes correctly, given that the boxes are presented.

P(O) = the probability that the person will choose option 4.

P(M'|O) = 1 (from the definition of option 4), therefore P(M|O) = 0;

and if Omega is a perfect predictor, then P(M|O') = 1 as well.

P(M|P) = 0.99 (from the statement of the problem)

P(O) = 0.1 (assumed)

Now, of all the people to whom boxes are presented, Omega is getting at most one percent wrong: P(M'|P) <= 0.01. Since P(M'|O) = 1 and P(M'|O') = 0, the error rate among presented players equals the fraction of presented players who choose option 4, so P(O|P) <= 0.01.

If Omega is a less-than-perfect predictor, then P(M'|O') > 0, and P(O|P) < 0.01.

And since P(O|P) <= 0.01 < P(O) = 0.1, Bayes' theorem gives P(P|O) < P(P). I therefore conclude that Omega must have a bias - and a fairly strong one - against presenting the boxes to such perverse players.
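The same bound, checked numerically (a sketch of the arithmetic only, under the stated assumptions):

    # Sketch: if at most 1% of presented players are predicted wrongly, and
    # option-4 players are always predicted wrongly while everyone else is
    # predicted correctly, then option-4 players make up at most 1% of those
    # presented with the boxes, versus an assumed 10% base rate.
    p_wrong_given_presented = 0.01  # Omega's error rate among participants
    p_option4_base_rate = 0.10      # assumed share of option-4 players overall

    # P(wrong | presented) = P(option 4 | presented) * 1 + (1 - P(option 4 | presented)) * 0
    p_option4_given_presented = p_wrong_given_presented

    print(p_option4_given_presented < p_option4_base_rate)  # True: Omega screens them out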

comment by wedrifid · 2014-11-24T01:03:37.519Z · LW(p) · GW(p)

I am too; I'm providing a hypothetical where the player's strategy makes this the least convenient possible world for people who claim that having such an Omega is a self-consistent concept.

It may be the least convenient possible world. More specifically it is the minor inconvenience of being careful to specify the problem correctly so as not to be distracted. Nshepperd gives some of the reasoning typically used in such cases.

Moreover, the strategy "pick the opposite of what I predict Omega does" is a member of a class of strategies that have the same problem

What happens when you try to pick the opposite of what you predict Omega does is something like what happens when you try to beat Deep Fritz 14 at chess while outrunning a sports car. You just fail. Your brain is a few pounds of fat approximately optimised for out-competing other primates for mating opportunities. Omega is a super-intelligence. The assumption that Omega is smarter than the player isn't an unreasonable one and is fundamental to the problem. Defying it is a particularly futile attempt to fight the hypothetical by basically ignoring it.

Generalising your proposed class to executing maximally inconvenient behaviours in response to, for example, the transparent Newcomb's problem is where it actually gets (tangentially) interesting. In that case you can be inconvenient without out-predicting the superintelligence, and so the transparent Newcomb's problem requires more care with the if clause.

comment by dxu · 2014-11-21T20:56:22.071Z · LW(p) · GW(p)

In the first scenario, I doubt you would be able to predict Omega with sufficient accuracy to be able to do what you're suggesting. Transparent boxes, though, are interesting. The problem is, the original Newcomb's Problem had a single situation with two possible choices involved; transparent Newcomb, however, involves two situations:

  1. Transparent Box B contains $1000000.
  2. Transparent Box B contains nothing.

It's unclear from this what Omega is even trying to predict; is he predicting your response to the first situation? The second one? Both? Is he following the rule: "If the player two-boxes in either situation, fill Box B with nothing"? Is he following the rule: "If the player one-boxes in either situation, fill Box B with $1000000"? The problem isn't well-specified; you'll have to give a better description of the situation before a response can be given.

Replies from: Jiro
comment by Jiro · 2014-11-21T22:16:48.644Z · LW(p) · GW(p)

In the first scenario, I doubt you would be able to predict Omega with sufficient accuracy to be able to do what you're suggesting.

That falls under 1) there are some strategies I am incapable of executing.

transparent Newcomb, however, involves two situations:

  1. Transparent Box B contains $1000000.
  2. Transparent Box B contains nothing.

The transparent scenario is just a restatement of the opaque scenario with transparent boxes instead of "I predict what Omega does". If you think the transparent scenario involves two situations, then the opaque scenario involves two situations as well. (1 = opaque Box B contains $1000000 and I predict that Omega put in $1000000; 2 = opaque Box B contains nothing and I predict that Omega put in nothing.) If you object that we have no reason to think both of those opaque situations are possible, I can make a similar objection to the transparent situations.

comment by Lumifer · 2014-11-21T02:38:38.200Z · LW(p) · GW(p)

Just because you were always going to make that choice doesn't mean you didn't decide.

Yes, it does, for the meaning of "decide" that I use.

It really isn't hard. Just think about it, then take one box.

LOL. It really isn't hard. Just think about it, then accept Jesus as your personal saviour... X-)

Replies from: nshepperd
comment by nshepperd · 2014-11-21T03:01:05.241Z · LW(p) · GW(p)

Or think about it, then take two boxes.

Either way, you decide how much money you get, and the contents of the boxes are your fault.

Replies from: EHeller, Lumifer
comment by EHeller · 2014-11-21T03:13:10.097Z · LW(p) · GW(p)

What I've done for Newcomb problems is that I've precommitted to one-boxing, but then I've paid a friend to follow me at all times. Just before I choose the boxes, he is to perform complicated neurosurgery to turn me into a two-boxer. That way I maximize my gain.

Replies from: nshepperd, ike
comment by nshepperd · 2014-11-21T03:25:30.219Z · LW(p) · GW(p)

That's clever, but of course it won't work. Omega can predict the outcome of neurosurgery.

Replies from: EHeller
comment by EHeller · 2014-11-21T15:53:01.838Z · LW(p) · GW(p)

Better wipe my memory of getting my friend to follow me then.

Also, I have built a second Omega, and given it to others. They are instructed to two box if 2 Omega predicts 1 Omega thinks they'll one box, and vice versa.

comment by ike · 2014-11-21T03:26:29.486Z · LW(p) · GW(p)

... and that costs less than $1000?

comment by Lumifer · 2014-11-21T03:13:14.830Z · LW(p) · GW(p)

the contents of the boxes are your fault

If I am predestined, nope, not my fault. In fact, in the full determinism case I'm not sure there's "me" at all.

But anyway, how about that -- you introduce me to Omega first, and I'll think about his two boxes afterwards...

comment by dxu · 2014-11-21T04:53:32.869Z · LW(p) · GW(p)

Um, not my decision again. It was predetermined whether I would look both ways or not.

So the next time you cross the street, are you going to look both ways or not? You can't calculate the physical consequences of every particle interaction taking place in your brain, so taking the route the universe takes, i.e. just letting everything play out at the lowest level, is not an option for you and your limited processing power. And yet, for some reason, I suspect you'll probably answer that you will look both ways, despite being unable to actually predict your brain-state at the time of crossing the street. So if you can't actually predict your decisions perfectly as dictated by physics... how do you know that you'll actually look both ways next time you cross the street?

The answer is simple: you don't know for certain. But you know that, all things being equal, you prefer not getting hit by a car to getting hit by a car. And looking both ways helps to lower the probability of getting hit by a car. Therefore, given knowledge of your preferences and your decision algorithm, you will choose to look both ways.

Note that nowhere in the above explanation was determinism violated! Every step of the physics plays out as it should... and yet we observe that your choice still exists here! Determinism explains free will, not explains it away; just because everything is determined doesn't mean your choice doesn't exist! You still have to choose; if I ask you if you were forced to reply to my comment earlier by the Absolute Power of Determinism, or if you chose to write that comment of your own accord, I suspect you'll answer the latter.

Likewise, Omega may have predicted your decision, but that decision still falls to you to make. Just because Omega predicted what you would do doesn't mean you can get away with not choosing, or choosing sub-optimally. If I said, "I predict that tomorrow Lumifer will jump off a cliff," would you do it? Of course not. Conversely, if I said, "I predict that tomorrow Lumifer will not jump off a cliff," would you do it? Still of course not. Your choice exists regardless of whether there's some agent out there predicting what you do.

Replies from: Lumifer
comment by Lumifer · 2014-11-21T05:24:35.826Z · LW(p) · GW(p)

Therefore, given knowledge of your preferences and your decision algorithm, you will choose to look both ways.

Well, actually, it depends. Descending from flights of imagination down to earth, I sometimes look and sometimes don't. How do I know there isn't a car coming? In some cases hearing is enough. It depends.

nowhere in the above explanation was determinism violated!

You are mistaken. If my actions are predetermined, I chose nothing. You may prefer to use the word "choice" within determinism, I prefer not to.

just because everything is determined doesn't mean your choice doesn't exist

Yes, it does mean that. And, I'm afraid, just you asserting something -- even with force -- doesn't make it automatically true.

Your choice exists regardless of whether there's some agent out there predicting what you do.

Of course, but that's not what we are talking about. We are talking about whether choice exists at all.

Replies from: dxu
comment by dxu · 2014-11-21T05:41:57.245Z · LW(p) · GW(p)

just because everything is determined doesn't mean your choice doesn't exist

Yes, it does mean that.

Okay, it seems like we're just arguing definitions now. Taboo "choice" and any synonyms. Now that we have done that, I'm going to specify what I mean when I use the word "choice": the deterministic output of your decision algorithm over your preferences given a certain situation. If there is something in this definition that you feel does not capture the essence of "choice" as it relates to Newcomb's Problem, please point out exactly where you think this occurs, as well as why it is relevant in the context of Newcomb's Problem. In the meantime, I'm going to proceed with this definition.

So, in the above quote of mine, replacing "choice" with my definition gives you:

just because everything is determined doesn't mean the deterministic output of your decision algorithm over your preferences given a certain situation doesn't exist

We see that the above quote is trivially true, and I assert that "the deterministic output of your decision algorithm over your preferences given a certain situation" is what matters in Newcomb's Problem. If you have any disagreements, again, I would ask that you outline exactly what those disagreements are, as opposed to providing qualitative objections that sound pithy but don't really move the discussion forward. Thank you in advance for your time and understanding.
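
As an aside, the definition being used here can be written down as a toy program. Everything below is illustrative only - the function and the preference numbers are made up for the example, not taken from the thread:

    # A toy "decision algorithm": a deterministic function from (preferences, situation)
    # to an action. Under the definition above, its output is the agent's "choice".

    def decide(preferences, situation):
        # preferences: outcome -> utility (illustrative numbers)
        # situation: available action -> the outcome it leads to (toy simplification)
        return max(situation, key=lambda action: preferences[situation[action]])

    preferences = {"hit by car": -1000, "cross safely": 10}
    situation = {"look both ways": "cross safely", "don't look": "hit by car"}
    print(decide(preferences, situation))  # -> "look both ways"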

Replies from: Lumifer
comment by Lumifer · 2014-11-21T07:16:34.502Z · LW(p) · GW(p)

I'm going to specify what I mean when I use the word "choice": the deterministic output of your decision algorithm over your preferences given a certain situation.

Sure, you can define the word "choice" that way. The problem is, I don't have that. I do not have a decision algorithm over my preferences that produces some deterministic output given a certain situation. Such a thing does not exist.

You may define some agent for whom your definition of "choice" would be valid. But that's not me, and not any human I'm familiar with.

Replies from: dxu, nshepperd
comment by dxu · 2014-11-21T20:29:36.846Z · LW(p) · GW(p)

The problem is, I don't have that. I do not have a decision algorithm over my preferences that produces some deterministic output given a certain situation. Such a thing does not exist.

What is your basis for arguing that it does not exist?

You may define some agent for whom your definition of "choice" would be valid. But that's not me, and not any human I'm familiar with.

What makes humans so special as to be exempted from this?

Keep in mind that my goal here is not to perpetuate disagreement or to scold you for being stupid; it's to resolve whatever differences in reasoning are causing our disagreement. Thus far, your comments have been annoyingly evasive and don't really help me understand your position better, which has caused me to update toward you not actually having a coherent position on this. Presumably, you think you do have a coherent position, in which case I'd be much gratified if you'd just lay out everything that leads up to your position in one fell swoop rather than forcing myself and others to ask questions repeatedly in hope of clarification. Thank you.

Replies from: MarkusRamikin, Lumifer
comment by MarkusRamikin · 2014-11-22T14:56:51.990Z · LW(p) · GW(p)

I think it became clear that this debate is pointless the moment proving determinism became a prerequisite for getting anywhere.

I did try a different approach, but that was mostly dodged. I suspect Lumifer wants determinism to be a prerequisite; the freedom to do that slippery debate dance of theirs is so much greater then.

Either way, yeah. I'd let this die.

comment by Lumifer · 2014-11-22T02:01:20.989Z · LW(p) · GW(p)

What is your basis for arguing that it does not exist?

Introspection.

What's your basis for arguing that it does exist?

What makes humans so special as to exempted from this?

Tsk, tsk. Such naked privileging of an assertion.

to resolve whatever differences in reasoning are causing our disagreement.

Well, the differences are pretty clear. In simple terms, I think humans have free will and you think they don't. It's quite an old debate, at least a couple of millennia old and maybe more.

I am not quite sure why you have difficulties accepting that some people think free will exists. It's not that unusual a position to hold.

Replies from: dxu, nshepperd
comment by dxu · 2014-11-22T04:14:47.090Z · LW(p) · GW(p)

Introspection.

No offense, but this is a textbook example of an answer that sounds pithy but tells me, in a word, nothing. What exactly am I supposed to get out of this? How am I supposed to argue against this? This is a one-word answer that acts as a blackbox, preventing anyone from actually getting anything worthwhile out of it--just like "emergence". I have asked you several times now to lay out exactly what your disagreement is. Unless you and I have wildly varying definitions of the word "exactly", you have repeatedly failed to do so. You have displayed no desire to actually elucidate your position to the point where it would actually be arguable. I would characterize your replies to my requests so far as a near-perfect example of logical rudeness. My probability estimate of you actually wanting to go somewhere with this conversation is getting lower and lower...

Tsk, tsk. Such naked privileging of an assertion.

This is a thinly veiled expression of contempt that again asserts nothing. The flippancy this sort of remark exhibits suggests to me that you are more interested in winning than in truth-seeking. If you think I am characterizing your attitude uncharitably, please feel free to correct me on this point.

In simple terms, I think humans have free will and you think they don't.

Taboo "free will" and try to rephrase your argument without ever using that phrase or any synonymous terms/phrases. (An exception would be if you were trying to refer directly to the phrase, in which case you would put it in quotation marks, e.g. "free will".) Now then, what were you saying?

Replies from: Lumifer
comment by Lumifer · 2014-11-22T05:26:14.716Z · LW(p) · GW(p)

What exactly am I supposed to get out of this?

You are supposed to get out of this that you're asking me to prove a negative and I don't see a way to do this other than say "I've looked and found nothing" (aka introspection). How do you expect me to prove that I do NOT have a deterministic algorithm running my mind?

How am I supposed to argue against this?

You are not supposed to argue against this. You are supposed to say "Aha, so this a point where we disagree and there doesn't appear to be a way to prove it one way or another".

you have repeatedly failed to do so.

From my point of view you repeatedly refused to understand what I've been saying. You spent all your time telling me, but not listening.

This is a thinly veiled expression of contempt that again asserts nothing.

Oh, it does. It asserts that you are treating determinism as a natural and default answer and the burden is upon me to prove it wrong. I disagree.

Taboo "free will" and try to rephrase your argument without ever using that phrase or any synonymous terms/phrases.

Why? This is the core of my position. If you think I'm confused by words, tell me how I am confused. Is the problem that you don't understand me? I doubt this.

comment by nshepperd · 2014-11-22T02:11:53.577Z · LW(p) · GW(p)

Are you talking about libertarian free will? The uncaused causer? I would have hoped that LWers wouldn't believe such absurd things. Perhaps this isn't the right place for you if you still reject reductionism.

Replies from: TheAncientGeek, Lumifer
comment by Lumifer · 2014-11-22T02:39:55.673Z · LW(p) · GW(p)

Perhaps this isn't the right place for you

LOL. Do elaborate, it's going to be funny :-)

comment by nshepperd · 2014-11-21T07:40:47.398Z · LW(p) · GW(p)

If you're just going to provide every banal objection that you can without evidence or explanation in order to block discussion from moving forward, you might as well just stop posting.

comment by nshepperd · 2014-11-19T03:55:05.087Z · LW(p) · GW(p)

I can make a choice only after step 1, once the boxes are set up and unchangeable.

It's common to believe that we have the power to "change" the future but not the past. Popular conceptions of time travel such as Back To The Future show future events wavering in and out of existence as people deliberate about important decisions, to the extent of having a polaroid from the future literally change before our eyes.

All of this, of course, is a nonsense in deterministic physics. If any part of the universe is "already" determined, it all is (and by the way quantum "uncertainty" doesn't change this picture in any interesting way). So there is not much difference between controlling the past and controlling the future, except that we don't normally get an opportunity to control the past, due to the usual causal structure of the universe.

In other words, the boxes are "already set up and unchangeable" even if you decide before being scanned by Omega. But you still get to decide whether they are unchangeable in a favourable or unfavourable way.

Replies from: Lumifer
comment by Lumifer · 2014-11-19T03:57:53.471Z · LW(p) · GW(p)

It's common to believe that we have the power to "change" the future but not the past.

That's the free-will debate. Does the "solution" to one-box depend on rejection of free will?

Replies from: nshepperd
comment by nshepperd · 2014-11-19T04:07:48.905Z · LW(p) · GW(p)

Do you believe that objects in the future waver in and out of existence as you deliberate?

(On the free will debate: The common conception of free will is confused. But that doesn't mean our will isn't free, or imply fatalism.)

Replies from: Lumifer
comment by Lumifer · 2014-11-19T04:22:58.951Z · LW(p) · GW(p)

I am aware of the LW (well, EY's, I guess) position on free will. But here we are discussing the Newcomb's Problem. We can leave free will to another time. Still, what about my question?

comment by TheOtherDave · 2014-11-19T04:32:39.838Z · LW(p) · GW(p)

If Omega is a good predictor, he'll predict my decision, but there is nothing I can do about it. I don't make a choice to be a "two-boxer" or a "one-boxer".

Well, if that's true -- that is, if whether you are the sort of person who one-boxes or two-boxes in Newcomblike problems is a fixed property of Lumifer that you can't influence in any way -- then you're right that there's no point to thinking about which choice is best with various different predictors. After all, you can't make a choice about it, so what difference does it make which choice would be better if you could?

Similarly, in most cases, given a choice between accelerating to the ground at 1 G and doing so at 0.01 G once I've fallen off a cliff, I would do better to choose the latter... but once I fall off the cliff, I don't actually have a choice, so that doesn't matter at all.

Many people who consider it useful to think about Newcomblike problems, by contrast, believe that there is something they can do about it... that they do indeed make a choice to be a "two-boxer" or a "one-boxer."

Replies from: Lumifer
comment by Lumifer · 2014-11-19T04:35:31.965Z · LW(p) · GW(p)

whether you are the sort of person who one-boxes or two-boxes in Newcomblike problems is a fixed property of Lumifer that you can't influence in any way

It's not a fixed property, it's undetermined. Go ask a random person whether he one-boxes or two-boxes :-)

Replies from: TheOtherDave
comment by TheOtherDave · 2014-11-19T04:42:55.993Z · LW(p) · GW(p)

Correction accepted. Consider me to have repeated the comment with the word "fixed" removed, if you wish. Or not, if you prefer.

Replies from: Lumifer
comment by Lumifer · 2014-11-19T04:53:09.464Z · LW(p) · GW(p)

I don't anticipate meeting Omega and his two boxes. Therefore I don't find pre-committing to a particular decision in this situation useful.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-11-19T05:40:07.337Z · LW(p) · GW(p)

I'm not sure I understand.

Earlier, you seemed to be saying that you're incapable of making such a choice. Now, you seem to be saying that you don't find it useful to do so, which seems to suggest... though not assert... that you can.

So, just to clarify: on your view, are you capable of precommitting to one-box or two-box? And if so, what do you mean when you say that you can't make a choice to be a "one-boxer" -- how is that different from precommitting to one-box?

Replies from: Lumifer
comment by Lumifer · 2014-11-19T05:47:08.108Z · LW(p) · GW(p)

I, personally, have heard of the Newcomb's Problem so one can argue that I am capable of pre-committing. However, a tiny minority of the world's population have heard of that problem and, as far as I know, the default formulation of the Newcomb's Problem assumes that the subject had no advance warning. Therefore, in the general case there is no pre-commitment and the choice does not exist.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-11-19T06:13:27.976Z · LW(p) · GW(p)

So, I asked:

on your view, are you capable of precommitting to one-box or two-box?

You have answered that "one can argue that" you are capable of it.
Which, well, OK, that's probably true.
One could also argue that you aren't, I imagine.

So... on your view, are you capable of precommitting?

Because earlier you seemed to be saying that you weren't able to.
I think you're now saying that you can (but that other people can't).
But it's very hard to tell.

I can't tell whether you're just being slippery as a rhetorical strategy, or whether I've actually misunderstood you.

That aside: it's not actually clear to me that precommitting to oneboxing is necessary. The predictor doesn't require me to precommit to oneboxing, merely to have some set of properties that results in me oneboxing. Precommitment is a simple example of such a property, but hardly the only possible one.

Replies from: Lumifer
comment by Lumifer · 2014-11-19T06:48:00.729Z · LW(p) · GW(p)

I can precommit, but I don't want to. Other people (in the general case) cannot precommit because they have no idea about the Newcomb's Problem.

The predictor doesn't require me to precommit to oneboxing, merely to have some set of properties that results in me oneboxing.

Sure, but that has nothing to do with my choices.

Replies from: dxu, TheOtherDave
comment by dxu · 2014-11-19T20:25:46.845Z · LW(p) · GW(p)

Sure, but that has nothing to do with my choices.

See, that's where I disagree. If you choose to one-box, even if that choice is made on a whim right before you're required to select a box/boxes, Omega can predict that choice with accuracy. This isn't backward causation; it's simply what happens when you have a very good predictor. The problem with causal decision theory is that it neglects these sorts of acausal logical connections, instead electing to only keep track of causal connections. If Omega can predict you with high-enough accuracy, he can predict choices that you would make given certain information. If you take a random passerby and present them with a formulation of Newcomb's Problem, Omega can analyze that passerby's disposition and predict in advance how that passerby's disposition will affect his/her reaction to that particular formulation of Newcomb's problem, including whether he/she will two-box or one-box. Conscious precommitment is not required; the only requirement is that you make a choice. If you or any other person chooses to one-box, regardless of whether they've previously heard of Newcomb's Problem or made a precommitment, Omega will predict that decision with whatever accuracy we specify. Then the only questions are "How high of an accuracy do we need?", followed by "Can humans reach this desired level of accuracy?" And while I'm hesitant to provide an absolute threshold for the first question, I do not hesitate at all to answer the second question with, "Yes, absolutely." Thus we see that Newcomb-like situations can and do pop up in real life, with merely human predictors.

If there are any particulars you disagree with in the above explanation, please let me know.
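
For what it's worth, the accuracy question has a concrete answer under the standard payoffs ($1,000,000 in box B, $1,000 in box A): one-boxing has the higher expected value whenever the predictor's accuracy exceeds about 50.05%. A quick sketch of the arithmetic (the payoff numbers are the standard ones; the accuracy value is just an example):

    # Expected value of each strategy against a predictor with accuracy p.

    def ev_one_box(p):
        return p * 1000000                  # box B is full iff the prediction was correct

    def ev_two_box(p):
        return 1000 + (1 - p) * 1000000     # $1,000 always, plus box B iff the predictor erred

    p = 0.9  # example accuracy; anything above ~0.5005 already favours one-boxing
    print(ev_one_box(p), ev_two_box(p))     # 900000.0 vs 101000.0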

Replies from: Lumifer
comment by Lumifer · 2014-11-19T20:37:37.696Z · LW(p) · GW(p)

If Omega can predict you with high-enough accuracy, he can predict choices that you would make given certain information.

Sure, I agree, Omega can do that.

However when I get to move, when I have the opportunity to make a choice, Omega is already done with his prediction. Regardless of what his prediction was, the optimal choice for me after Stage 1 is to two-box.

My choice cannot change what's in the boxes -- only Omega can determine what's in the boxes and I have no choice with respect to his prediction.

Replies from: dxu, TheOtherDave
comment by dxu · 2014-11-19T20:55:35.645Z · LW(p) · GW(p)

Well, if you reason that way, you will end up two-boxing. And, of course, Omega will know that you will end up two-boxing. Therefore, he will put nothing in Box B. If, on the other hand, you had chosen to one-box instead, Omega would have known that, too. And he would have put $1000000 in Box B. If you say, "Oh, the contents of the boxes are already fixed, so I'm gonna two-box!", there is not going to be anything in Box B. It doesn't matter what reasoning you use to justify two-boxing, or how elaborate your argument is; if you end up two-boxing, you are going to get $1000 with probability (Omega's-predictive-power)%. Sure, you can say, "The boxes are already filled," but guess what? If you do that, you're not going to get any money. (Well, I mean, you'll get $1000, but you could have gotten $1000000.) Remember, the goal of a rationalist is to win. If you want to win, you will one-box. Period.

Replies from: Lumifer
comment by Lumifer · 2014-11-19T20:58:27.505Z · LW(p) · GW(p)

If, on the other hand, you had chosen to one-box instead

Notice the tense you are using: "had chosen". When did that choice happen? (for a standard participant)

Replies from: dxu
comment by dxu · 2014-11-19T21:02:32.893Z · LW(p) · GW(p)

You chose to two-box in this hypothetical Newcomb's Problem when you said earlier in this thread that you would two-box. Fortunately, since this is a hypothetical, you don't actually gain or lose any utility from answering as you did, but had this been a real-life Newcomb-like situation, you would have. If (I'm actually tempted to say "when", but that discussion can be held another time) you ever encounter a real-life Newcomb-like situation, I strongly recommend you one-box (or whatever the equivalent of one-boxing is in that situation).

Replies from: Lumifer
comment by Lumifer · 2014-11-19T21:04:48.749Z · LW(p) · GW(p)

had this been a real-life Newcomb-like situation

I don't believe real-life Newcomb situations exist or will exist in my future.

I also think that the local usage of "Newcomb-like" is misleading in that it is used to refer to situations which don't have much to do with the classic Newcomb's Problem.

I strongly recommend you one-box

Your recommendation was considered and rejected :-)

Replies from: dxu
comment by dxu · 2014-11-20T00:13:10.446Z · LW(p) · GW(p)

I don't believe real-life Newcomb situations exist or will exist in my future.

It is my understanding that Newcomb-like situations arise whenever you deal with agents who possess predictive capabilities greater than chance. It appears, however, that you do not agree with this statement. If it's not too inconvenient, could you explain why?

Replies from: Lumifer
comment by Lumifer · 2014-11-20T01:08:25.170Z · LW(p) · GW(p)

Can you define what is a "Newcomb-like" situation and how can I distinguish such from a non-Newcomb-like one?

comment by TheOtherDave · 2014-11-19T20:43:24.075Z · LW(p) · GW(p)

when I get to move, when I have the opportunity to make a choice, Omega is already done with his prediction.

You have elsewhere agreed that you (though not everyone) have the ability to make choices that affect Omega's prediction (including, but not limited to, the choice of whether or not to precommit to one-boxing).

That seems incompatible with your claim that all of your relevant choices are made after Omega's prediction.

Have you changed your mind? Have I misunderstood you? Are you making inconsistent claims in different branches of this conversation? Do you not see an inconsistency? Other?

Replies from: Lumifer
comment by Lumifer · 2014-11-19T20:47:24.904Z · LW(p) · GW(p)

Here when I say "I" I mean "a standard participant in the classic Newcomb's Problem". A standard participant has no advance warning.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-11-19T20:57:49.044Z · LW(p) · GW(p)

Ah. OK. And just to be clear: you believe that advance warning is necessary in order to decide whether to one-box or two-box... it simply isn't possible, in the absence of advance warning, to make that choice; rather, in the absence of advance warning humans deterministically two-box. Have I understood that correctly?

Replies from: Lumifer
comment by Lumifer · 2014-11-19T21:02:42.973Z · LW(p) · GW(p)

it simply isn't possible, in the absence of advance warning, to make that choice

Correct.

in the absence of advance warning humans deterministically two-box

Nope. I think two-boxing is the right thing to do but humans are not deterministic; they can (and do) do all kinds of stuff. If you run an empirical test I think it's very likely that some people will two-box and some people will one-box.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-11-19T23:35:22.297Z · LW(p) · GW(p)

Gotcha: they don't have a choice in which they do, on your account, but they might do one or the other. Correction accepted.

Incidentally, for the folks downvoting Lumifer here, I'm curious as to your reasons. I've found many of their earlier comments annoyingly evasive, but now they're actually answering questions clearly. I disagree with those answers, but that's another question altogether.

Replies from: Lumifer
comment by Lumifer · 2014-11-19T23:43:59.835Z · LW(p) · GW(p)

I'm curious as to your reasons

There are a lot of behaviorists here. If someone doesn't see the light, apply electric prods until she does X-)

Replies from: TheOtherDave
comment by TheOtherDave · 2014-11-20T00:10:30.311Z · LW(p) · GW(p)

It would greatly surprise me if anyone here believed that downvoting you will influence your behavior in any positive way.

Replies from: Lumifer
comment by Lumifer · 2014-11-20T02:40:21.113Z · LW(p) · GW(p)

You think it's just mood affiliation, on a rationalist forum? INCONCEIVABLE! :-D

Replies from: TheOtherDave
comment by TheOtherDave · 2014-11-20T03:36:35.745Z · LW(p) · GW(p)

I'm curious: do you actually believe I think that, or are you saying it for some other reason?
Either way: why?

Replies from: Lumifer
comment by Lumifer · 2014-11-20T06:35:15.633Z · LW(p) · GW(p)

A significant part of the time I operate in the ha-ha only serious mode :-)

The grandparent post is a reference to a quote from Princess Bride.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-11-20T06:51:19.942Z · LW(p) · GW(p)

Yes, you do, and I understand the advantages of that mode in terms of being able to say stuff without being held accountable for it.

I find it annoying.

That said, you are of course under no obligation to answer any of my questions.

Replies from: Lumifer
comment by Lumifer · 2014-11-20T15:41:21.670Z · LW(p) · GW(p)

without being held accountable for it.

In which way am I not accountable? I am here, answering questions, not deleting my posts.

Sure, I often prefer to point to something rather than plop down a full specification. I am also rather fond of irony and sarcasm. But that's not exactly the same thing as avoiding accountability, is it?

If you want highly specific answers, ask highly specific questions. If you feel there is ambiguity in the subject, resolve it in the question.

comment by TheOtherDave · 2014-11-19T18:09:09.223Z · LW(p) · GW(p)

OK. Thanks for clarifying your position.

comment by dxu · 2014-11-19T03:20:07.975Z · LW(p) · GW(p)

I don't make a choice to be a "two-boxer" or a "one-boxer".

If you said earlier in this thread that you would two-box, you are a two-boxer. If you said earlier in this thread that you would one-box, you are a one-boxer. If Omega correctly predicts your status as a one-boxer/two-boxer, he will fill Box B with the appropriate amount. Assuming that Omega is a good predictor, his prediction is contingent on your disposition as a one-boxer or a two-boxer. This means you can influence Omega's prediction (and thus the contents of the boxes) simply by choosing to be a one-boxer. If Omega is a good-enough predictor, he will even be able to predict future changes in your state of mind. Therefore, the decision to one-box can and will affect Omega's prediction, even if said decision is made AFTER Omega's prediction.

This is the essence of being a reflectively consistent agent, as opposed to a reflectively inconsistent agent. For an example of an agent that is reflectively inconsistent, see causal decision theory. Let me know if you still have any qualms with this explanation.

Replies from: Lumifer
comment by Lumifer · 2014-11-19T03:26:52.490Z · LW(p) · GW(p)

If you said earlier in this thread that you would two-box, you are a two-boxer. If you said earlier in this thread that you would one-box, you are a one-boxer.

Oh, I can't change my mind? I do that on a regular basis, you know...

This means you can influence Omega's prediction (and thus the contents of the boxes) simply by choosing to be a one-boxer.

This implies that I am aware that I'll face the Newcomb's problem.

Let's do the Newcomb's Problem with a random passer-by picked from the street -- he has no idea what's going to happen to him and has never heard of Omega or the Newcomb's problem before. Omega has to make a prediction and fill the boxes before that passer-by gets any hint that something is going to happen.

So, Step 1 happens, the boxes are set up, and our passer-by is explained the whole game. What should he do? He never chose to be a one-boxer or a two-boxer because he had no idea such things existed. He can only make a choice now and the boxes are done and immutable. Why should he one-box?

Replies from: dxu
comment by dxu · 2014-11-19T03:34:17.180Z · LW(p) · GW(p)

Oh, I can't change my mind? I do that on regular basis, you know...

It seems unlikely to me that you would change your mind about being a one-boxer/two-boxer over the course of a single thread. Nevertheless, if you did so, I apologize for making presuppositions.

So, Step 1 happens, the boxes are set up, and our passer-by is explained the whole game. What should he do? He never chose to be a one-boxer or a two-boxer because he had no idea such things existed. He can only make a choice now and the boxes are done and immutable. Why should he one-box?

As I wrote in my earlier comment:

If Omega is a good-enough predictor, he will even be able to predict future changes in your state of mind. Therefore, the decision to one-box can and will affect Omega's prediction, even if said decision is made AFTER Omega's prediction.

If our hypothetical passerby chooses to one-box, then to Omega, he is a one-boxer. If he chooses to two-box, then to Omega, he is a two-boxer. There's no "not choosing", because if you make a choice about what to do, you are choosing.

Replies from: Lumifer
comment by Lumifer · 2014-11-19T03:43:03.586Z · LW(p) · GW(p)

The only problem is that you have causality going back in time. At the time of Omega's decision the passer-by's state with respect to one- or two-boxing is null, undetermined, does not exist. Omega can scan his brain or whatever and make his prediction, but the passer-by is not bound by that prediction and has not (yet) made any decisions.

The first chance our passer-by gets to make a decision is after the boxes are fixed. His decision (as opposed to his personality, preferences, goals, etc.) cannot affect Omega's prediction because causality can't go backwards in time. So at this point, after step 2, the only time he can make a decision, he should two-box.

Replies from: dxu
comment by dxu · 2014-11-19T03:50:15.134Z · LW(p) · GW(p)

As far as I'm aware, what you're saying is basically the same thing as what causal decision theory says. I hate to pass the buck, but So8res has written a very good post on this already; anything I could say here has already been said by him, and better. If you've read it already, then I apologize; if not, I'd say give it a skim and see what you think of it.

Replies from: Lumifer
comment by Lumifer · 2014-11-19T04:00:35.935Z · LW(p) · GW(p)

As far as I'm aware, what you're saying is basically the same thing as what causal decision theory says.

So8res' post points out that

CDT is the academic standard decision theory. Economics, statistics, and philosophy all assume (or, indeed, define) that rational reasoners use causal decision theory to choose between available actions.

It seems I'm in good company :-)

comment by wedrifid · 2014-11-19T02:49:20.650Z · LW(p) · GW(p)

If Omega is just a skilled predictor, there is no certain outcome so you two-box.

Unless you like money and can multiply, in which case you one box and end up (almost but not quite certainly) richer.

Replies from: Lumifer
comment by Lumifer · 2014-11-19T02:53:19.073Z · LW(p) · GW(p)

Unless you like money and can multiply, in which case you one box

Wat iz zat "multiply" thang u tok abut?

comment by Nornagest · 2014-11-18T21:47:52.521Z · LW(p) · GW(p)

Think of the situation in the last round of an iterated Prisoner's Dilemma with known bounds. Because of the variety of agents you might be dealing with, the payoffs there aren't strictly Newcomblike, but they're closely related; there's a large class of opposing strategies (assuming reasonably bright agents with some level of insight into your behavior, e.g. if you are a software agent and your opponent has access to your source code) which will cooperate if they model you as likely to cooperate (but, perhaps, don't model you as a CooperateBot) and defect otherwise. If you know you're dealing with an agent like that, then defection can be thought of as analogous to two-boxing in Newcomb.
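
A rough sketch of the kind of last-round strategy being described here, for the case where both players are software agents with access to each other's source. The helper names (predicts_cooperation, is_cooperate_bot) are placeholders for whatever modelling the agent actually does; nothing below is from the comment itself:

    # Last-round strategy sketch: cooperate only with opponents modelled as
    # conditional cooperators; defect against CooperateBot and against defectors.

    def last_round_move(opponent_source, predicts_cooperation, is_cooperate_bot):
        if is_cooperate_bot(opponent_source):
            return "defect"       # exploit unconditional cooperators
        if predicts_cooperation(opponent_source):
            return "cooperate"    # mutual cooperation with agents that model us back
        return "defect"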

comment by Vaniver · 2014-11-19T15:07:33.566Z · LW(p) · GW(p)

By philosophy I meant the sort of thing typified by current anglophone philosophy.

You may note several posts ago that I noticed the word 'philosophy' was not useful and tried to substitute it with other, less loaded, terms in order to more effectively communicate my meaning. This is a specific useful technique with multiple subcomponents (noticing that it's necessary, deciding how to separate the concepts, deciding how to communicate the separation), that I've gotten better at because of time spent here.

Yes, comparative perspectives is much more about claims and much less about holism than any individual perspective - but for a person, the point of comparing perspectives is to choose one, whereas for a professional arguer the point of comparing perspectives is to be able to argue more winningly, and so the approaches and paths they take will look rather different.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-11-19T16:15:31.654Z · LW(p) · GW(p)

Professionals are quite capable of passionately backing a particular view. If amateurs are uninterested in arguing - your claim, not mine - that means they are uninterested in truth seeking. People who adopt beliefs they can't defend are adopting beliefs as clothing.

comment by dxu · 2014-11-18T20:17:59.415Z · LW(p) · GW(p)

1 and 2 seem to mostly be objections to the presentation of the material as opposed to the content. Most of these criticisms are ones I agree with, but given the context (the Sequences being "bad amateur philosophy"), they seem largely tangential to the overall point. There are plenty of horrible math books out there; would you use that fact to claim that math itself is flawed?

As for 3 and 4, I note that the link you provided is not an objection per se, but more of an expression of surprise: "What, doesn't everyone know this?" Note also that this comment actually has a reply attached to it, which rather undermines your point that "people on LW don't respond to criticisms". I'm sure you have other examples of objections being ignored, but in my opinion, this one probably wasn't the best example to use if you were trying to make a point.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-11-18T20:31:48.959Z · LW(p) · GW(p)

1 and 2 seem to mostly be objections to the presentation of the material as opposed to the content.

Not in the sense that I don't like the font. Lack of justification or point are serious issues.

There are plenty of horrible math books out there; would you use that fact to claim that math itself is flawed?

EDIT: I have already said that this isn't about what is right or wrong.

I can find out what math is from good books. If the Sequences are putting forward original ideas, I have nowhere else to go. Of course, in many cases, I can't tell whether they are, and the author can't tell me whether his philosophy is new because he doesn't know the old philosophy.

comment by [deleted] · 2015-02-10T01:42:23.317Z · LW(p) · GW(p)

The dichotomy between the Austere and the Empathic metaethicist may well be false. I'd like to see more support for it, and specifically for the implicit claim that a question cannot be coherent unless we fully understand all its terms. Answering that claim may involve asking whether we can refer to something with a term when we do not fully understand what we are referring to (although the answer to that is surely "yes!").

comment by casebash · 2014-03-14T13:01:58.932Z · LW(p) · GW(p)

I think that some basic conceptual analysis can be important for clarifying discussion given that many of these words are used and will continue to be used. For example, it is useful to know that "justified true belief" is a useful first approximation of what is meant by knowledge, but that the situation is actually slightly more complicated than that.

On the other hand, I don't expect that this will work for all concepts. Some concepts are extremely slippery and will lack enough of a shared meaning that we can provide a single definition. In these cases, we can simply point out the key features that these cases tend to have in common.

comment by BobTheBob · 2011-05-22T18:07:48.852Z · LW(p) · GW(p)

Some thoughts on this and related LW discussions. They come a bit late - apols to you and commentators if they've already been addressed or made in the commentary:

1) Definitions (this is a biggie).

There is a fair bit of confusion on LW, it seems to me, about just what definitions are and what their relevance is to philosophical and other discussion. Here's my understanding - please say if you think I've gone wrong.

If in the course of philosophical debate, I explicitly define a familiar term, my aim in doing so is to remove the term from debate - I fix the value of a variable to restrict the problem. It'd be good to find a real example here, but I'm not convinced defining terms happens very often in philosophical or other debate. By way of a contrived example, one might want to consider, in evaluating some theory, the moral implications of actions made under duress (a gun held to the head) but not physically initiated by an external agent (a jostle to the arm). One might say, "Define 'coerced action' to mean any action not physically initiated but made under duress" (or more precise words to that effect). This done, it wouldn't make sense simply to object that my conclusion regarding coerced actions doesn't apply to someone physically pushed from behind - I have stipulated for the sake of argument I'm not talking about such cases. (In this post, you distinguish stipulation and definition - do you have in mind a distinction I'm glossing over?)

Contrast this to the usual case for conceptual analyses, where it's assumed there's a shared concept ('good', 'right', 'possible', 'knows', etc), and what is produced is meant to be a set of necessary and sufficient conditions meant to capture the concept. Such an analysis is not a definition. Regarding such analyses, typically one can point to a particular thing and say, eg, "Our shared concept includes this specimen, it lacks a necessary condition, therefore your analysis is mistaken" - or, maybe "Intuitively, this specimen falls under our concept, it lacks...". Such a response works only if there is broad agreement that the specimen falls under the concept. Usually this works out to be the case.

I haven't read the Jackson book, so please do correct me if you think I've misunderstood, but I take it something like this is his point in the paragraphs you quote. Tom and Jack can define 'right action' to mean whatever they want it to. In so doing, however, we cease to have any reason to think they mean by the term what we intuitively do. Rather, Jackson is observing, what Tom and Jack should be doing is saying that rightness is that thing (whatever exactly it is) which our folk concepts roughly converge on, and taking up the task of refining our understanding from there - no defining involved.

You say,

... Jackson supposes that we can pick out which platitudes of moral discourse matter, and how much they matter, for determining the meaning of moral terms

Well, not quite. The point I take it is rather that there simply are 'folk' platitudes which pick-out the meanings of moral terms - this is the starting point. 'Killing people for fun is wrong', 'Helping elderly ladies across the street is right' etc, etc. These are the data (moral intuitions, as usually understood). If this isn't the case, there isn't even a subject to discuss. Either way, it has nothing to do with definitions.

Confusion about definitions is evident in the quote from the post you link to. To re-quote:

...the first person is speaking as if 'sound' means acoustic vibrations in the air; the second person is speaking as if 'sound' means an auditory experience in a brain. If you ask "Are there acoustic vibrations?" or "Are there auditory experiences?", the answer is at once obvious. And so the argument is really about the definition of the word 'sound'.

Possibly the problem is that 'sound' has two meanings, and the disputants each are failing to see that the other means something different. Definitions are not relevant here, meanings are. (Gratuitous digression: what is "an auditory experience in a brain"? If this means something entirely characterizable in terms of neural events, end of story, then plausibly one of the disputants would say this does not capture what he means by 'sound' - what he means is subjective and ineffable, something neural events aren't. He might go on to wonder whether that subjective, ineffable thing, given that it is apparently created by the supposedly mind-independent event of the falling of a tree, has any existence apart from his self (not to be confused with his brain!). I'm not defending this view, just saying that what's offered is not a response but rather a simple begging of the question against it. End of digression.)

2) In your opening section you produce an example meant to show conceptual analysis is silly. Looks to me more like a silly attempt at an example of conceptual analysis. If you really want to make your case, why not take a real example of a philosophical argument -preferably one widely held in high regard at least by philosophers? There's lots of 'em around.

3) In your section The trouble with conceptual analysis, you finally explain,

The trouble is that philosophers often take this "what we mean by" question so seriously that thousands of pages of debate concern which definition to use... .

As explained above, philosophical discussion is not about "which definition to use" -it's about (roughly, and among other things) clarifying our concepts. The task is difficult but worthwhile because the concepts in question are important but subtle.

Within 20 seconds of arguing about the definition of 'desire', someone will say, "Screw it. Taboo 'desire' so we can argue about facts and anticipations, not definitions."

If you don't have the patience to do philosophy, or you don't think it's of any value, by all means do something else -argue about facts and anticipations, whatever precisely that may involve. Just don't think that in doing this latter thing you'll address the question philosophy is interested in, or that you've said anything at all so far to show philosophy isn't worth doing. In this connection, one of the real benefits of doing philosophy is that it encourages precision and attention to detail in thinking. You say Eliezer Yudkowsky "...advises against reading mainstream philosophy because he thinks it will 'teach very bad habits of thought that will lead people to be unable to do real work.'" The original quote continues, "...assume naturalism! Move on! NEXT!" Unfortunately Eliezer has a bad habit of making unclear and undefended or question-begging assertions, and this is one of them. What are the bad habits, and how does philosophy encourage them? And what precisely is meant by 'naturalism'? To make the latter assertion and simultaneously to eschew the responsibility of articulating what this commits you to is to presume you can both have your cake and eat it too. This may work in blog posts -it wouldn't pass in serious discussion.

(Unlike some on this blog, I have not slavishly pored through Eliezer's every post. If there is somewhere a serious discussion of the meaning of 'naturalism' which shows how the usual problems with normative concepts like 'rational' can successfully be navigated, I will withdraw this remark).

comment by Matt_Simpson · 2011-05-16T17:02:33.355Z · LW(p) · GW(p)

What happened to philosophers like Hume who tried to avoid "mere disputes of words"? Seriously, as much as many 20th century philosophers liked Hume, especially the first book of the Treatise (e.g., the positivists), why didn't they pick up on that?

(I seem to remember some flippant remark making fun of philosophers for these disputes in the Treatise but google finds me nothing)

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2011-05-16T17:57:19.276Z · LW(p) · GW(p)

Getting hung up on the meanings of words is an attractor. Even if your community starts out consciously trying to avoid it, it's very easy to get sucked back in. Here is a likely sequence of steps.

  1. All this talk about words is silly! We care about actually implementing our will in the real world!

  2. Of course, we want to implement our will precisely. We need to know how things are precisely and how we want them to be precisely, so that we can figure out what we should do precisely.

  3. So, we want to formulate all this precise knowledge and to perform precise actions. But we're a community, so we're going to have to communicate all this knowledge and these plans among ourselves. Thus, we're going to need a correspondingly precise language to convey all these precise things to one another.

  4. Okay, so let's get started on that precise language. Take the word A. What, precisely, does it mean? Well, what precisely are the states of affairs such that the word A applies? Wait, what precisely is a "state of affairs"? . . .

And down the rabbit-hole you go.

comment by JanetK · 2011-05-16T10:11:36.704Z · LW(p) · GW(p)

I would like to see some enlargement on the concept of definition. It is usually treated as a simple concept: A means B or C or D; which one depending on Z. But when we try to pin down C for instance, we find that it has a lot of baggage - emotional, framing, stylistic etc. So does B and D. And in no case is the baggage of any of them the same as the baggage of A. None of these - defining terms, tabooing words, or coining new words - really works all that well in the real world, although they of course help. Do you see a way around this fuzziness?

Another 'morally good' definition for your list is 'that which will not make the doer feel guilty or shameful in future'. It is no better than the others but quite different.

Replies from: fubarobfusco
comment by fubarobfusco · 2011-05-16T23:12:49.343Z · LW(p) · GW(p)

Another 'morally good' definition for your list is 'that which will not make the doer feel guilty or shameful in future'. It is no better than the others but quite different.

I don't like this one. It implies that successful suicide is always morally good.

comment by scientism · 2011-05-16T16:30:32.214Z · LW(p) · GW(p)

I don't think you're arguing against conceptual analysis, instead you want to treat a particular conceptual analysis (reductive physicalism) as gospel. What is the claim that there are two definitions of sound that we can confuse, the acoustic vibrations in the air and the auditory experience in a brain, if it's not a reductive conceptual analysis of the concept of sound?

Replies from: lukeprog
comment by lukeprog · 2011-05-16T17:12:26.531Z · LW(p) · GW(p)

Like I said at the beginning:

I won't argue that everything that has ever been called 'conceptual analysis' is misguided. Instead, I'll give examples of common kinds of conceptual analysis that corrupt discussions of morality and other subjects.

comment by Will_Sawin · 2011-05-16T16:15:05.950Z · LW(p) · GW(p)

The definition of "right action" is the kind of action you should do.

You don't need to know what "should" means, you just need to do what you should do and not do what you shouldn't do.

One should be able to cash out arguments about the "definition" of "right" as arguments about the actual nature of shouldness.

Replies from: lukeprog, Vladimir_Nesov
comment by lukeprog · 2011-05-16T17:15:35.494Z · LW(p) · GW(p)

Defining 'right' in terms of 'should' gets us nowhere; it just punts to another symbol. Thus, I don't yet know what you're trying to say in this comment. Could you taboo 'should' for me?

Replies from: Will_Sawin
comment by Will_Sawin · 2011-05-17T01:36:42.552Z · LW(p) · GW(p)

Only through the use of koans. Consider the dialog in:

http://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achilles

Could you explain what "If A, then B" means, tabooing "if/then","therefore",etc.?

Here is another way:

If a rational agent becomes aware that the statement "I should do X" is true, then it will either proceed to do X or proceed to realize that it cannot do X (at least for now).

ETA: Here is a simple Python function (I think I coded it correctly):

    def square(x):
        y = x*x
        return y

"return" is not just another symbol. It is not a gensym. It is functional. The act of returning and producing an output is completely separate from and non-reducible-to everything else that a subroutine can do.

Rational agents use "should" the same way this subroutine uses "return". It controls their output.
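
To make the analogy concrete, here is a minimal sketch (an editorial illustration, not Will_Sawin's code) of an agent whose output is controlled by whatever its "should" computation returns; the decision rule inside should is a deliberately trivial placeholder:

    def should(options):
        return options[0]           # placeholder decision rule; the details don't matter here

    def act(options, do):
        chosen = should(options)    # the returned value is what controls the agent's behaviour
        do(chosen)

    act(["one-box", "two-box"], print)  # -> one-box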

comment by Vladimir_Nesov · 2011-05-16T23:00:12.020Z · LW(p) · GW(p)

You don't need to know what "should" means, you just need to do what you should do and not do what you shouldn't do.

But better understanding of what "should" means helps, although it's true that you should do what you should even if you have no idea what "should" means.

Replies from: Amanojack, Will_Sawin
comment by Amanojack · 2011-05-17T07:28:09.898Z · LW(p) · GW(p)

it's true that you should do what you should even if you have no idea what "should" means.

How do I go about interpreting that statement if I have no idea what "should" means?

Replies from: Vladimir_Nesov, lessdazed, Will_Sawin
comment by Vladimir_Nesov · 2011-05-17T11:48:08.896Z · LW(p) · GW(p)

Use your shouldness-detector, even if it has no user-serviceable parts within. Shouldness-detector is that white sparkly sphere over there.

comment by lessdazed · 2011-05-17T15:59:24.388Z · LW(p) · GW(p)

I think it means something analogous to "you can staple even if you have no idea what "kramdrukker" means". (I don't speak Afrikaans, but that's what a translator program just said is "stapler" in Afrikaans.)

~~~~~~~

I think "should" is a special case of where a "can" sentence gets infected by the sentence's object (because the object is "should") to become a "should" sentence.

"You can hammer the nail." But should I? It's unclear. "You can eat the fish." But should I? It's unclear. "You can do what you should do." But should I? Yes - I definitely should, just because I can. So, "You can do what you should do" is equivalent to"You should do what you should do".

In other words, I interpret the statement by Vladimir to be an instance of what we can generally say about "can" statements, of which "should" happens to be a special case in which there is infection from "should" to "can" such that it is more natural in English to not write "can" at all.

This allows us to go from uncontroversial "can" statements to "should" statements, all without learning Africaans!

This feels like novel reasoning on my part (i.e. the whole "can" being infected bit) as to how Vladimir's statement is true, and I'd appreciate comments or a similarly reasoned source I might be partially remembering and repeating.

Replies from: None
comment by [deleted] · 2011-05-18T01:10:27.525Z · LW(p) · GW(p)

So, "You can do what you should do" is equivalent to"You should do what you should do".

If these are equivalent, then the truth of the second statement should entail the truth of the first. But "You should do what you should do" is ostensibly a tautology, while "You can do what you should do" is not, and could be false.

One out you might want to take is to declare "S should X" only meaningful when ability and circumstance allow S to do X; when "S can X." But then you just have two clear tautologies, and declaring them equivalent is not suggestive of much at all.

Replies from: lessdazed
comment by lessdazed · 2011-05-18T01:49:06.100Z · LW(p) · GW(p)

Decisive points.

As you have shown them to not be equivalent, I would have done better to say:

"You can do what you should do" entails "You should do what you should do".

But if the latter statement is truly a tautology, that obviously doesn't help. If I then add your second edit, that by "should" I mean "provided one is able to", I am at least less wrong...but can my argument avoid being wrong only by being vacuous?

I think so.

comment by Will_Sawin · 2011-05-18T01:45:28.679Z · LW(p) · GW(p)

If you don't know what "should" means, how do you decide what to do?

This is another instance in which you can't argue morality into a rock.

comment by Will_Sawin · 2011-05-17T01:41:27.522Z · LW(p) · GW(p)

If knowing what "should" means helped something, then knowledge of a definition could lead to real actionable information. This seems, on the face of it, absurd.

I think either:

"XYZ things are things that maximize utility"

or:

"XYZ things are things that you should do"

can count as a definition of XYZ, but not both, just as:

"ABC things are red things"

or

"ABC things are round things"

can count as a definition of ABC things, but not both. (Since if you knew both, then you would learn that red things are round and round things are red.)

comment by Dan_Moore · 2011-05-17T12:47:47.774Z · LW(p) · GW(p)

I was under the impression that the example of an unobserved tree falling in the woods is taken as a naturalized version of Schrodinger's Cat experiment. So the question of whether it makes a sound is not necessarily about the definition of a sound.

Replies from: lukeprog
comment by lukeprog · 2011-05-17T13:21:41.245Z · LW(p) · GW(p)

Nope.

Replies from: Dan_Moore
comment by Dan_Moore · 2011-05-17T18:27:25.049Z · LW(p) · GW(p)

The Wikipedia article you linked has a See Also: Schrodinger's Cat link.

comment by TimFreeman · 2011-05-16T15:58:19.496Z · LW(p) · GW(p)

Empathic Metaethics is hard, but it's what needs to be done to answer Alex's question, and it's what needs to be done to build a Friendly AI.

You're missing a possible path forward here. Perhaps we aren't the ones that need to do it. If we can implement empathy, we can get the Friendly AI to do it.

Replies from: Oscar_Cunningham, drethelin
comment by Oscar_Cunningham · 2011-05-16T16:53:27.089Z · LW(p) · GW(p)

Downvoter here. Is there a custom of always explaining downvotes? Should there be one?

I downvoted because it was a post about AI (yawn), and in particular a stupid one. But looking at it again I see that it may not be as stupid as I thought; downvote revoked.

Replies from: Tyrrell_McAllister, CuSithBell, wedrifid, lessdazed, Will_Sawin
comment by Tyrrell_McAllister · 2011-05-16T17:43:30.410Z · LW(p) · GW(p)

Downvoter here. Is there a custom of always explaining downvotes? Should there be one?

No and no. However, it's usually good when downvoted commenters learn why they got downvoted.

Replies from: lessdazed
comment by lessdazed · 2011-05-17T16:03:18.086Z · LW(p) · GW(p)

The most interesting comments are left by downvoters.

"Downvoters leave the most interesting comments", my original formulation, is false in one of its natural interpretations.

Upvoted ;-)

comment by CuSithBell · 2011-05-16T17:19:41.737Z · LW(p) · GW(p)

Oftentimes the reason for a downvote may be nonobvious (for example, if there are multiple potential points of contention in a single comment). If you wish to indicate disapproval of one thing in particular, or draw the commenter's attention to a particular error you expect they will desire to correct, or something along those lines, it can be a good idea to explain your reason for dissent.

Replies from: lessdazed
comment by lessdazed · 2011-05-17T16:13:28.591Z · LW(p) · GW(p)

One thing I haven't heard others appreciate about the strictly dumb comment-voting system of voting in one of two directions is that it leaves the person voted on with a certain valuable thought just within reach.

That thought is: "there are many reasons people downvote, each has his or her own criteria at different times. Some for substantive disagreement, others for tone, some because they felt their time wasted in reading it, others because they thought others would waste their time reading it, some for failing to meet the usual standard of the author, some for being inferior to a nearby but lesser ranked comment, etc."

People have a hard enough time understanding that as it is. Introduce sophistication into the voting system, and far fewer will take it to heart, as it will be much less obvious.

Replies from: CuSithBell
comment by CuSithBell · 2011-05-17T18:20:25.171Z · LW(p) · GW(p)

Intriguing. Starting from that thought it can be frustrating not to know which of those things is the case (and thus: what, if any, corrective action might be in order). I hadn't really thought about how alternate voting systems might obscure the thought itself. I'd think that votes + optional explanations would highlight the fact that there could be any number of explanations for a downvote...

Do we have any good anecdotes on this?

comment by wedrifid · 2011-05-16T17:14:54.434Z · LW(p) · GW(p)

Downvoter here. Is there a custom of always explaining downvotes? Should there be one?

No! I don't have enough time to write comments for all the times I downvote. And I'd rather not read pages and pages of "downvoted because something you said in a different thread offended me" every week or two.

Just click and go. If you wish to also verbalize disapproval, then by all means put words to the specific nature of your contempt, ire or disinterest.

Replies from: Swimmer963, None
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-05-18T01:31:42.744Z · LW(p) · GW(p)

downvoted because something you said in a different thread offended me.

I'm somewhat upset and disappointed that adults would do this. It seems like a very kindergartener thing. Would you go around upvoting all of a user's comments because you liked one? I wouldn't, and I have a tendency to upvote more than I downvote. Why downvote a perfectly good, reasonable comment just because another comment by the same user wasn't as appealing to you?

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2011-05-18T14:26:47.937Z · LW(p) · GW(p)

Why downvote a perfectly good, reasonable comment just because another comment by the same user wasn't as appealing to you?

I don't think that wedrifid was saying that he does this. (I'm not sure that you were reading him that way.) I think that he just expects that, if explaining downvotes were the norm, then he would read a comment every week or so saying, "downvoted because something you said in a different thread offended me".

Replies from: Swimmer963, wedrifid
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-05-19T01:42:01.473Z · LW(p) · GW(p)

I didn't interpret the comment as meaning that wedrifid would downvote according to this policy, or that he advocated it. It's probably true that there are people who do. That just makes me sad.

comment by wedrifid · 2011-05-19T02:01:50.404Z · LW(p) · GW(p)

I think that he just expects that, if explaining downvotes were the norm, then he would read a comment every week or so saying, "downvoted because something you said in a different thread offended me".

Yes, although not so much 'a comment every week or so' as 'a page or two every week or so'.

comment by [deleted] · 2011-05-18T01:21:01.666Z · LW(p) · GW(p)

then by all means put words to the specific nature of your contempt, ire or disinterest.

I do very much hope LWers can occasionally disagree with an idea, and downvote it, without feeling contempt or ire. If not, we need to have a higher proportion of social skill and emotional intelligence posts.

Replies from: wedrifid
comment by wedrifid · 2011-05-18T06:13:04.425Z · LW(p) · GW(p)

I do very much hope LWers can occasionally disagree with an idea, and downvote it, without feeling contempt or ire.

It's a good thing I included even mere disinterest in the list of options. You could add 'disagreement' too - although some people object to downvoting just because you disagree.

comment by lessdazed · 2011-05-17T16:07:27.581Z · LW(p) · GW(p)

It seems to me that framing the question of a (possible) social custom in terms of whether there should be a rule that covers all situations is a debate tactic: it undermines support for customs that resemble, but are weaker than, the all-encompassing one used in the framing.

The answer to whether there should be a custom that always applies is pretty much always going to be no, which doesn't tell us anything about similar customs (like usually or often explaining downvotes), even though it seems like it does.

comment by Will_Sawin · 2011-05-18T01:46:28.106Z · LW(p) · GW(p)

There is a custom of often explaining downvotes, and there should be one of doing so more frequently.

Replies from: steven0461, wedrifid
comment by steven0461 · 2011-05-18T19:26:58.965Z · LW(p) · GW(p)

Most of the time when I vote something down, I would not try calling the person out if the same comment were made in an ordinary conversation. Explaining a downvote feels like calling someone out, and if I explained my downvotes a lot, I'd feel like I was being aggressive. Now, it's possible that unexplained downvotes feel equally aggressive. But really, all a downvote should mean is that someone did the site a disservice equal in size to the positive contribution represented by a mere one upvote.

Replies from: Will_Sawin, CuSithBell, komponisto
comment by Will_Sawin · 2011-05-18T20:30:54.827Z · LW(p) · GW(p)

I mostly find unexplained downvotes aggressive because it's frustrating: I made some kind of mistake, but no one wants to explain it to me so that I can do better next time.

Replies from: steven0461
comment by steven0461 · 2011-05-21T22:25:33.175Z · LW(p) · GW(p)

It's not that often that mistakes are unambiguous and uncontroversial once pointed out. A lot of the time, the question isn't "do I want to point out his mistake so he can do better next time", but "do I want to commit to having a probably fruitless debate about this".

Replies from: Will_Sawin
comment by Will_Sawin · 2011-05-21T23:33:19.560Z · LW(p) · GW(p)

Do you think that every time a mistake would, in fact, be unambiguous and uncontroversial, it should be pointed out?

If so, do you think more downvotes should be explained?

From my experience it seems like a yes to the first implies a yes to the second.

Replies from: steven0461
comment by steven0461 · 2011-05-21T23:58:02.082Z · LW(p) · GW(p)

I think this site is already extremely good at calling out unambiguous and uncontroversial mistakes.

comment by CuSithBell · 2011-05-18T20:16:27.317Z · LW(p) · GW(p)

But really, all a downvote should mean is that someone did the site a disservice equal in size to the positive contribution represented by a mere one upvote.

I don't understand this interpretation of down/upvotes. Is it normative? Intentionally objective rather than subjective? Is this advice to downvoters or the downvoted? Could you please clarify?

comment by komponisto · 2011-05-18T19:56:00.845Z · LW(p) · GW(p)

Explaining a downvote feels like calling someone out, and if I explained my downvotes a lot, I'd feel like I was being aggressive. Now, it's possible that unexplained downvotes feel equally aggressive

To me they feel more aggressive, since they imply that the person doesn't have enough status to deserve an explanation from the downvoter.

An equivalent behavior in real-life interaction would be saying something like "you fail", followed by rudely ignoring the person when they attempted to follow up.

Replies from: steven0461
comment by steven0461 · 2011-05-21T22:17:02.824Z · LW(p) · GW(p)

Not sure the status implication is accurate. When I vote down someone high-status, I don't feel any particular compulsion to explain myself. If anything, it makes me anticipate that I'm unlikely to change anyone's mind.

I think a much closer analogy than saying "you fail" is frowning.

Would you prefer that I posted a lot of comments starting with "I voted this down because", or that I didn't vote on comments I think detract from the site?

comment by wedrifid · 2011-05-19T01:58:41.905Z · LW(p) · GW(p)

There is a custom of often explaining downvotes, and there should be one of doing so more frequently.

I prefer not having downvotes explained. It is irritating when the justification is a bad one and on average results in me having less respect for the downvoter.

and there should be one of doing so more frequently.

I reject your normative assertion but respect your personal preference to have downvotes explained to you. I will honour your preference and explain downvotes of your comments while at the same time countering the (alleged) norm of often explaining downvotes.

In this instance I downvoted the parent from 1 to 0. This is my universal policy whenever someone projects a 'should' (of the normative kind) onto others that I don't agree with strongly. I would prefer that kind of thing to happen less frequently.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-05-19T02:54:38.774Z · LW(p) · GW(p)

About what fraction of downvotes have bad justifications? Is this a serious problem (measured on the level of importance of the karma system)? Is there anything that can be done about it?

I was certainly not aware of this problem.

My assertion of a norm was based on the idea that downvotes on lesswrong are often, though usually not, explained, and that deviating from this fraction would bring, on average, less respect from the community, thus constituting a norm. I think the definitions of "often" and "norm" are general enough to make this statement true.

Replies from: ewjordan, wedrifid
comment by ewjordan · 2011-05-19T18:42:51.211Z · LW(p) · GW(p)

Is there anything that can be done about it?

I don't know how much of a problem it is, but there's definitely something that can be done about it: instead of a "dumb" karma count, use some variant of Pagerank on the vote graph.

In other words, every person is a node, every upvote that each person gets from another user is a directed edge (also signed to incorporate downvotes), every person starts with a base amount of karma, and then you iteratively update the user karma by weighting each inbound vote by the karma of the voter.

When I say "variant of Pagerank", I mean that you'd probably also have to fudge some things in there as well for practical reasons, like weighting votes by time to keep up with an evolving community, adding a bias so that a few top people don't completely control the karma graph, tuning the base karma that people receive based on length of membership and/or number of posts, weighting submissions separately from comments, avoiding "black hat SEO" tricks, etc. You know, all those nasty things that make Google a lot more than "just" Pagerank at web scale...

IMO doing something like this would improve most high traffic comment systems and online communities substantially (Hacker News could desperately use something like that to slow its slide into Reddit territory, for instance), though it would severely de-democratize them; somehow I doubt people around here would have much of a problem with that, though. The real barrier is that it would be a major pain in the ass to actually implement, and would take several iterations to really get right. It also might be difficult to retrofit an existing voting system with anything like that because sometimes they don't store the actual votes, but just keep a tally, so it would take a while to see if it actually helped at all (you couldn't backtest on the existing database to tune the parameters properly).
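For concreteness, here is a minimal sketch of what the core loop of such a "variant of Pagerank" over a vote graph might look like. The details are assumptions made purely for illustration (the base karma, damping factor, dilution by a voter's out-degree, and ignoring negative-karma voters are choices made for this sketch, not anything ewjordan specified), and all of the practical fudges mentioned above are left out:

```python
from collections import Counter
from typing import Dict, List, Tuple

Vote = Tuple[str, str, int]  # (voter, target, +1 or -1)

def karma_rank(users: List[str], votes: List[Vote],
               base: float = 1.0, damping: float = 0.85,
               iterations: int = 50) -> Dict[str, float]:
    """Iteratively recompute karma, weighting inbound votes by voter karma."""
    out_degree = Counter(voter for voter, _, _ in votes)
    karma = {u: base for u in users}
    for _ in range(iterations):
        new_karma = {}
        for u in users:
            inbound = sum(
                sign * max(karma[voter], 0.0) / out_degree[voter]
                for voter, target, sign in votes
                if target == u
            )
            # Damping keeps everyone anchored to the base amount of karma.
            new_karma[u] = (1.0 - damping) * base + damping * inbound
        karma = new_karma
    return karma

# Tiny example: bob gets two upvotes, carol gets one downvote from bob.
votes = [("alice", "bob", +1), ("carol", "bob", +1), ("bob", "carol", -1)]
print(karma_rank(["alice", "bob", "carol"], votes))
```

Because each user's score depends on the scores of whoever voted for them, the result tracks voter reputation rather than raw vote counts, which is the intended difference from a "dumb" karma tally.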

Replies from: Will_Sawin
comment by Will_Sawin · 2011-05-19T21:05:33.874Z · LW(p) · GW(p)

I think they do store the votes because otherwise you'd be able to upvote something twice.

However, my understanding is that changing lesswrong, even something as basic as which posts are displayed on the front page, is difficult, so it makes sense that they haven't implemented this.

comment by wedrifid · 2011-05-19T03:37:39.072Z · LW(p) · GW(p)

About what fraction of downvotes have bad justifications? Is this a serious problem (measured on the level of importance of the karma system)? Is there anything that can be done about it?

It's just karma. Not a big deal.

My assertion of a norm was based on the idea that downvotes on lesswrong are often explained but usually not explained, and deviating from this fraction would bring, on average, less respect from the community, thus constituting a norm. I think the definitions of "often" and "norm" are general enough to make this statement true.

I was responding to "and there should be one of doing so more frequently". If you declare that the community should adopt a behaviour and I don't share your preference about the behaviour in question then I will downvote the assertion. Because I obviously prefer that people don't tell others to do things that I don't want others to be doing. In fact there is a fairly high bar on what 'should be a norm' claims I don't downvote. All else being equal I prefer people don't assert norms.

comment by drethelin · 2011-05-16T16:48:18.604Z · LW(p) · GW(p)

How can you possibly create an AI that reasons morally the way you want it to unless you can describe how that moral reasoning works?

Replies from: TimFreeman
comment by TimFreeman · 2011-05-16T17:00:53.693Z · LW(p) · GW(p)

How can you possibly create an AI that reasons morally the way you want it to unless you can describe how that moral reasoning works?

People want stuff. I suspect there is no simple description of what people want. The AI can infer what people want from their behavior (using the aforementioned automated empathy), take the average, and that's the AI's utility function.

If there is no simple description of what people want, a bunch of people debating the structure of this non-simple thing on a web site isn't going to give clarity on the issue.
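For what it's worth, the two-step structure being described (infer something utility-like per person from behavior, then average across people) can be sketched in toy form. The action set and the crude frequency-based inference rule below are illustrative assumptions only, standing in for the much harder inference problem:

```python
from collections import Counter
from typing import Dict, List

# Assumed toy action space; purely illustrative.
OPTIONS = ["rest", "work", "help_others"]

def infer_utilities(observed_choices: List[str]) -> Dict[str, float]:
    """Crude 'revealed preference': utility proportional to choice frequency."""
    counts = Counter(observed_choices)
    total = sum(counts.values()) or 1
    return {opt: counts[opt] / total for opt in OPTIONS}

def average_utilities(per_person: List[Dict[str, float]]) -> Dict[str, float]:
    """The 'take the average' step: mean inferred utility per option."""
    return {opt: sum(u[opt] for u in per_person) / len(per_person) for opt in OPTIONS}

# Two people's observed behavior; the averaged result plays the role of the AI's utility function.
people = [["work", "work", "rest"], ["help_others", "rest", "help_others"]]
print(average_utilities([infer_utilities(p) for p in people]))
```

The hard part, of course, is the per-person inference step, which this sketch waves away.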

ETA:

Then we can tell you what the right thing to do is, and even help bring your feelings into alignment with that truth - as you go on to help save the world rather than being filled with pointless existential angst that the universe is made of math.

Hoping to change people's feelings as part of an FAI implementation is steering toward failure. You'll have to make the FAI based on the assumption that the vast majority of people won't be persuaded by anything you say, unless you've had a lot more success persuading people than I have.

Replies from: Oscar_Cunningham, lessdazed
comment by Oscar_Cunningham · 2011-05-16T17:33:34.664Z · LW(p) · GW(p)

a bunch of people debating the structure of this non-simple thing on a web site

Downvoted for unnecessary status manoeuvring against the rest of LessWrong. Why should the location of discussion affect its value? Especially since the issue isn't even one where people need to be motivated to act, but simply one that requires clear-headed thought.

Replies from: lessdazed
comment by lessdazed · 2011-05-17T16:27:28.062Z · LW(p) · GW(p)

Why should the location of discussion affect its value?

Because the anonymity of the internet causes discussions to derail into aggressive posturing as many social restraints are absent. Also because much communication is nonverbal. Also because the internet presents a low barrier to entry into the conversation.

Mostly, a communication has value separate from where it is posted (although the message is not independent of the messenger; e.g., since the advent of the internet, scholarly articles often influence their field while being read by relevant peers during the editing stages, and then go unread in their final draft form). But all else equal, knowing where a conversation is taking place helps one guess at its value. So you are mostly right.

Recently, I heard a novel anti-singularity argument: "...we have never witnessed a greater intelligence, therefore we have no evidence that one’s existence is possible." Not that intelligence isn't very useful (a common but weak argument), but that one can't extrapolate beyond the smartest human ever and consider it likely that a slightly greater level of intelligence is possible. Talk about low barriers to entry into the conversation! This community is fortunately good at policing itself.

Now if only I could find an example of unnecessary status manoeuvring ;-).

comment by lessdazed · 2011-05-17T16:33:31.109Z · LW(p) · GW(p)

Hoping to change people's feelings as part of an FAI implementation is steering toward failure.

I didn't read this post as having direct implications for FAI convincing people of things. I think that for posts in which the FAI connection is tenuous, LW is best served by discussing rationality without it, so as to appeal to a wider audience.

I'm still intrigued by how the original post might be relevant for FAI in a way that I'm not seeing. Is there anything beyond "here is how to shape the actions of an inquirer; P.S. an FAI could do it better than you can"? Because that postscript could go in lots of places, so pointing out that it would fit here doesn't tell me much.

Replies from: TimFreeman
comment by TimFreeman · 2011-05-17T17:13:03.631Z · LW(p) · GW(p)

I'm still intrigued by how the original post might be relevant for FAI in a way that I'm not seeing.

I didn't quite understand what you said you were seeing, but I'll try to describe the relevance.

The normal case is that people talk about moral philosophy with a fairly relaxed emotional tone, from the point of view of "it would be nice if people did such-and-such, they usually don't, nobody's listening to us, and therefore this conversation doesn't matter much". If you're thinking of making an FAI, the emotional tone is different, because the point of view is "we're going to implement this, and we have to get it right, because if it's wrong the AI will go nuts and we're all going to DIE!!!" But then you try to sound nice and calm anyway, because accurately reflecting the underlying emotions doesn't help, not to mention being low-status.

I think most talk about morality on this website is from the more tense point of view above. Otherwise, I wouldn't bother with it, and I think many of the other people here wouldn't either. A minority might think it's an armchair philosophy sort of thing.

The problem with these discussions is that you have to know the design of the FAI is correct, so that design has to be as simple as possible. If we come up with some detailed understanding of human morality and program it into the FAI, that's no good -- we'll never know it's right. So IMO you need to delegate the work of forming a model of what people want to the FAI and focus on how to get the FAI to correctly build that model, which is simpler.

However, if lukeprog has some simple insight, it might be useful in this context. I'm expectantly waiting for his next post on this issue.

Replies from: lessdazed
comment by lessdazed · 2011-05-17T17:47:21.047Z · LW(p) · GW(p)

The part that got my attention was: "You'll have to make the FAI based on the assumption that the vast majority of people won't be persuaded by anything you say."

Some people will be persuaded, and some won't be, and the AI has to be able to tell them apart reliably regardless; so I don't see assumptions about majorities coming into play. Instead, they seem like an unnecessary complication once you grant the AI a certain amount of insight into individuals, which is assumed as the basis for the AI being relevant.

I.e., if it (we) has (have) to make assumptions for lack of understanding about individuals, the game is up anyway. So we still approach the issue from the standpoint of individuals (such as us) influencing other individuals, because an FAI doesn't need separate group parameters, and because it doesn't, it isn't an obviously relevantly different scenario than anything else we can do and it can theoretically do better.

comment by CharlesR · 2011-05-16T14:20:00.529Z · LW(p) · GW(p)

The physicists have a clear definition of what sound is. So why can't we just say Barry is confused?

Replies from: ArisKatsaris, Oscar_Cunningham
comment by ArisKatsaris · 2011-05-16T18:45:53.896Z · LW(p) · GW(p)

You don't get to call people confused just because they use a different definition than the one you prefer. You may say that they speak a different language than you do, but they're not confused in regard to their own minds, or as to how their words map onto a territory.

Downvoted for a very basic map-territory confusion.

Replies from: CharlesR
comment by CharlesR · 2011-05-16T19:23:20.074Z · LW(p) · GW(p)

I'm okay with being wrong. It's why I ask the question.

comment by Oscar_Cunningham · 2011-05-16T14:29:03.015Z · LW(p) · GW(p)

Sound is a mechanical wave that is an oscillation of pressure transmitted through a solid, liquid, or gas, composed of frequencies within the range of hearing and of a level sufficiently strong to be heard, or the sensation stimulated in organs of hearing by such vibrations.

Replies from: CharlesR, CharlesR
comment by CharlesR · 2011-05-16T14:53:23.643Z · LW(p) · GW(p)

I endorse that first bit.

comment by CharlesR · 2011-05-16T14:59:07.311Z · LW(p) · GW(p)

I endorse the first part.

Replies from: gimpf, CuSithBell, JohnD
comment by gimpf · 2011-05-16T19:00:44.126Z · LW(p) · GW(p)

There is nothing to "endorse". The same English word can mean two different things. Both are valid things to talk about, depending on context.

Replies from: CharlesR
comment by CharlesR · 2011-05-16T19:22:12.855Z · LW(p) · GW(p)

If I were to say, "Evolution is the idea that men are descended from chimpanzees," would you let me have my definition or would you say I was confused?

Replies from: gimpf, nhamann, ArisKatsaris
comment by gimpf · 2011-05-16T19:37:11.374Z · LW(p) · GW(p)

edit: No, not confused, but wrong.

If you want to say that "Evolution is the idea that men are descended from chimpanzees" is a definition, it is simply wrong, except within a Creationist circle, where such straw men may be used. We are then in "you can not arbitrarily define words" land. If I am not mistaken, the appropriate Sequence post is linked in the post.

Being confused about something and being wrong about something are two different things. Saying that a falling tree does not generate vibrations in the air is wrong; discussing whether it makes a sound without recognizing that you want to talk about vibrations is confused.

comment by nhamann · 2011-05-16T19:57:48.739Z · LW(p) · GW(p)

Have you read A Human's Guide to Words? You seem to be confused about how words work.

Replies from: CharlesR
comment by CharlesR · 2011-05-16T20:50:35.886Z · LW(p) · GW(p)

I haven't read the entire sequence but have studied some of the entries. I've had this question--is it right to call it a confusion?--ever since I read Taboo Your Words but didn't ask about it until now.

comment by ArisKatsaris · 2011-05-16T19:27:14.062Z · LW(p) · GW(p)

Neither; I would say that you were either horribly mistaken or deliberately misconstruing (lying about) what other people meant when they talked about evolution. It would become a lie for certain the second time you said it.

Replies from: CharlesR
comment by CharlesR · 2011-05-16T19:39:49.798Z · LW(p) · GW(p)

Wow. I had to go to the dictionary because I thought I might be using "confuse" incorrectly. I mean definition 3 of the New Oxford American.

confuse: identify wrongly, mistake: a lot of people confuse a stroke with a heart attack | purchasers might confuse the two products.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-05-16T20:00:23.514Z · LW(p) · GW(p)

You didn't use the verb "confuse with"; you used the word "confused" as an adjective, which has a slightly different meaning. Why didn't you go look "confused" up? I'm increasing my probability estimate that you're being deliberately disingenuous here.

But even if you were just mistaken about typical usage, not intentionally disingenuous, it would have been better still if you tried to understand the meaning I'm trying to communicate to you instead of debating the definitions.

Replies from: lessdazed, CharlesR
comment by lessdazed · 2011-05-17T16:51:14.669Z · LW(p) · GW(p)

I'm increasing the probability estimate you're being deliberately disingenuous here.

I'm not getting that vibe at all.

Is the problem which part of speech is being used, or is it whether or not the verb is being used reflexively?

"I fed my kitten." This sentence is ambiguous. "I fed my kitten tuna." "I fed my kitten to a mountain lion."

One can feed a kitten an item (the "reflexive" reading), or one can feed the kitten to an animal.

The adjective is derived from the non-reflexive verb in this case, but can't both the verb and the adjective hold both meanings, depending on whether or not context makes them reflexive?

Other languages routinely mark the difference between reflexive and non-reflexive verbs.

comment by CharlesR · 2011-05-16T20:18:21.831Z · LW(p) · GW(p)

I'm going to grant that my use of "confused" was mistaken and just rephrase: Physicists have a clear theory of sound. So why can't we just say Barry is wrong?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-05-16T20:54:42.529Z · LW(p) · GW(p)

He'd be wrong if he was talking about what physicists talk about when they refer to sound. He'd not be wrong if he was talking about what lots of other people talk about when they refer to "sound".

"Sound" is a word that in our language circumscribes two different categories of phenomena -- the acoustic vibration (that doesn't require a listener), and the qualia of the sense of hearing (that does require a listener). In the circumstances of the English language the two meanings use the same word. That doesn't necessitate for one meaning to be valid and the other meaning to be invalid. They're both valid, they're just different.

If I say "you have the right to bear arms" I mean a different thing with the words 'arms' than if I say "human arms are longer than monkey arms", but that doesn't make one meaning of the words 'arms' wrong and the other right.

Replies from: lessdazed
comment by lessdazed · 2011-05-17T16:53:38.177Z · LW(p) · GW(p)

The analogy I've always appreciated was that my map has one pixel for both my apartment and my neighbor's. So why do they get mad when I go through the window and shower there? It's mine too, just look at the map, sheesh!

comment by CuSithBell · 2011-05-16T19:23:34.138Z · LW(p) · GW(p)

Where do definitions come from?

Replies from: Amanojack
comment by Amanojack · 2011-05-17T06:28:53.737Z · LW(p) · GW(p)

Usage. Dave interprets a sign from Jenny as referring to something, then he tries using the same sign to refer to the same thing, and if that usage of the sign is easily understood it tends to spread like that. The dictionary definition just records the common usages that have developed in the population.

For instance, how does the alien know what Takahiro means when he extends his index finger toward earth in this Japanese commercial? The alien just assumes it means he can find more chocolate bars on planet earth. If the alien gets to earth and finds more chocolate, he(?) is probably going to decide that his interpretation of the sign is at least somewhat reliable, and update for future interactions with humans.

Replies from: CuSithBell
comment by CuSithBell · 2011-05-17T13:56:37.491Z · LW(p) · GW(p)

I'd agree that's generally how it works. I apologize; I probably should have said something like "Where do you think definitions come from?" I was trying to figure out CharlesR's thought process re: physicalists, above.

Replies from: CharlesR
comment by CharlesR · 2011-05-17T16:53:46.204Z · LW(p) · GW(p)

My problem with Barry was that he wants to include the words "perception of" in his definition of sound, but has different rules when talking about light.

That was yesterday. I've updated. I'll write more when I've had a chance to clarify my thoughts.

Replies from: CuSithBell
comment by CuSithBell · 2011-05-17T18:03:07.541Z · LW(p) · GW(p)

Okay, thanks!

comment by JohnD · 2011-05-16T16:37:00.477Z · LW(p) · GW(p)

How could you endorse the first part without endorsing the second part? Doesn't the first part already include the second part?

After all, it says "within the range of hearing and of a level sufficiently strong to be heard". What could that mean if not "sufficient to generate the sensation stimulated in organs of hearing by such vibrations"?

Replies from: CharlesR
comment by CharlesR · 2011-05-16T18:15:30.631Z · LW(p) · GW(p)

This is the part I endorse.

"Sound is a mechanical wave that is an oscillation of pressure transmitted through a solid, liquid, or gas."

It does not require the presence of a listener. Nor need it be in a certain range of frequencies. (That would just be a sound you cannot hear.)

What I am saying is, when Barry replies as he does, why don't we just say, "You are confused about what is and is not sound. Go ask the physicists, 'What is sound?' and then we can continue this conversation, or if you don't want to bother, you can take my word for it."

When physicists have a consensus view of a phenomenon, we shouldn't argue over definitions. We should use their definitions, provisionally, of course.

No one thinks it makes sense to argue over what is or is not an atom. I don't see why 'sound' should be in a different category.

Replies from: None
comment by [deleted] · 2011-05-16T18:55:21.949Z · LW(p) · GW(p)

I would need more detail to evaluate the modified scenario. As it stands, what I wrote seems trivially to survive the new challenge.