Morality Isn't Logical

post by Wei Dai (Wei_Dai) · 2012-12-26T23:08:09.419Z · LW · GW · Legacy · 86 comments

What do I mean by "morality isn't logical"? I mean in the same sense that mathematics is logical but literary criticism isn't: the "reasoning" we use to think about morality doesn't resemble logical reasoning. All systems of logic that I'm aware of have a concept of proof and a method of verifying with a high degree of certainty whether an argument constitutes a proof. As long as the logic is consistent (and we have good reason to think that many of them are), once we verify a proof we can accept its conclusion without worrying that there may be another proof that establishes the opposite conclusion. With morality though, we have no such method, and people routinely make moral arguments that can be reversed or called into question by other moral arguments. (Edit: For an example of this, see these posts.)

Without being a system of logic, moral philosophical reasoning likely (or at least plausibly) doesn't have any of the nice properties that a well-constructed system of logic would have, for example, consistency, validity, soundness, or even the more basic property that considering arguments in a different order, or in a different mood, won't cause a person to accept an entirely different set of conclusions. For all we know, somebody trying to reason about a moral concept like "fairness" may just be taking a random walk as they move from one conclusion to another based on moral arguments they encounter or think up.

In a recent post, Eliezer said "morality is logic", by which he seems to mean... well, I'm still not exactly sure what, but one interpretation is that a person's cognition about morality can be described as an algorithm, and that algorithm can be studied using logical reasoning. (Which of course is true, but in that sense both math and literary criticism as well as every other subject of human study would be logic.) In any case, I don't think Eliezer is explicitly claiming that an algorithm-for-thinking-about-morality constitutes an algorithm-for-doing-logic, but I worry that the characterization of "morality is logic" may cause some connotations of "logic" to be inappropriately sneaked into "morality". For example, Eliezer seems to (at least at one point) assume that considering moral arguments in a different order won't cause a human to accept an entirely different set of conclusions, and maybe this sneaking of connotations is why. To fight this potential sneaking of connotations, I suggest that when you see the phrase "morality is logic", remind yourself that morality isn't logical.


86 comments

Comments sorted by top scores.

comment by Rob Bensinger (RobbBB) · 2012-12-27T20:20:42.132Z · LW(p) · GW(p)

Taboo both "morality" and "logical" and you may find that you and Eliezer have no disagreement.

LessWrongers routinely disagree on what is meant by "morality". If you think "morality" is ambiguous, then stipulate a meaning ('morality₁ is...') and carry on. If you think people's disagreement about the content of "morality" makes it gibberish, then denying that there are moral truths, or that those truths are "logical," will equally be gibberish. Eliezer's general practice is to reason carefully but informally with something in the neighborhood of our colloquial meanings of terms, when it's clear that we could stipulate a precise definition that adequately approximates what most people mean. Words like 'dog' and 'country' and 'number' and 'curry' and 'fairness' are fuzzy (if not outright ambiguous) in natural language, but we can construct more rigorous definitions that aren't completely semantically alien.

Surprisingly, we seem to be even less clear about what is meant by "logic". A logic, simply put, is a set of explicit rules for generating lines in a proof. And "logic," as a human practice, is the use and creation of such rules. But people informally speak of things as "logical" whenever they have a 'logicalish vibe,' i.e., whenever they involve especially rigorous abstract reasoning.

Eliezer's standard use of 'logical' takes the 'abstract' part of logicalish vibes and runs with them; he adopts the convention that sufficiently careful purely abstract reasoning (i.e., reasoning without reasoning about any particular spatiotemporal thing or pattern) is 'logical,' whereas reasoning about concrete things-in-the-world is 'physical.' Of course, in practice our reasoning is usually a mix of logical and physical; but Eliezer's convention gives us a heuristic for determining whether some x that we appeal to in reasoning is logical (i.e., abstract, nonspatial) or physical (i.e., concrete, spatially located). We can easily see that if the word 'fairness' denotes anything (i.e., it's not like 'unicorn' or 'square circle'), it must be denoting a logical/abstract sort of thingie, since fairness isn't somewhere. (Fairness, unlike houses and candy, does not decompose into quarks and electrons.)

By the same reasoning, it becomes clear that things like 'difficulty' and 'the average South African male' and 'the set of prime numbers' and 'the legal system of Switzerland' are not physical objects; there isn't any place where difficulty literally is, as though it were a Large Object hiding someplace just out of view. It's an abstraction (or, in EY's idiom, a 'logical' construct) our brains posit as a tool for thinking, in the same fundamental way that we posit numbers, sets, axioms, and possible worlds. The posits of literary theory are frequently 'logical' (i.e., abstract) in Eliezer's sense, when they have semantic candidates we can stipulate as having adequately precise characteristics. Eliezer's happy to be big-tent here, because he's doing domain-general epistemology and (meta)physics, not trying to lay out the precise distinctions between different fields in academia. And doing so highlights the important point that reasoning about what's moral is not categorically unlike reasoning about what's difficult, or what's a planet, or what's desirable, or what's common, or what's illegal; our natural-language lay-usage may underdetermine the answers to those questions, but there are much more rigorous formulations in the same semantic neighborhood that we can put to very good use.

So if we mostly just mean 'abstract' and 'concrete,' why talk about 'logical' and 'physical' at all? Well, I think EY is trying to constrain what sorts of abstract and concrete posits we take seriously. Various concepts of God, for instance, qualify as 'abstract' in the sense that they are not spatial; and psychic rays qualify as 'concrete' in the sense that they occur in specific places; but based on a variety of principles (e.g., 'abstract things have no causal power in their own right' and 'concrete things do not travel faster than light' or 'concrete things are not irreducibly "mental"'), he seeks to tersely rule out the less realistic spatial and non-spatial posits some people make, so that the epistemic grown-ups can have a more serious discussion amongst themselves.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-12-28T00:50:07.950Z · LW(p) · GW(p)

he adopts the convention that sufficiently careful purely abstract reasoning (i.e., reasoning without reasoning about any particular spatiotemporal thing or pattern) is 'logical,'

If this is the case, then I think he has failed to show that morality is logic, unless he's using an extremely lax standard of "sufficiently careful". For example, I think that "sufficiently careful" reasoning must at a minimum be using a method of reasoning that is not sensitive to the order in which one encounters arguments, and is not sensitive to the mood one is in when considering those arguments. Do you think Eliezer has shown this? Or alternatively, what standard of "sufficiently careful" do you think Eliezer is using when he says "morality is logic"?

Replies from: RobbBB, RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-28T04:43:54.593Z · LW(p) · GW(p)

I'd split up Eliezer's view into several distinct claims:

  1. A semantic thesis: Logically regimented versions of fairness, harm, obligation, etc. are reasonable semantic candidates for moral terms. They may not be what everyone actually means by 'fair' and 'virtuous' and so on, but they're modest improvements in the same way that a rigorous genome-based definition of Canis lupus familiaris would be a reasonable improvement upon our casual, everyday concept of 'dog,' or that a clear set of thermodynamic thresholds would be a reasonable regimentation of our everyday concept 'hot.'

  2. A metaphysical thesis: These regimentations of moral terms do not commit us to implausible magical objects like Divine Commands or Irreducible 'Oughtness' Properties In Our Fundamental Physics. All they commit us to are the ordinary objects of physics, logic, and mathematics, e.g., sets, functions, and causal relationships; and sets, functions, and causality are not metaphysically objectionable.

  3. A normative thesis: It is useful to adopt moralityspeak ourselves, provided we do so using a usefully regimented semantics. The reasons to refuse to talk in a moral idiom are, in part thanks to 1 and 2, not strong enough to outweigh the rhetorical and self-motivational advantages of adopting such an idiom.

It seems clear to me that you disagree with thesis 1; but if you granted 1 (e.g., granted that 'a function that takes inequitable distributions of resources between equally deserving agents into equitable distributions thereof' is not a crazy candidate meaning for the English word 'fairness'), would you still disagree with 2 and 3? And do you think that morality is unusual in failing 1-style regimentation, or do you think that we'll eventually need to ditch nearly all English-language terms if we are to attain rigor?

Replies from: Eliezer_Yudkowsky, Wei_Dai, TAG, lukeprog
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-29T00:53:24.198Z · LW(p) · GW(p)

I like this splitup!

(From the great-grandparent.)

Eliezer's standard use of 'logical' takes the 'abstract' part of logicalish vibes and runs with them; he adopts the convention that sufficiently careful purely abstract reasoning (i.e., reasoning without reasoning about any particular spatiotemporal thing or pattern) is 'logical,' whereas reasoning about concrete things-in-the-world is 'physical.'

I think I want to make a slightly stronger claim than this; i.e. that by logical discourse we're thinning down a universe of possible models using axioms.

One thing I didn't go into, in this epistemology sequence, is the notion of 'effectiveness' or 'formality', which is important but I didn't go into as much because my take on it feels much more standard - I'm not sure I have anything more to say about what constitutes an 'effective' formula or axiom or computation or physical description than other workers in the field. This carries a lot of the load in practice in reductionism; e.g., the problem with irreducible fear is that you have to appeal to your own brain's native fear mechanisms to carry out predictions about it, and you can never write down what it looks like. But after we're done being effective, there's still the question of whether we're navigating to a part of the physical universe, or narrowing down mathematical models, and by 'logical' I mean to refer to the latter sort of thing rather than the former. The load of talking about sufficiently careful reasoning is mostly carried by 'effective' as distinguished from empathy-based predictions, appeals to implicit knowledge, and so on.

I also don't claim to have given morality an effective description - my actual moral arguments generally consist in appealing to implicit and hopefully shared reasons-for-action, not derivations from axioms - but the metaphysical and normative claim is that these reasons-for-action both have an effective description (descriptively speaking) and that any idealized or normative version of them would still have an effective description (normatively speaking).

Replies from: Wei_Dai, Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-12-29T21:09:39.102Z · LW(p) · GW(p)

Let me try a different tack in my questioning, as I suspect maybe your claim is along a different axis than the one I described in the sibling comment. So far you've introduced a bunch of "moving parts" for your metaethical theory:

  • moral arguments
  • implicit reasons-for-action
  • effective descriptions of reasons-for-action
  • utility function

But I don't understand how these are supposed to fit together, in an algorithmic sense. In decision theory we also have missing modules or black boxes, but at least we specify their types and how they interact with the other components, so we can have some confidence that everything might work once we fill in the blanks. Here, what are the types of each of your proposed metaethical objects? What's the "controlling algorithm" that takes moral arguments and implicit reasons-for-action, and produces effective descriptions of reasons-for-action, and eventually the final utility function?
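(Purely to illustrate the kind of type-level specification being asked for here, a toy Haskell skeleton might look like the following. Every name in it is hypothetical, invented for this illustration, and not anything Eliezer has actually proposed.)

```haskell
-- Hypothetical type skeleton for the "moving parts" listed above.
newtype MoralArgument   = MoralArgument String   -- an argument one encounters or thinks up
newtype ReasonForAction = ReasonForAction String -- an implicit reason-for-action
newtype World           = World String           -- a description of an outcome

-- An "effective description" of a reason-for-action: something explicit enough
-- to be evaluated mechanically against an outcome.
type EffectiveDescription = World -> Double

type UtilityFunction = World -> Double

-- The missing "controlling algorithm": how moral arguments act on implicit
-- reasons-for-action to yield effective descriptions is left unspecified.
idealize :: [MoralArgument] -> [ReasonForAction] -> [EffectiveDescription]
idealize _arguments _reasons = []  -- open question

-- How the resulting effective descriptions get aggregated into a final
-- utility function is equally unspecified; naive summation is a placeholder.
aggregate :: [EffectiveDescription] -> UtilityFunction
aggregate descriptions world = sum [d world | d <- descriptions]
```

Even a sketch this vague makes clear what is being asked for: the types of the black boxes and the signatures by which they connect.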

As you argued in Unnatural Categories (which I keep citing recently), reasons-for-action can't be reduced the same way as natural categories. But it seems completely opaque to me how they are supposed to be reduced, besides that moral arguments are involved.

Am I asking for too much? Perhaps you are just saying that these must be the relevant parts, and let's figure out both how they are supposed to work internally, and how they are supposed to fit together?

comment by Wei Dai (Wei_Dai) · 2012-12-29T04:40:58.572Z · LW(p) · GW(p)

my actual moral arguments generally consist in appealing to implicit and hopefully shared reasons-for-action, not derivations from axioms

So would it be fair to say that your actual moral arguments do not consist of sufficiently careful reasoning?

these reasons-for-action both have an effective description (descriptively speaking)

Is there a difference between this claim and the claim that our actual cognition about morality can be described as an algorithm? Or are you saying that these reasons-for-action constitute (currently unknown) axioms which together form a consistent logical system?

Can you see why I might be confused? The former interpretation is too weak to distinguish morality from anything else, while the latter seems too strong given our current state of knowledge. But what else might you be saying?

any idealized or normative version of them would still have an effective description (normatively speaking).

Similar question here. Are you saying anything beyond the claim that any idealized or normative way of thinking about morality is still an algorithm?

comment by Wei Dai (Wei_Dai) · 2012-12-28T20:06:27.499Z · LW(p) · GW(p)

but if you granted 1 (e.g., granted that 'a function that takes inequitable distributions of resources between equally deserving agents into equitable distributions thereof' is not a crazy candidate meaning for the English word 'fairness'), would you still disagree with 2 and 3?

If I grant 1, I currently can't think of any objections to 2 and 3 (which doesn't mean that I won't if I took 1 more seriously and therefore had more incentive to look for such objections).

And do you think that morality is unusual in failing 1-style regimentation, or do you think that we'll eventually need to ditch nearly all English-language terms if we are to attain rigor?

I think at a minimum, it's unusually difficult to do 1-style regimentation for morality (and Eliezer himself explained why in Unnatural Categories). I guess one point I'm trying to make is that whatever kind of reasoning we're using to attempt this kind of regimentation is not the same kind of reasoning that we use to think about some logical object after we have regimented it. Does that make sense?

comment by TAG · 2023-06-25T12:05:27.091Z · LW(p) · GW(p)

A metaphysical thesis: These regimentations of moral terms do not commit us to implausible magical objects like Divine Commands or Irreducible ‘Oughtness’ Properties

If oughtness, normativity, isn't irreducible, it's either reducible or nonexistent. If it's nonexistent, how can you have morality at all? If it's reducible, where's the reduction?

comment by lukeprog · 2013-01-09T06:53:32.836Z · LW(p) · GW(p)

RobbBB probably knows this, but I'd just like to mention that the three claims listed above, at least as stated there, are common to many metaethical approaches, not just Eliezer's. Desirism is one example. Other examples include the moral reductionisms of Richard Brandt, Peter Railton, and Frank Jackson.

comment by Rob Bensinger (RobbBB) · 2012-12-28T01:23:40.301Z · LW(p) · GW(p)

By "morality" you seem to mean something like 'the set of judgments about mass wellbeing ordinary untrained humans arrive at when prompted.' This is about like denying the possibility of arithmetic because people systematically make errors in mathematical reasoning. When the Pythagoreans reasoned about numbers, they were not being 'sufficiently careful;' they did not rigorously define what it took for something to be a number or to have a solution, or stipulate exactly what operations are possible; and they did not have a clear notion of the abstract/concrete distinction, or of which of these two domains 'number' should belong to. Quite plausibly, Pythagoreans would arrive at different solutions in some cases based on their state of mind or the problems' framing; and certainly Pythagoreans ran into disagreements they could not resolve and fell into warring camps as a result, e.g., over whether there are irrational numbers.

But the unreasonableness of the disputants, no matter how extreme, cannot infect the subject matter and make that subject matter intrinsically impossible to carefully reason with. No matter how extreme we make the Pythagoreans' eccentricities, as long as they continue to do something math-ish, it would remain possible for a Euclid or Yudkowsky to arise from the sea-foam and propose a regimentation of their intuitions, a more carefully formalized version of their concepts of 'number,' 'ratio,' 'proof,' etc.

I take it that Eliezer thinks we are very much in the position today of inhabiting a global, heavily schismatized network of Pythagorean Cults of Morality. Those cults are irrational, and their favored concepts would need to be made more precise and careful before the questions they ask could be assigned determinate answers (even in principle). But the subject matter those cults are talking about -- how to cultivate human well-being, how to distribute resources equitably, how to balance preferences in a way most people would prefer, etc. -- is not intrinsically irrational or mystical or ineffable. The categories in question are tracking real property clusters, though perhaps not yet with complete applicability-to-any-old-case; no matter how much of a moral anti-realist you are, for instance, you can't reasonably hold that 'fairness' doesn't have its own set of satisfaction conditions that fail to coincide with other moral (or physical, mathematical, etc.) concepts.

Another way of motivating the idea that morality is 'logical': Decision theory is 'logical', and morality is a special sort of decision theory. If we can carefully regiment the satisfaction conditions for an individual's preferences, then we can regiment the satisfaction conditions for the preferences of people generally; and we can isolate the preferences that people consider moral vs. amoral; and if we can do all that, what skeptical challenge could block an algorithm that recognizably maps what we call 'fair' and 'unfair' and 'moral' and 'immoral,' that couldn't equally well block an algorithm that recognizably maps what we call 'preferred' and 'distasteful' and 'delicious'...? How carelessly do people have to reason with x such that we can conclude that it's impossible to reason carefully with x?

Replies from: Wei_Dai, TimS
comment by Wei Dai (Wei_Dai) · 2012-12-28T20:33:13.386Z · LW(p) · GW(p)

But the unreasonableness of the disputants, no matter how extreme, cannot infect the subject matter and make that subject matter intrinsically impossible to carefully reason with.

I think I've been careful not to claim that morality is impossible to carefully reason with, but just that we don't know how to carefully reason with it yet and given our current state of knowledge, it may turn out to be impossible to carefully reason with.

Another way of motivating the idea that morality is 'logical': Decision theory is 'logical', and morality is a special sort of decision theory.

With decision theory, we're also in a "non-logical" state of reasoning, where we don't yet have a logical definition of what constitutes correct decision theory and therefore can't just apply logical reasoning. What's helpful in the case of decision theory is that it seems reasonable to assume that when we do come up with such a logical definition, it will be relatively simple. This helps tremendously in guiding our search, and partly compensates for the fact that we do not know how to reason carefully during this search. But with "morality", we don't have this crutch since we think it may well be the case that "value is complex".

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-28T23:53:07.626Z · LW(p) · GW(p)

we don't know how to carefully reason with it yet and given our current state of knowledge, it may turn out to be impossible to carefully reason with.

I agree that it's going to take a lot of work to fully clarify our concepts. I might be able to assign a less remote probability to 'morality turns out to be impossible to carefully reason with' if you could give an example of a similarly complex human discourse that turned out in the past to be 'impossible to carefully reason with'.

High-quality theology is an example of the opposite; we turned out to be able to reason very carefully (though admittedly most theology is subpar) with slightly regimented versions of concepts in natural religion. At least, there are some cases where the regimentation was not completely perverse, though the crazier examples may be more salient in our memories. But the biggest problem with theology was metaphysical, not semantic; there just weren't any things in the neighborhood of our categories for us to refer to. If you have no metaphysical objections to Eliezer's treatment of morality beyond your semantic objections, then you don't think a regimented morality would be problematic for the reasons a regimented theology would be. So what's a better example of a regimentation that would fail because we just can't be careful about the topic in question? What symptoms and causes would be diagnostic of such cases?

What's helpful in the case of decision theory is that it seems reasonable to assume that when we do come up with such a logical definition, it will be relatively simple.

By comparison, perhaps. But it depends a whole lot on what we mean by 'morality'. For instance, do we mean one of the following?

  • Morality is the hypothetical decision procedure that, if followed, tends to maximize the amount of positively valenced experience in the universe relative to negatively valenced experience, to a greater extent than any other decision procedure.

  • Morality is the hypothetical decision procedure that, if followed, tends to maximize the occurrence of states of affairs that agents prefer relative to states they do not prefer (taking into account that agents generally prefer not to have their preferences radically altered).

  • Morality is any decision procedure that anyone wants people in general to follow.

  • Morality is the human tendency to construct and prescribe rules they want people in general to follow.

  • Morality is anything that English-language speakers call "morality" with a certain high frequency.

If "value is complex," that's a problem for prudential decision theories based on individual preferences, just as much as it is for agent-general moral decision theories. But I think we agree both there's a long way to go in regimenting decision theory, and that there's some initial plausibility and utility in trying to regiment a moralizing class of decision theories; whether we call this regimenting procedure 'logicizing' is just a terminological issue.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2013-01-22T02:09:37.431Z · LW(p) · GW(p)

But it depends a whole lot on what we mean by 'morality'.

What I mean by "morality" is the part of normativity ("what you really ought, all things considered, to do") that has to do with values (as opposed to rationality).

I agree that it's going to take a lot of work to fully clarify our concepts. I might be able to assign a less remote probability to 'morality turns out to be impossible to carefully reason with' if you could give an example of a similarly complex human discourse that turned out in the past to be 'impossible to carefully reason with'.

In general, I'm not sure how to show a negative like "it's impossible to reason carefully about subject X", so the best I can do is exhibit some subject that people don't know how to reason carefully about and intuitively seems like it may be impossible to reason carefully about. Take the question, "Which sets really exist?" (Do large cardinals exist, for example?) Is this a convincing example to you of another subject that may be impossible to reason carefully about?

comment by TimS · 2012-12-28T01:35:34.678Z · LW(p) · GW(p)

I take it that Eliezer thinks we are very much in the position today of inhabiting a global, heavily schismatized network of Pythagorean Cults of Morality. Those cults are irrational, and their favored concepts would need to be made more precise and careful before the questions they ask could be assigned determinate answers (even in principle). But the subject matter those cults are talking about -- how to cultivate human well-being, how to distribute resources equitably, -- is not intrinsically irrational or mystical or ineffable. The categories in question are tracking real property clusters, though perhaps not yet with complete applicability-to-any-old-case; no matter how much of a moral anti-realist you are, for instance, you can't reasonably hold that 'fairness' doesn't have its own set of satisfaction conditions that fail to coincide with other moral (or physical, mathematical, etc.) concepts.

Haven't we been in this position since before mathematics was a thing? The lack of progress towards consensus in that period of time seems disheartening.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-28T02:07:10.633Z · LW(p) · GW(p)

The natural number line is one of the simplest structures a human being is capable of conceiving. The idea of a human preference is one of the most complex structures a human being has yet encountered. And we have a lot more emotional investment and evolutionary baggage interfering with carefully axiomatizing our preferences than with carefully axiomatizing the numbers. Why should we be surprised that we've made more progress with regimenting number theory than with regimenting morality or decision theory in the last few thousand years?

Replies from: TimS
comment by TimS · 2012-12-28T04:17:54.368Z · LW(p) · GW(p)

In terms of moral theory, we appear to have made no progress at all. We don't even agree on definitions.

Mathematics may or may not be an empirical discipline, but if you get your math wrong badly enough, you lose the ability to pay rent.

If morality paid rent in anticipated experience, I'd expect societies that had more correct morality to do better and societies with less correct morality to do worse. Morality is so important that I expect marginal differences to have major impact. And I just don't see the evidence that such an impact is or ever did happen.

So, have I misread history? Or have I made a mistake in predicting that chance differences in morality should have major impacts on the prosperity of a society? (Or some other error?)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-28T04:39:26.687Z · LW(p) · GW(p)

In terms of moral theory, we appear to have made no progress at all. We don't even agree on definitions.

But defining terms is the trivial part of any theory; if you concede that we haven't even gotten that far (and that term-defining is trivial), then you'll have a much harder time arguing that if we did agree on definitions we'd still have made no progress. You can't argue that, because if we all have differing term definitions, then that on its own predicts radical disagreement about almost anything; there is no need to posit a further explanation.

If morality paid rent in anticipated experience

Morality pays rent in anticipated experience in the same three basic ways that mathematics does:

  1. Knowing about morality helps us predict the behavior of moralists, just as knowing about mathematics helps us predict the behavior of mathematicians (including their creations). If you know that people think murder is bad, you can help predict why murder is so rare; just as knowing mathematicians' beliefs about natural numbers helps us predict what funny squiggly lines will occur on calculators. This, of course, doesn't require any commitment to moral realism, just as it doesn't require a commitment to mathematical realism.

  2. Inasmuch as the structure of moral reasoning mirrors the structure of physical systems, we can predict how physical systems will change based on what our moral axioms output. For instance, if our moral axioms are carefully tuned to parallel the distribution of suffering in the world, we can use them to predict what sorts of brain-states will be physically instantiated if we perform certain behaviors. Similarly, if our number axioms are carefully tuned to parallel the changes in physical objects (and heaps thereof) in the world, we can use them to predict how physical objects will change when we translate them in spacetime.

  3. Inasmuch as our intuitions give rise to our convictions about mathematics and morality, we can use the aforementioned convictions to predict our own future intuitions. In particular, an especially regimented mathematics or morality, that arises from highly intuitive axioms we accept, will often allow us to algorithmically generate what we would reflectively find most intuitive before we can even process the information sufficiently to generate the intuition. A calculator gives us the most intuitive and reflectively stable value for 142857 times 7 before we've gone to the trouble of understanding why or that this is the most intuitive value; similarly, a sufficiently advanced utility-calculator, programmed with the rules you find most reflectively intuitive, would generate the ultimately intuitive answers for moral dilemmas before you'd even gone to the trouble of figuring out on your own what you find most intuitive. And your future intuitions are future experiences; so the propositions of mathematics and morality, interestingly enough, serve as predictors for your own future mental states, at least when those mental states are sufficiently careful and thought out.

But all of these are to some extent indirect. It's not as though we directly observe that SSSSSSS0 is prime, any more than we directly observe that murder is bad. We either take it as a given, or derive it from something else we take as a given; but regardless, there can be plenty of indirect ways that the 'logical' discourse in question helps us better navigate, manipulate, and predict our environments.

If morality paid rent in anticipated experience, I'd expect societies that had more correct morality to do better and societies with less correct morality to do worse.

There's a problem here: What are we using to evaluate 'doing better' vs. 'doing worse'? We often use moral superiority itself as an important measure of 'betterness;' we think it's morally right or optimal to maximize human well-being, so we judge societies that do a good job of this as 'better.' At the very least, moral considerations like this seem to be a part of what we mean by 'better.' If you're trying to bracket that kind of success, then it's not clear to me what you even mean by 'better' or 'prosperity' here. Are you asking whether moral fortitude correlates with GDP?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-29T01:03:44.984Z · LW(p) · GW(p)

(Some common senses of "moral fortitude" definitely cause GDP, at minimum in the form of trust between businesspeople and less predatory bureaucrats. But this part is equally true of Babyeaters.)

comment by HalMorris · 2012-12-27T04:13:36.802Z · LW(p) · GW(p)

There's a pseudo-theorem in math that is sometimes given to 1st year graduate students (at least in my case, 35 years ago), which is that

All natural numbers are interesting.

Natural numbers consist of {1, 2, 3, ...} -- actually a recent hot topic of conversation on LW ("natural numbers" is sometimes defined to include 0, but everything that follows will work either way).

The "proof" used the principle of mathematical induction (one version of which is):

If P(n) is true for n=1, and the assertion "m is the smallest integer such that !P(m)" leads to a contradiction, then P(n) is true for all natural numbers.

and also uses the well-ordering principle (which holds for the natural numbers as constructed from the Peano axioms): every non-empty subset of natural numbers has a smallest element.

PROOF:

1 is interesting.

Suppose the theorem is false. Then some number m is the smallest uninteresting number. But then wouldn't that be interesting?

Contradiction. QED.
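Written out schematically (treating "interesting" as an unanalyzed predicate P on the naturals, which is exactly the contested move), the argument runs:

$$
\begin{aligned}
&\text{Assume } \neg\,\forall n\, P(n), \text{ so } S = \{\, n \in \mathbb{N} : \neg P(n) \,\} \neq \varnothing.\\
&\text{By well-ordering, } S \text{ has a least element } m.\\
&\text{Hidden premise: } m = \min S \;\Rightarrow\; P(m).\\
&\text{But } m \in S \text{ means } \neg P(m) \text{; contradiction, hence } \forall n\, P(n).
\end{aligned}
$$

Every step except the "hidden premise" is standard; all the informality lives in that one line.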

This illustrates a pitfall of mixing (qualities that don't really belong in a mathematical statement) with (rigorous logic), and in general, if you take a quality that is not rigorously defined, and apply a sufficiently long train of logic to it, you are liable to "prove" nonsense.

(Note: the logic just applied is equivalent to P(1) => P(2) => P(3), ...which is infinite and hence long enough.)

It is my impression that certain contested (though "proven") assertions about economics suffer from this problem, and it's hard, for me at least, to think of a moral proposition that wouldn't risk this sort of pitfall.

Replies from: magfrump, gwern
comment by magfrump · 2012-12-27T19:04:16.592Z · LW(p) · GW(p)

Okay but if I honestly believe that all natural numbers are interesting and thought of this proof as pretty validly matching my intuitions, what does that mean?

Replies from: HalMorris
comment by HalMorris · 2012-12-27T23:17:17.505Z · LW(p) · GW(p)

Unless you turn "interesting" into something rigorously defined and precisely communicated to others, what it means is that all natural numbers are {some quality that is not rigorously defined and can't be precisely communicated to others}.

Replies from: magfrump
comment by magfrump · 2012-12-28T05:56:33.826Z · LW(p) · GW(p)

I guess I feel that even if I haven't defined "interesting" rigorously, I still have some intuitions for what "interesting" means, large parts of which will be shared by my intended audience.

For example, I could make the empirical prediction that if someone names a number I could talk about it for a bit and then they would agree it was interesting (I mean this as a toy example; I'm not sure I could do this.)

One could then take approximations of these conversations, or even the existence of these conversations, and define interesting* to be "I can say a unique few sentences about historic results surrounding this number and related mathematical factoids." Which then might be a strong empirical predictor of people claiming something is interesting.

So I feel like there's something beyond a useless logical fact being expressed by my intuitions here.

comment by gwern · 2012-12-27T19:25:54.437Z · LW(p) · GW(p)

http://en.wikipedia.org/wiki/Interesting_number_paradox and http://en.wikipedia.org/wiki/Berry_paradox

Replies from: HalMorris
comment by HalMorris · 2012-12-27T23:40:05.982Z · LW(p) · GW(p)

I can't tell what this is. The first link might imply that Gwern thinks I misstated the Interesting Number Paradox (I looked at the Wikipedia article before I wrote my post, but went with my memory, and there are multiple equivalent ways of saying it, but if you think I got it wrong...?). Or maybe it was offered as a handy reference.

The Berry Paradox sounds like a very different kettle of fish ... with more real complexity.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-01-06T12:34:46.754Z · LW(p) · GW(p)

Or maybe it was offered as a handy reference.

I would bet on this one.

More meta: Perhaps your priors for "if someone replies to my comment, they disagree with me" are too high. ;-) Maybe not for internet in general, but LW is not an average internet site.

comment by MugaSofer · 2012-12-27T03:46:02.695Z · LW(p) · GW(p)

For all we know, somebody trying to reason about a moral concept like "fairness" may just be taking a random walk as they move from one conclusion to another based on moral arguments they encounter or think up.

Well. Not a purely random walk. A weighted one.

Isn't this true of all beliefs? And isn't rationality just increasing the weight in the right direction?

comment by Vladimir_Nesov · 2012-12-27T02:00:29.593Z · LW(p) · GW(p)

The word "morality" needs to be made more specific for this discussion. One of the things you seem to be talking about is mental behavior that produces value judgments or their justifications. It's something human brains do, and we can in principle systematically study this human activity in detail, or abstractly describe humans as brain activity algorithms and study those algorithms. This characterization doesn't seem particularly interesting, as you might also describe mathematicians in this way, but this won't be anywhere close to an efficient route to learning about mathematics or describing what mathematics is.

"Logic" and "mathematics" are also somewhat vague in this context. In one sense, "mathematics" may refer to anything, as a way of considering things, which makes the characterization empty of content. In another sense, it's the study of the kinds of objects that mathematicians typically study, but in this sense it probably won't refer to things like activity of human brains or particular physical universes. "Logic" is more specific, it's a particular way of representing and processing mathematical ideas. It allows describing the things you are talking about and obtaining new information about them that wasn't explicit in the original description.

Morality in the FAI-relevant sense is a specification of what to do with the world, and as such it isn't concerned with human cognition. The question of the nature of morality in this sense is a question about ways of specifying what to do with the world. Such specification would need to be able to do at least these two things: (1) it needs to be given with much less explicit detail than what can be extracted from it when decisions about novel situations need to be made, which suggests that the study of logic might be relevant, and (2) it needs to be related to the world, which suggests that the study of physics might be relevant.

This question about the nature of morality is separate from the question of how to pinpoint the right specification of morality to use in a FAI, out of all possible specifications. The difficulty of finding the right morality seems mostly unrelated to describing what kind of thing morality is. If I put a note with a number written on it in a box, it might be perfectly accurate to say that the box contains a number, even though it might be impossible to say what that number is, precisely, and even if people aren't able to construct any interesting models of the unknown number.

comment by crap · 2012-12-27T09:15:37.895Z · LW(p) · GW(p)

People do all sorts of sloppy reasoning; everyday logic also arrives at both A and ~A; any sort of fuzziness leads to that. To actually be moral, it is necessary that you can't arrive at both A and ~A at will - otherwise your morality provides no constraint.

Replies from: DanArmak
comment by DanArmak · 2012-12-27T14:16:09.236Z · LW(p) · GW(p)

Different people can disagree about pretty much any moral question. Any one person's morality may be stable enough not to arrive at A and also ~A, but since the result is still dependent most of all on that person's upbringing and culturally endorsed beliefs, morality is not very useful as logic. (Of course it is useful as morality: our brains are built that way.)

Replies from: crap
comment by crap · 2012-12-27T14:39:52.132Z · LW(p) · GW(p)

Difference in values is a little overstated, I think. Practically, there's little difference between what people say they'd do in Milgram experiment, but a huge difference between what they actually do.

Replies from: DanArmak
comment by DanArmak · 2012-12-28T12:18:36.174Z · LW(p) · GW(p)

I'm not sure how to parse your grammar.

Are you saying that different people all say they will do the same ('good') thing on Milgram, but in practice different people do different things on Milgram (some 'good' some 'bad')?

Or are you saying that there is a large difference between what people say they would do on Milgram, and between what they actually do?

(Because replications of Milgram are prohibited by modern ethics boards, the data is weaker than I'd like it to be.)

You also say that I overstate the difference in values between people. But Milgram ran his experiment just once on very homogeneous people: all from the same culture. If he'd compared it to widely differing cultures, I expect at least some of the time the compliance rates would differ significantly.

comment by timtyler · 2012-12-29T12:25:34.577Z · LW(p) · GW(p)

In a recent post, Eliezer said "morality is logic"

The actual quote is:

morality is (and should be) logic, not physics

comment by A1987dM (army1987) · 2012-12-27T12:52:28.249Z · LW(p) · GW(p)

Eliezer said "morality is logic", by which he seems to mean... well, I'm still not exactly sure what, but one interpretation is that a person's cognition about morality can be described as an algorithm, and that algorithm can be studied using logical reasoning. (Which of course is true, but in that sense both math and literary criticism as well as every other subject of human study would be logic.)

Thank you -- I knew I ADBOCed with Eliezer's meta-ethics, but I had trouble putting down in words the reason.

comment by Shmi (shminux) · 2012-12-27T18:12:36.855Z · LW(p) · GW(p)

You are not using the same definition of logic EY does. For him logic is everything that is not physics in his physics+logic (or territory+maps, in the previously popular terms) picture of the world. Mathematical logic is a tiny sliver of what he calls "logic". For comparison, in an instrumentalist description there are experiences+models, and EY's logic is roughly equivalent to "models" (maps, in the map-territory dualism), of which mathematics is but one.

comment by [deleted] · 2012-12-27T00:08:20.917Z · LW(p) · GW(p)

With morality though, we have no such method,

Every act of lying is morally prohibited / This act would be a lie // This act is morally prohibited.

So here I have a bit of moral reasoning, the conclusion of which follows from the premises. The argument is valid, so if the premises are true, the conclusion can be considered proven. So given that I can give you valid proofs for moral conclusions, in what way is morality not logical?
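The premises-and-conclusion form is easy to make fully explicit. As a minimal first-order rendering (reading L(x) as "x is a lie" and W(x) as "x is morally prohibited", both taken as primitive predicates):

$$\forall x\,(L(x) \to W(x)),\quad L(a) \;\vdash\; W(a)$$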

doesn't have any of the nice properties that a well-constructed system of logic would have, for example, consistency, validity, soundness...

The above example of moral reasoning (assume for the sake of simplicity that this is my entire moral system) is consistent, and valid, and (if you accept the premises) sound. Anyone who accepts the premises must accept the conclusion. One might waver on acceptance of the premises (this is true for every subject) but the conclusion follows from them regardless of what one's mood is.

All that said, our moral reasoning is often fraught. But I don't think that makes morality peculiar. The mistakes we often make with regard to moral reasoning don't seem to be different in kind from the mistakes we make in, say, economics. Ethics, they say, is not an exact science.

Replies from: Wei_Dai, ddxxdd, army1987
comment by Wei Dai (Wei_Dai) · 2012-12-27T01:23:15.008Z · LW(p) · GW(p)

I should have given some examples of the kind of moral reasoning I'm referring to.

Replies from: crap, None
comment by crap · 2012-12-27T09:48:32.325Z · LW(p) · GW(p)

1st link is ambiguity aversion.

Morality is commonly taken to describe what one will actually do when trading off private gains against other people's losses. See this as an example of moral judgement. Suppose Roberts is smarter. He will quickly see that he can donate 10% to charity, and it'll take longer for him to reason about the value of the cash that was not given to him (reasoning that may stop him from pressing the button), so there will be a transient during which he pushes the button, unless he somehow suppresses actions during transients. It's an open-ended problem, 'unlike logic', because consequences are difficult to evaluate.

edit: been in a hurry.

comment by [deleted] · 2012-12-27T05:36:33.262Z · LW(p) · GW(p)

Ah, thank you, that is helpful.

In the case of 'circular altruism', I confess I'm quite at a loss. I've never really managed to pull an argument out of there. But if we're just talking about the practice of quantifying goods in moral judgements, then I agree with you there's no strongly complete ethical calculus that's going to render ethics a mathematical science. But at least in 'circular altruism' EY doesn't need quite so strong a view: so far as I can tell, he's just saying that our moral passions conflict with our reflective moral judgements. And even if we don't have a strongly complete moral system, we can make logically coherent reflective moral judgements. I'd go so far as to say we can make logically coherent reflective literary criticism judgements. Logic isn't picky.

So while, on the one hand, I'm also (as yet) unconvinced about EY's ethics, I think it goes too far in the opposite direction to say that ethical reasoning is inherently fuzzy or illogical. Valid arguments are valid arguments, regardless.

comment by ddxxdd · 2012-12-27T00:30:59.296Z · LW(p) · GW(p)

Every act of lying is morally prohibited / This act would be a lie // This act is morally prohibited.

So here I have a bit of moral reasoning, the conclusion of which follows from the premises.

The problem is that when the conclusion is "proven wrong" (i.e. "my gut tells me that it's better to lie to an Al Qaeda prison guard than to tell him the launch codes for America's nuclear weapons"), then the premises that you started with are wrong.

So if I'm understanding Wei_Dai's point, it's that the name of the game is to find a premise that cannot and will not be contradicted by other moral premises via a bizarre hypothetical situation.

I believe that Sam Harris has already mastered this thought experiment. Paraphrased from his debate with William Lane Craig:

"There exists a hypothetical universe in which there is the absolute most amount of suffering possible. Actions that move us away from that universe are considered good; actions that move us towards that universe are considered bad".

Replies from: palladias, None, jsalvatier, buybuydandavis
comment by palladias · 2012-12-27T01:46:57.026Z · LW(p) · GW(p)

I believe that Sam Harris has already mastered this thought experiment. Paraphrased from his debate with William Lane Craig:
"There exists a hypothetical universe in which there is the absolute most amount of suffering possible. Actions that move us away from that universe are considered good; actions that move us towards that universe are considered bad".

This is why I find Harris frustrating. He's stating something pretty much everyone agrees with, but they all make different substitutions for the variable "suffering." And then Harris is vague about what he personally plugs in.

Replies from: evand
comment by evand · 2012-12-30T22:55:42.243Z · LW(p) · GW(p)

At least as paraphrased here, the definition of "move towards" is very unclear. Is it a universe with more suffering? A universe with more suffering right now? A universe with more net present suffering, according to some discount rate? What if I move to a universe with more suffering both right now and for all possible future discount rates, assuming no further action, but for which future actions that greatly reduce suffering are made easier? (In other words, does this system get stuck in local optimums?)

I think there is much that this approach fails to solve, even if we all agree on how to measure suffering.

(Included in "how to measure suffering" is a bit of complicated stuff like average vs total utilitarianism, and how to handle existential risks, and how to do probability math on outcomes that produce a likelihood of suffering.)

comment by [deleted] · 2012-12-27T00:35:09.952Z · LW(p) · GW(p)

The problem is that when the conclusion is "proven wrong"...then the premises that you started with are wrong.

I hope so! It would be terribly awkward to find ourselves with true premises, valid reasoning, and a false conclusion. But unless by 'gut feeling' you mean a valid argument with true premises, then gut feelings can't prove anything wrong.

So if I'm understanding Wei_Dai's point, it's that the name of the game is to find a premise that cannot and will not be contradicted by other moral premises via a bizarre hypothetical situation.

Perhaps, though that wouldn't speak to whether or not morality is logical. If Wei Dai's point is that morality is, at best, axiomatic, then sure. But so is Peano arithmetic, and that's as logical as can be.

Replies from: ddxxdd
comment by ddxxdd · 2012-12-27T00:52:32.640Z · LW(p) · GW(p)

I just stumbled into this discussion after reading an article about why mathematicians and scientists dislike traditional, Socratic philosophy, and my mindset is fresh off that article.

It was a fantastic read, but the underlying theme that I feel is relevant to this discussion is this:

  • Socratic philosophy treats logical axioms as "self-evident truths" (i.e. I think, therefore I am).

  • Mathematics treats logical axioms as "propositions", and uses logic to see where those propositions lead (i.e. if you have a line and a point, the number of lines that you can draw through the point parallel to the original line determines what type of geometry you are working with (hyperbolic, spherical, or flat-plane geometry)).

  • Scientists treat logical axioms as "hypotheses", and logical "conclusions" as testable statements that can determine whether those axioms are true or not (i.e. if this weird system known as "quantum mechanics" were true, then we would see an interference pattern when shooting electrons through a screen with 2 slits).

So I guess the point that we should be making is this: which philosophical approach towards logic should we take to study ethics? I believe Wei_Dai would say that the first approach, treating ethical axioms as "self-evident truths" is problematic due to the fact that a lot of hypothetical situations (like my example before) can create a lot of contradictions between various ethical axioms (i.e. choosing between telling a lie and letting terrorists blow up the planet).

Replies from: None, JonathanLivengood, BerryPick6, whowhowho
comment by [deleted] · 2012-12-27T01:18:00.998Z · LW(p) · GW(p)

Socratic philosophy treats logical axioms as "self-evident truths" (i.e. I think, therefore I am).

I read the article. It's interesting (I liked the thing about pegs and strings), but I don't think the guy (or you, for that matter) has read a lot of actual Greek philosophy. I don't mean that as an attack (why would you want to, after all?), but it makes some of his, and your, claims a little strange.

Socrates, in the Platonic dialogues, is unwilling to take the law of non-contradiction as an axiom. There just aren't any axioms in Socratic philosophy, just discussions. No proofs, just conversations. Plato doesn't have doctrines (and Socrates certainly doesn't), and Plato is totally and intentionally merciless with people who try to find Platonic doctrines.

Also, Plato and Socrates predate, for most purposes, logic.

Mathematics treats logical axioms as "propositions", and uses logic to see where those propositions lead

Right, Aristotle largely invented (or discovered) that trick. Aristotle's logic is consistent and strongly complete (i.e. it's not axiomatic, and relies on no external logical concepts). Euclid picked up on it, and produced a complete and consistent mathematics. So (some) Greek philosophy certainly shares this idea with modern mathematics.

Scientists treat logical axioms as "hypotheses", and logical "conclusions" as testable statements that can determine whether those axioms are true or not

I don't think scientists treat logical axioms as hypotheses. Logical axioms aren't empirical claims, and aren't really subject to testing. But Aristotle's work on biology, meteorology, etc. forwards plenty of empirical hypotheses, along with empirical evidence for them. Textual evidence suggests Aristotle performed lots of experiments, mostly in the form of vivisection of animals. He was wrong about pretty much everything, but his method was empirical.

This is to say nothing of contemporary philosophy, which certainly doesn't take very much as 'self-evident truth'. I can assure you, no one gets anywhere with that phrase anymore, in any study.

I believe Wei_Dai would say that the first approach, treating ethical axioms as "self-evident truths" is problematic due to the fact that a lot of hypothetical situations (like my example before) can create a lot of contradictions between various ethical axioms (i.e. choosing between telling a lie and letting terrorists blow up the planet).

Not if those ethical axioms actually are self-evident truths. Then hypothetical situations (no matter how uncomfortable they make us) can't disrupt them. But we might, on the basis of these situations, conclude that we don't have any self-evident moral axioms. But, as you neatly argue, we don't have any self-evident mathematical axioms either.

Replies from: ddxxdd
comment by ddxxdd · 2012-12-27T02:07:48.254Z · LW(p) · GW(p)

Thanks for taking the time to read and respond to the article, and for the critique; you are correct in that I am not well-versed in Greek philosophy. With that being said, allow me to try to expand my framework to explain what I'm trying to get at:

  • Scientists, unlike mathematicians, don't always frame their arguments in terms of pure logic (i.e. If A and B, then C). However, I believe that the work that comes from them can be treated as logical statements.

Example: "I think that heat is transferred between two objects via some sort of matter that I will call 'phlogiston'. If my hypothesis is true, then an object will lose mass as it cools down." 10 days later: "I have weighed an object when it was hot, and I weighed it when it was cold. The object did not lose any mass. Therefore, my hypothesis is wrong".

In logical terms: Let's call the Theory of Phlogiston "A", and let's call the act of measuring a loss of mass with a loss of heat "C".

  1. If A, then C.

  2. Physical evidence is obtained: Not C.

  3. If Not C, then Not A; therefore, Not A.

Essentially, the scientific method involves the creation of a hypothesis "A", and a logical consequence of that hypothesis, "If A then C". Then physical evidence is presented in favor of, or against "C". If C is disproven, then A is disproven.

This is what I mean when I say that hypotheses are "axioms", and physical experiments are "conclusions".
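In symbols, the schema just described is modus tollens applied to a hypothesis A and its observable consequence C:

$$(A \to C),\ \neg C \;\vdash\; \neg A$$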

  • In response to this statement:

Socrates, in the Platonic dialogues, is unwilling to take the law of non-contradiction as an axiom. There just aren't any axioms in Socratic philosophy, just discussions. No proofs, just conversations. Plato doesn't have doctrines (and Socrates certainly doesn't), and Plato is totally and intentionally merciless with people who try to find Platonic doctrines.

"No proofs, just conversations". In the framework that I'm working in, every single statement is either a premise or a conclusion. In addition, every single statement is either a "truth" (that we are to believe immediately), a "proposition" (that we are to entertain the logical implications of), or part of a "hypothesis/implication" pair (that we are suppose to believe with a level of skepticism until an experiment verifies it or disproves it). I believe that every single statement that has ever been made in any field of study falls into one of those 3 categories, and I'm saying that we need to discuss which category we need to place statements that are in the field of ethics.

In the field of philosophy, from my limited knowledge, I think that these discussions lead to conclusions that we need to believe as "truth", whether or not they are supported by evidence (e.g. John Rawls's "Original Position").

Replies from: None, whowhowho, BerryPick6
comment by [deleted] · 2012-12-27T04:54:23.406Z · LW(p) · GW(p)

This is what I mean when I say that hypotheses are "axioms", and physical experiments are "conclusions".

I see. You're right that philosophers pretty much never do anything like that. Except experimental philosophers, but thus far most of that stuff is just terrible.

"In the framework that I'm working in..."

That's a good framework with which to approach any philosophical text, including and especially the Platonic dialogues. I just wanted to stress the fact that the dialogues aren't treatises presented in a funny way. You're supposed to argue with Socrates, against him, yell at his interlocutors, try to patch up the arguments with premises of your own. It's very different from, say, Aristotle or Kant or whatever, where it's one guy presenting a theory.

In the field of philosophy, from my limited knowledge, I think that these discussions lead to conclusions that we need to believe as "truth"

Would you mind if I go on for a bit? I have thoughts on this, but I don't quite know how to present them briefly. Anyway:

Students of physics should go into a physics classroom or book with an open mind. They should be ready to learn new things about the world, often surprising things (relative to their naive impressions), and should often try to check their prejudices at the door. None of us are born knowing physics. It's something we have to go out and learn.

Philosophy isn't like that. The right attitude walking into a philosophy classroom is irritation. It is an inherently annoying subject, and its practitioners are even worse. You can't learn philosophy, and you can't become an expert at it. You can't even become good at it. Being a philosopher is no accomplishment whatsoever. You can just do philosophy, and anyone can do it. Intelligence is good, but it can be a hindrance too, same with education.

Doing philosophy means asking questions about things to which you really ought to already know the answers, like the difference between right and wrong, whether or not you're in control of your actions, what change is, what existing is, etc. Philosophy is about asking questions to which we ought to have the answers, but don't.

We do philosophy by talking to each other. If that means running an experiment, good. If that means just arguing, fine. There's no method, no standards, and no body of knowledge, unless you say there is, and then convince someone, and then there is until someone convinces you otherwise.

Scientists and mathematicians don't hate philosophy. They tend to love philosophers, or at least the older ones do. Young scientists and mathematicians do hate philosophers, and with good reason: part of being a young scientist or mathematician is developing a refined mental self-discipline, and that means turning your back on any froo-froo hand wavy BS and getting down to work. Philosophy is the most hateful thing in the world when you're trying to be wrong as little as possible. But once that discipline is in place, and people are confident in their ability to sort out good arguments from bad ones, facts from speculation, philosophy starts to look like fun.

Replies from: BerryPick6
comment by BerryPick6 · 2012-12-27T12:58:44.091Z · LW(p) · GW(p)

The second part of your post is terrific. :)

comment by whowhowho · 2013-02-07T18:27:23.107Z · LW(p) · GW(p)

In the framework that I'm working in, every single statement is either a premise or a conclusion. In addition, every single statement is either a "truth" (that we are to believe immediately), a "proposition" (that we are to entertain the logical implications of), or part of a "hypothesis/implication" pair (that we are supposed to believe with a level of skepticism until an experiment verifies it or disproves it)

But there is a mini-premise, an inference, and a mini-conclusion inside every "hypothesis/implication pair".

comment by BerryPick6 · 2012-12-27T12:59:51.532Z · LW(p) · GW(p)

In the field of philosophy, from my limited knowledge, I think that these discussions lead to conclusions that we need to believe as "truth", whether or not they are supported by evidence (e.g. John Rawls's "Original Position").

I'm curious as to why you referenced Rawls's work in this context. It's not apparent to me how Justice as Fairness is relevant here.

Replies from: ddxxdd
comment by ddxxdd · 2012-12-28T01:16:00.405Z · LW(p) · GW(p)

I referenced him because I recall that he comes to a very strong conclusion: that a moral society should have agreed-upon laws based on the premise of the "original position". He was the first philosopher who came to mind when I was trying to think of examples of a hard statement that is neither a "proposition" to be explored, nor the conclusion drawn from an observable fact.

Replies from: BerryPick6
comment by BerryPick6 · 2012-12-28T14:01:01.358Z · LW(p) · GW(p)

I mean, I'm pretty sure his conclusion is a "proposition." It has premises, and I could construct it logically if you wanted.

In fact, I don't understand his position to be "that a moral society should have agreed-upon laws" at all, but rather his use of the original position is an attempt to isolate and discover the principles of distributive justice, and that's really his bottom line.

comment by JonathanLivengood · 2012-12-27T21:01:04.719Z · LW(p) · GW(p)

Interesting piece. I was a bit bemused by this, though:

In fact Plato wrote to Archimedes, scolding him about messing around with real levers and ropes when any gentleman would have stayed in his study or possibly, in Archimedes’ case, his bath.

Problematically for the story, Plato died around 347 BCE, and Archimedes wasn't born until 287 BCE -- sixty years later.

comment by BerryPick6 · 2013-02-07T19:08:40.609Z · LW(p) · GW(p)

Thank you for an awesome read. :)

comment by whowhowho · 2013-02-07T18:23:57.352Z · LW(p) · GW(p)

Science uses logical rules of inference. Does science take them as self-evident? Or does it test them? And can it test them without assuming them?

comment by jsalvatier · 2012-12-27T20:54:37.752Z · LW(p) · GW(p)

(whisper: Wei Lai should be Wei Dai)

comment by buybuydandavis · 2012-12-27T08:37:29.152Z · LW(p) · GW(p)

Nope. Even if one grants objective meaning to a unique interpersonal aggregate of suffering (and I don't), it's just wrong.

Sometimes you want people to suffer. For example, if one fellow caused all the suffering of the rest, moving him to less suffering than everyone else would be a move to a worse universe.

EDIT: I didn't mean "you" to indicate everyone. Sometimes I want people to suffer, and think that in my hypothetical, the majority of mankind would feel the same, and choose the same, if it were in their power.

Replies from: ddxxdd, Solvent
comment by ddxxdd · 2012-12-27T09:30:59.731Z · LW(p) · GW(p)

Sometimes you want people to suffer. For example, if one fellow caused all the suffering of the rest, moving him to less suffering than everyone else would be a move to a worse universe.

...because doing so would create an incentive not to cause suffering to others. In the long run, that would result in less universal suffering overall. Isn't this correct?

Replies from: buybuydandavis
comment by buybuydandavis · 2012-12-27T20:29:28.941Z · LW(p) · GW(p)

No, that's not my motivation at all. That's not my because. It's just vengeance on my part.

Even if one regarded the design of vengeance as an evolutionary adaptation, I don't think that vengeance minimizes suffering; it punishes infractions against values.

At that level, it's not about minimizing suffering either, it's about evolutionary fitness.

comment by Solvent · 2012-12-27T12:02:20.765Z · LW(p) · GW(p)

Yeah, I'm pretty sure I (and most LWers) don't agree with you on that one, at least in the way you phrased it.

Replies from: buybuydandavis
comment by buybuydandavis · 2012-12-27T20:09:48.371Z · LW(p) · GW(p)

You think they'd prefer that the guy that caused everyone else in the universe to suffer didn't suffer himself?

Replies from: Solvent, Oscar_Cunningham
comment by Solvent · 2012-12-27T22:38:43.957Z · LW(p) · GW(p)

Here's an old Eliezer quote on this:

4.5.2: Doesn't that screw up the whole concept of moral responsibility?

Honestly? Well, yeah. Moral responsibility doesn't exist as a physical object. Moral responsibility - the idea that choosing evil causes you to deserve pain - is fundamentally a human idea that we've all adopted for convenience's sake. (23).

The truth is, there is absolutely nothing you can do that will make you deserve pain. Saddam Hussein doesn't deserve so much as a stubbed toe. Pain is never a good thing, no matter who it happens to, even Adolf Hitler. Pain is bad; if it's ultimately meaningful, it's almost certainly as a negative goal. Nothing any human being can do will flip that sign from negative to positive.

So why do we throw people in jail? To discourage crime. Choosing evil doesn't make a person deserve anything wrong, but it makes ver targetable, so that if something bad has to happen to someone, it may as well happen to ver. Adolf Hitler, for example, is so targetable that we could shoot him on the off-chance that it would save someone a stubbed toe. There's never a point where we can morally take pleasure in someone else's pain. But human society doesn't require hatred to function - just law.

Besides which, my mind feels a lot cleaner now that I've totally renounced all hatred.

It's pretty hard to argue about this if our moral intuitions disagree. But at least, you should know that most people on LW disagree with you on this intuition.

EDIT: As ArisKatsaris points out, I don't actually have any source for the "most people on LW disagree with you" bit. I've always thought that not wanting harm to come to anyone as an instrumental value was a pretty obvious, standard part of utilitarianism, and 62% of LWers are consequentialist, according to the 2012 survey. The post "Policy Debates Should Not Appear One-Sided" is fairly highly regarded, and it espouses a related view, that people don't deserve harm for their stupidity.

Also, what those people would prefer isn't necessarily what our moral system should prefer: humans are petty and short-sighted.

Replies from: Eugine_Nier, ArisKatsaris, buybuydandavis
comment by Eugine_Nier · 2012-12-28T23:12:33.782Z · LW(p) · GW(p)

I've always thought that not wanting harm to come to anyone as an instrumental value was a pretty obvious, standard part of utilitarianism, and 62% of LWers are consequentialist, according to the 2012 survey.

What do you mean by "utilitarianism"? The word has two different common meanings around here: any type of consequentialism, and the specific type of consequentialism that uses "total happiness" as a utility function. This sentence appears to be designed to confuse the two meanings.

The post "Policy Debates Should Not Appear One Sided" is fairly highly regarded, and it esposes a related view, that people don't deserve harm for their stupidity.

That is most definitely not the main point of that post.

Replies from: Solvent
comment by Solvent · 2012-12-28T23:47:36.500Z · LW(p) · GW(p)

What do you mean by "utilitarianism"? The word has two different common meanings around here: any type of consequentialism, and the specific type of consequentialism that uses "total happiness" as a utility function. This sentence appears to be designed to confuse the two meanings.

Yeah, my mistake. I'd never run across any other versions of consequentialism apart from utilitarianism (except for Clippy, of course). I suppose caring only for yourself might count? But do you seriously think that the majority of those consequentialists aren't utilitarian?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-28T23:58:55.203Z · LW(p) · GW(p)

Well, even Eliezer's version of consequentialism isn't simple utilitarianism for starters.

Replies from: Solvent
comment by Solvent · 2012-12-29T00:02:18.261Z · LW(p) · GW(p)

It's a kind of utilitarianism. I'm including act utilitarianism and desire utilitarianism and preference utilitarianism and whatever in utilitarianism.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-29T21:43:41.481Z · LW(p) · GW(p)

Ok, what is your definition of "utilitarianism"?

comment by ArisKatsaris · 2012-12-28T01:06:48.004Z · LW(p) · GW(p)

But at least, you should know that most people on LW disagree with you on this intuition.

[citation needed]

Replies from: Solvent
comment by Solvent · 2012-12-28T04:27:58.826Z · LW(p) · GW(p)

I edited my comment to include a tiny bit more evidence.

comment by buybuydandavis · 2012-12-27T23:14:48.600Z · LW(p) · GW(p)

Thank you, that's a good start.

Yes, I had concluded that EY was anti retribution. Hadn't concluded that he had carried the day on that point.

Moral responsibility - the idea that choosing evil causes you to deserve pain - is fundamentally a human idea that we've all adopted for convenience's sake. (23).

I don't think vengeance and retribution are "ideas" that people had to come up with - they're central moral motivations. "A social preference for which we punish violators" gets at 80% of what morality is about.

Some may disagree about the intuition, but I'd note that even EY had to "renounce" all hatred, which implies to me that he had the impulse for hatred (retribution, in this context) in the first place.

This seems like it has the makings of an interesting poll question.

Replies from: Solvent
comment by Solvent · 2012-12-28T04:27:19.325Z · LW(p) · GW(p)

This seems like it has the makings of an interesting poll question.

I agree. Let's do that. You're consequentialist, right?

I'd phrase my opinion as "I have terminal value for people not suffering, including people who have done something wrong. I acknowledge that sometimes causing suffering might have instrumental value, such as imprisonment for crimes."

How do you phrase yours? If I were to guess, it would be "I have a terminal value which says that people who have caused suffering should suffer themselves."

I'll make a Discussion post about this after I get your refinement of the question?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-12-28T13:18:25.078Z · LW(p) · GW(p)

I'd suggest the following two phrasings:

  • I place terminal value on retribution (inflicting suffering on the causers of suffering), at least for some of the most egregious cases.
  • I do not place terminal value on retribution, not even for the most egregious cases (e.g. mass murderers). I acknowledge that sometimes it may have instrumental value.

Perhaps also add a third choice:

  • I think I place terminal value on retribution, but I would prefer it if I could self-modify so that I wouldn't.

comment by Oscar_Cunningham · 2012-12-27T21:38:07.016Z · LW(p) · GW(p)

I would, all else being equal. Suffering is bad.

comment by A1987dM (army1987) · 2012-12-27T14:12:45.599Z · LW(p) · GW(p)

Every act of lying is morally prohibited / This act would be a lie // This act is morally prohibited.

That also applies to literary criticism: Wulky Wilkinsen shows colonial alienation / Authors who show colonial alienation are post-utopians // Wulky Wilkinsen is a post-utopian.

comment by Academian · 2012-12-27T06:08:48.644Z · LW(p) · GW(p)

I like this post, and here is some evidence supporting your fear that some people may over-use the morality=logic metaphor, i.e., copy too many anticipations about how logical reasoning works over to their anticipations about how moral reasoning works... The comment is already downvoted to -2, suggesting the community realizes this (please don't downvote it further so as to over-punish the author), but the fact that someone made it is evidence that your point here is a valuable one.

http://lesswrong.com/lw/g0e/narrative_selfimage_and_selfcommunication/83ag

comment by nshepperd · 2012-12-27T02:37:27.071Z · LW(p) · GW(p)

The practice of moral philosophy doesn't much resemble the practice of mathematics. Mainly because in moral philosophy we don't know exactly what we're talking about when we talk about morality. In mathematics, particularly since the 20th century, we can eventually precisely specify what we mean by a mathematical object, in terms of sets.

"Morality is logic" means that when we talk about morality we are talking about a mathematical object. The fact that the only place in our mind the reference to this object is stored is our intuition is what makes moral philosophy so difficult and non-logicy. In practice you can't write down a complete syntactic description of morality, so in general¹ neither can you write syntactic proofs of theorems about morality. This is not to say that such descriptions or proofs do not exist!

In practice moral philosophy proceeds by a kind of probabilistic reasoning, which might be analogized to the thinking that leads one to conjecture that P≠NP, except with even less rigor. I'd expect that things like the order of moral arguments mattering come down to framing effects and other biases which are always involved regardless of the subject, but don't show up in mathematics so much because proofs leave little wiggle room.

¹ Of course, you may be able to write proofs that only use simple properties that you can be fairly sure hold of morality without knowing its full description, but such properties are usually either quite boring, not widely agreed upon, or too specific to lead to interesting proofs. E.g. "It's wrong to kill someone without their permission when there's nothing to be gained by it."

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-12-27T03:20:17.375Z · LW(p) · GW(p)

"Morality is logic" means that when we talk about morality we are talking about a mathematical object.

How does one go about defining this mathematical object, in principle? Suppose you were a superintelligence and could surmount any kind of technical difficulty, and you wanted to define a human's morality precisely as a mathematical object, how would you do it?

Replies from: nshepperd
comment by nshepperd · 2012-12-27T06:33:52.888Z · LW(p) · GW(p)

I don't really know the answer to that question.

In principle, you start with a human brain, and extract from it somehow a description of what it means when it says "morality". Presumably involving some kind of analysis of what would make the human say "that's good!" or "that's bad!", and/or of what computational processes inside the brain are involved in deciding whether to say "good" or "bad". The output is, in theory, a function mapping things to how much they match "good" or "bad" in your human's language.

The 'simple' solution, of just simulating what your human would say after being exposed to every possible moral argument, runs into trouble with what exactly constitutes an argument—if a UFAI can hack your brain into doing terrible things just by talking to you, clearly not all verbal engagement can be allowed—and also more mundane issues like our simulated human going insane from all this talking.
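To illustrate the shape of the target object (not the extraction itself), here is a toy Python sketch in which a deliberately fake "brain" stands in for the hard, unsolved part; every name and judgment below is an invented placeholder:

```python
# The object being described: a function from situations to a goodness score,
# induced by what the brain model would call "good" or "bad".
def fake_brain_judgment(situation: str) -> str:
    # stand-in for the real (unsolved) analysis of a human brain
    return "good" if "help" in situation else "bad"

def extract_morality(brain):
    # return the scoring function defined by the brain's verbal behaviour
    return lambda situation: 1.0 if brain(situation) == "good" else 0.0

morality = extract_morality(fake_brain_judgment)
print(morality("help a stranger"))   # 1.0
print(morality("kick a puppy"))      # 0.0
```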

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-12-27T11:09:44.340Z · LW(p) · GW(p)

Suppose the "simple" solution doesn't have the problems you mention. Somehow we get our hands on a human that doesn't have security holes and can't go insane. I still don't think it works.

Let's say you are trying to do some probabilistic reasoning about the mathematical object "foobar" and the definition of it you're given is "foobar is what X would say about 'foobar' after being exposed to every possible argument concerning 'foobar'", where X is an algorithmic description of yourself. Well, as soon as you realize that X is actually a simulation of you, you can conclude that you can say anything about 'foobar' and be right. So why bother doing any more probabilistic reasoning? Just say anything, or nothing. What kind of probabilistic reasoning can you do beyond that, even if you wanted to?
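A toy Python rendering of that collapse (all names are hypothetical): once "foobar" is defined as whatever X outputs about "foobar", any output X settles on is endorsed by the definition, so the definition constrains nothing.

```python
def X(answer):
    # the simulated subject is free to output anything at all
    return answer

def foobar_by_definition(x_output):
    # "foobar is what X would say about 'foobar'"
    return x_output

for claim in ({1, 2, 3}, {4, 5, 6}, "anything whatsoever"):
    assert foobar_by_definition(X(claim)) == claim   # every claim checks out
print("the circular definition accepts every answer")
```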

Replies from: nshepperd
comment by nshepperd · 2012-12-27T12:41:03.358Z · LW(p) · GW(p)

I think you're collapsing some levels here, but it's making my head hurt to think about it, having the definition-deriver and the subject be the same person.

Making this concrete: let 'foobar' refer to the set {1, 2, 3} in a shared language used by us and our subject, Alice. Alice would agree that it is true that "foobar = what X would say about 'foobar' after being exposed to every possible argument concerning 'foobar'" where X is some algorithmic description of Alice. She would say something like "foobar = {1, 2, 3}, X would say {1, 2, 3}, {1, 2, 3} = {1, 2, 3} so this all checks out."

Clearly then, any procedure that correctly determines what X would say about 'foobar' should result in the correct definition of foobar, namely {1, 2, 3}. This is what theoretically lets our "simple" solution work.

However, Alice would not agree that "what X would say about 'foobar' after being exposed to every possible argument concerning 'foobar'" is a correct definition of 'foobar'. The issue is that this definition has the wrong properties when we consider counterfactuals concerning X. It is in fact the case that foobar is {1, 2, 3}, and further that 'foobar' means {1, 2, 3} in our current language, as stipulated at the beginning of this thought experiment. If-counterfactually X would say '{4, 5, 6}', foobar is still {1, 2, 3}, because what we mean by 'foobar' is {1, 2, 3} and {1, 2, 3} is {1, 2, 3} regardless of what X says.

Having written that, I now think I can return to your question. The answer is that firstly, by replacing the true definition "foobar = {1, 2, 3}" with "foobar is what X would say about 'foobar' after being exposed to every possible argument concerning 'foobar'" in the subject's mind, you have just deleted the only reference to foobar that actually exists in the thought experiment. The subject has to reason about 'foobar' using their built in definition, since that is the only thing that actually points directly to the target object.

Secondly, as described above "foobar is what X would say about 'foobar' after being exposed to every possible argument concerning 'foobar'" is an inaccurate definition of foobar when considering counterfactuals concerning what X would say about foobar. Which is exactly what you are doing when reasoning that "if-counterfactually I say {4, 5, 6} about foobar, then what X would say about 'foobar' is {4, 5, 6}, so {4, 5, 6} is correct."

Which is to say that, analogising, the content of our subject's head is a pointer (in the programming sense) to the object itself, while "what X would say about 'foobar' after being exposed to every possible argument concerning 'foobar'" is a pointer to the first pointer. You can dereference it, and get the right answer, but you can't just substitute it in for the first pointer. That gives you nothing but a pointer referring to itself.
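A minimal Python sketch of that indirection, with dicts standing in for pointers (the names are illustrative):

```python
foobar_object = {1, 2, 3}              # the object itself
intuition = {"ref": foobar_object}     # the subject's head: a direct reference
simulated_report = {"ref": intuition}  # "what X would say": a reference to the reference

# Dereferencing twice recovers the object...
assert simulated_report["ref"]["ref"] == {1, 2, 3}

# ...but substituting the second reference in for the first just closes a loop:
intuition["ref"] = simulated_report
# intuition -> simulated_report -> intuition -> ...  the object is no longer reachable
```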

ETA: Dear god, this turned into a long post. Sorry! I don't think I can shorten it without making it worse though.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-12-27T17:45:43.762Z · LW(p) · GW(p)

Right, so my point is that if your theory (that moral reasoning is probabilistic reasoning about some mathematical object) is to be correct, we need a definition of morality as a mathematical object which isn't "what X says after considering all possible moral arguments". So what could it be then? What definition Y can we give, such that it makes sense to say "when we reason about morality, we are really doing probabilistic reasoning about the mathematical object Y"?

Secondly, until we have a candidate definition Y at hand, we can't show that moral reasoning really does correspond to probabilistic logical reasoning about Y. (And we'd also have to first understand what "probabilistic logical reasoning" is.) So, at this point, how can we be confident that moral reasoning does correspond to probabilistic logical reasoning about anything mathematical, and isn't just some sort of random walk or some sort of reasoning that's different from probabilistic logical reasoning?

Replies from: nshepperd, HalMorris
comment by nshepperd · 2012-12-29T04:01:04.079Z · LW(p) · GW(p)

Right, so my point is that if your theory (that moral reasoning is probabilistic reasoning about some mathematical object) is to be correct, we need a definition of morality as a mathematical object which isn't "what X says after considering all possible moral arguments". So what could it be then? What definition Y can we give, such that it makes sense to say "when we reason about morality, we are really doing probabilistic reasoning about the mathematical object Y"?

Unfortunately, I doubt I can give you a short direct definition of morality. However, if such a mathematical object exists, "what X says after considering all possible moral arguments" should be enough to pin it down (disregarding the caveats to do with our subject going insane, etc.).

Secondly, until we have a candidate definition Y at hand, we can't show that moral reasoning really does correspond to probabilistic logical reasoning about Y. (And we'd also have to first understand what "probabilistic logical reasoning" is.) So, at this point, how can we be confident that moral reasoning does correspond to probabilistic logical reasoning about anything mathematical, and isn't just some sort of random walk or some sort of reasoning that's different from probabilistic logical reasoning?

Well, I think it safe to assume I mean something by moral talk, otherwise I wouldn't care so much about whether things are right or wrong. I must be talking about something, because that something is wired into my decision system. And I presume this something is mathematical, because (assuming I mean something by "P is good") you can take the set of all good things, and this set is the same in all counterfactuals. Roughly speaking.

It is, of course, possible that moral reasoning isn't actually any kind of valid reasoning, but does amount to a "random walk" of some kind, where considering an argument permanently changes your intuition in some nondeterministic way so that after hearing the argument you're not even talking about the same thing you were before hearing it. Which is worrying.

Also it's possible that moral talk in particular is mostly signalling intended to disguise our true values which are very similar but more selfish. But that doesn't make a lot of difference since you can still cash out your values as a mathematical object of some sort.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-12-29T05:05:28.903Z · LW(p) · GW(p)

It is, of course, possible that moral reasoning isn't actually any kind of valid reasoning, but does amount to a "random walk" of some kind, where considering an argument permanently changes your intuition in some nondeterministic way so that after hearing the argument you're not even talking about the same thing you were before hearing it. Which is worrying.

Yes, exactly. This seems to me pretty likely to be the case for humans. Even if it's actually not the case, nobody has done the work to rule it out yet (has anyone even written a post making any kind of argument that it's not the case?), so how do we know that it's not the case? Doesn't it seem to you that we might be doing some motivated cognition in order to jump to a comforting conclusion?

comment by HalMorris · 2012-12-28T02:29:39.659Z · LW(p) · GW(p)

"what X says after considering all possible moral arguments"

I know you're not arguing for this but I can't help noting the discrepancy between the simplicity of the phrase "all possible moral arguments", and what it would mean if it can be defined at all.

But then many things are "easier said than done".

comment by Armok_GoB · 2013-01-05T18:25:29.956Z · LW(p) · GW(p)

I think the term you are looking for is "formal" or "an algebra", not "logic".

comment by chaosmage · 2012-12-27T01:03:56.747Z · LW(p) · GW(p)

You're mischaracterizing the quote that your post replies to. EY claims that he is attempting to comprehend morality as a logical, not a physical thing, and he's trying to convince readers to do the same. You're evidently thinking of morality as a physical thing, something essentially derived from the observation of brains. You're restating the position his post responds to, without strengthening it.

Replies from: timtyler
comment by timtyler · 2012-12-29T12:51:18.982Z · LW(p) · GW(p)

The argument in that post seems incoherent to me. In the conventional natural sciences, what is moral is the subject matter of biology. This employs game theory and evolutionary theory (i.e. logic), but also considers the laws of physics and the local state of the universe to explain existing moral systems.

For instance, consider the question of whether it is wrong to drive on the left-hand side of the road. That isn't logic; it depends on the local state of the universe. Two advanced superintelligences which had evolved independently could easily find themselves in disagreement over such issues.

This is an example of spontaneous symmetry breaking. It is one of the factors which explains how arbitrarily-advanced agents can still disagree on what the right thing to do is.
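A minimal sketch of the coordination-game dynamics behind the driving example (population size, update rule, and number of rounds are all invented for illustration): both conventions are equally good, and which one a population locks into depends only on its contingent starting point.

```python
import random

def settled_convention(n_agents=101, rounds=2000, seed=None):
    """Best-response dynamics in a pure coordination game: each updater
    simply adopts whichever convention the current majority follows."""
    rng = random.Random(seed)
    pop = [rng.choice(["left", "right"]) for _ in range(n_agents)]
    for _ in range(rounds):
        i = rng.randrange(n_agents)
        pop[i] = max(("left", "right"), key=lambda side: sum(p == side for p in pop))
    return max(("left", "right"), key=pop.count)

# Independently "evolved" populations can end up with opposite conventions,
# depending only on their random initial conditions.
for seed in range(4):
    print(seed, settled_convention(seed=seed))
```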