The Maze of Moral Relativism

post by Stabilizer · 2017-01-27T19:29:56.813Z · LW · GW · Legacy · 35 comments

This is a link post for https://opinionator.blogs.nytimes.com/2011/07/24/the-maze-of-moral-relativism/


Comments sorted by top scores.

comment by satt · 2017-02-07T23:06:54.240Z · LW(p) · GW(p)

Like BiasedBayes, I read this article as putting forward a false dichotomy. Unlike BiasedBayes, I don't think that "wellbeing" or "science" have much to do with why I'm unconvinced by the article.

To me the third alternative to the dichotomy is, unsurprisingly, my own view: moral facts don't exist, and "right" & "wrong" are shorthand for behaviour of which I strongly approve or disapprove. My approvals & disapprovals can't be said to be moral facts, because they depend solely on my state of mind, but I'm nonetheless not obliged to become a nihilist because my approvals & disapprovals carry normative import to me, so my uses of "right" & "wrong" are not just descriptive as far as I'm concerned.

I expect Boghossian has a rebuttal, but I can't infer from the article what it would be. I can't imagine a conversation between the two of us that doesn't go in circles or leave me with the last word.


Me: Moral facts don't real. And yet, no logic compels me to be a nihilist. Checkmate, perfessor!

Imagined extrapolation of Paul Boghossian: But if there are no moral facts, any uses of ideas like "right", "wrong", or "should" just become descriptions of what someone thinks or feels. This leaves you bereft of normative vocabulary and hence a nihilist.

Me: Uses of "right", "wrong", or "should" are descriptions of how someone thinks or feels, at least when I use them. Specifically, they're descriptions of how I think or feel. But they aren't just that.

IEPB: So what's that extra normative component? Where does it come from?

Me: Well, it comes from me. I mentally promote certain actions (& non-actions) to the level of obligations or duties, or at least things which should be encouraged, whether or not I (or others) actually fulfil those obligations or duties.

IEPB: This is reminiscent of the example I gave in my article of etiquette, which derives its normative force from the hidden moral fact (absolute norm) that "we ought not, other things being equal, offend our hosts".

Me: If that analogy works, there must be some moral fact hidden in my mental-promotion-to-duty conception of right & wrong. Suppose for a moment that that's so. Start with the observation that my conception of right is basically "that is the right thing to do, in that it is something I approve of so strongly that I regard it as an obligation, or something approaching an obligation, binding on me/you". Digging into that, what's the underlying "moral fact" there? Presumably it's something like "we ought to do things that satt strongly approves of, and not do things that satt strongly disapproves of". But that's obviously not a moral fact, because it's obviously partial and dependent on one specific person's state of mind.

IEPB: Which means it's not normative, it's just a description of someone's mind. So you have no basis for normative judgements. You're a nihilist in denial.

Me: If I'm incapable of making normative judgements, how do you explain my judgement that you shouldn't make mediocre philosophical arguments, because I strongly disapprove of them?

IEPB: Har har. That's not a normative judgement. That's just a description of your state of mind.

Me: Not "just"! It's an assertion that you're obliged to not make mediocre philosophical arguments!

IEPB: Obliged in what way?

Me: Obliged in that I'm telling you you're obliged!

IEPB: That's not an obligation, that's just you expressing your preferences.

Me: No, because there's an explicit extra component to what I'm expressing. Your "just"ing would be correct if I were saying, for example, that I don't like chocolate. But I'm not merely passively observing that I don't approve of mediocre philosophical arguments. I'm telling you to desist from making them.

IEPB: I don't disagree that you're telling me that. Nor would any rational listener to this conversation. But "satt is telling me to desist" is "just a descriptive remark that carries no normative import whatsoever", quoting my article, which you did read, right?

Me: As a matter of fact I did. But like I say, I'm not (just) making the bland descriptive claim which anyone with ears would agree with. I'm carrying out the first-order action of commanding you, in the earnest hope that you will listen & obey, to refrain from an action.

IEPB: Big whoop. Anybody can give an order.

Me: That you're unmoved by my order doesn't make it any less normative. Compare a realm where we both agree that there are facts: empirical investigation of reality. If I told you that gravity made things fall downwards, that would still have force (lol) as a positive, empirical claim, whether you agreed or not. Likewise, when I tell you to knock off some behaviour, that still has force as a normative claim, whether you agree or not.

IEPB: Nuh uh. The two cases are disanalogous. In the gravity case I can only disagree with you on pain of being objectively incorrect. In the knock-it-off case I can disagree with you however I please.

Me: No, you disagree on pain of being quasi-objectively wrong, according to my standard.

IEPB: Oh, come on. Quasi-objectively? By your standard? Really?

Me: Yes; any observer would agree that you'd violated my standard.

IEPB: But that's purely a descriptive claim!

Me: That's the descriptive component, and as a descriptive claim it's objectively correct. The normative claim is that your disagreement and violation mean you're in the wrong, as defined by my disapproval of your behaviour. And that normative claim is subjectively correct.

IEPB:

And at this point I have to break off this made-up conversation, because I don't see what new rebuttal Boghossian could/would give. Here endeth the philosopher fanfiction.


Edit, 4 days later: correct "normative important" misquotation to "normative import".

Replies from: BiasedBayes, Stabilizer
comment by BiasedBayes · 2017-02-08T12:17:41.170Z · LW(p) · GW(p)

I'm curious about your view. Do you think that we can't say it's a moral fact that it's better (1) to feed a newborn baby with milk from its mother and soothe it tenderly so it stops crying than (2) to chop its fingers off one by one slowly with a dull blade and then leave it bleeding? And this moral evaluation depends on your state of mind?

Replies from: satt
comment by satt · 2017-02-11T15:23:38.620Z · LW(p) · GW(p)

Do you think that we can't say it's a moral fact that [...]

Correct, I would call that a category error.

And this moral evaluation depends on your state of mind?

One's view of the wrongness of torturing a newborn versus soothing it depends on one's state of mind, yes.

If I were confronted with someone who insisted that "torturing a newborn instead of soothing it is good, actually", I could say that was "wrong" in the sense of evil, but there is no evidence I could present which, in itself, would show it to be "wrong" in the sense of incorrect.

comment by Stabilizer · 2017-02-08T00:20:03.242Z · LW(p) · GW(p)

Actually, I don't know if you and Boghossian really disagree here. I think Boghossian is trying to argue that your normative preferences arise from your opinions about what the moral facts are. So I think he'd say:

IEPB: "People ought to do X" is your preference because you are assuming "People ought to do X" is a moral fact. It's a different issue whether your assumption is true or false, or justified or unjustified, but the assumption is being made nevertheless.

For example, when you exhort IEPB to not make mediocre philosophy arguments, and say that that's your preference, it's because you are assuming that the claim, "philosophy professors ought not to make mediocre philosophy arguments", is, in fact, true.

Replies from: satt
comment by satt · 2017-02-11T15:14:52.314Z · LW(p) · GW(p)

IEPB: "People ought to do X" is your preference because you are assuming "People ought to do X" is a moral fact. It's a different issue whether your assumption is true or false, or justified or unjustified, but the assumption is being made nevertheless.

If my mental model of moral philosophers is correct, this contravenes how moral philosophers usually define/use the phrase "moral fact". Moral facts are supposed to (somehow) inhere in the outside world in a mind-independent way, so the origin of my "People ought to do X" assumption does matter. Because my ultimate justification of such an assumption would be my own preferences (whether or not alloyed with empirical claims about the outside world), I couldn't legitimately call "People ought to do X" a moral fact, as "moral fact" is typically understood.

Consequently I think this line of rebuttal would only be open to Boghossian if he had an idiosyncratic definition of "moral fact". But it is possible that our disagreement reduces to a disagreement over how to define "moral facts".

For example, when you exhort IEPB to not make mediocre philosophy arguments, and say that that's your preference, it's because you are assuming that the claim, "philosophy professors ought not to make mediocre philosophy arguments", is, in fact, true.

Introspecting, this feels like a reversal of causality. My own internal perception is that the preference motivates the claim rather than vice versa. (Not that introspection is necessarily reliable evidence here!)

Replies from: Stabilizer
comment by Stabilizer · 2017-02-11T19:59:18.786Z · LW(p) · GW(p)

I agree that any disagreement might come down to what we mean by moral claims.

I don't know Boghossian's own particular commitments, but baseline moral realism is a fairly weak claim without any metaphysics of where these facts come from. I quote from the Stanford Encyclopedia:

Moral realism is not a particular substantive moral view nor does it carry a distinctive metaphysical commitment over and above the commitment that comes with thinking moral claims can be true or false and some are true.

A simple interpretation that I can think of: when you say that you prefer that people do X, typically, you also prefer that other people prefer that people do X. This, you could take as sufficient to say "People ought to do X". (This has the flavor of the Kantian categorical imperative. Essentially, I'm proposing a sufficient condition for something to be a moral claim, namely, that it be desired to be universalized. But I don't want to claim that this is a necessary condition.)

At any rate, whether the above definition stands or falls, you can see that it doesn't have any metaphysical commitment to some free-floating, human-independent (to be distinguished from mind-independent) facts embedded in the fabric of the universe. Hopefully, there are other ways of parsing moral claims so that the metaphysics isn't too demanding.

Replies from: satt
comment by satt · 2017-02-17T02:08:05.460Z · LW(p) · GW(p)

I might've been influenced too much by people speaking to me (in face-to-face conversation) as if moral realism entails objectivity of moral facts, and maybe also influenced too much by the definitions I've seen online. Wikipedia's "Moral realism" article starts outright with

Moral realism (also ethical realism or moral Platonism) is the position that ethical sentences express propositions that refer to objective features of the world (that is, features independent of subjective opinion), some of which may be true to the extent that they report those features accurately. This makes moral realism a non-nihilist form of ethical cognitivism with an ontological orientation, standing in opposition to all forms of moral anti-realism and moral skepticism, including ethical subjectivism (which denies that moral propositions refer to objective facts),

and the IEP's article on MR has an entire section, "Moral objectivity", the beginning of which seems to drive at moral facts and MR relying on a basis beyond (human) mind states. The intro concludes,

Neither subjectivists nor relativists are obliged to deny that there is literal moral knowledge. Of course, according to them, moral truths imply truths about human psychology. Moral realists must maintain that moral truths —and hence moral knowledge—do not depend on facts about our desires and emotions for their truth.

At the same time, the SEP does seem to offer a less narrow definition of MR which allows for moral facts to have a non-objective basis.

I wonder whether I've anchored too much on old-fashioned, "classic" MR which does require moral facts to have objective status (whether that's a mind-independent or human-independent status), while more recent moral realist philosophies are content to relax this constraint. Maybe I'm a moral realist to 21st century philosophers and a moral irrealist to 20th century philosophers!

Replies from: bogus
comment by bogus · 2017-02-17T12:11:12.271Z · LW(p) · GW(p)

"Moral facts" (i.e. _facts_ about _morality_) are overall neither objective nor subjective; they're intersubjective, in that they are shared at least throughout a given community and moral code, and to some extent they're even shared among most human communities. (Somewhat paradoxically, when talking about the most widely-shared values - precisely those values that are closest to being 'objective', if only in an everyday sense! - we don't even use the term "morals" or "morality" but instead prefer to talk about "ethics", which in a stricter sense is rather the subject of how different facets of morality might interrelate and balance each other, what it even means to argue about morality, and the practical implications of these things for everyday life.)

Whether "moral facts" are human-independent is an interesting question in itself. I think one could definitely argue that a number of basic moral facts that most human communities share (such as the value of 'protection' and 'thriving') are in fact also shared by many social animals. If true, this would clearly imply a human-independent status for these moral facts. Perhaps more importantly, it would also point to the need to attribute some sort of moral relevance and personhood at least to the most 'highly-developed' social animals, such as the great apes (hominids) and perhaps even dolphins and whales.

comment by BiasedBayes · 2017-01-31T17:10:33.982Z · LW(p) · GW(p)

Confusing article. "Either total relativism or absolute moral facts". How about the following: descriptive morality is based on rationalizations of emotional system 1 responses, but if morality has anything to do with human wellbeing, science can inform what the right normative moral code is.

Replies from: TheAncientGeek, BiasedBayes, Stabilizer
comment by TheAncientGeek · 2017-02-03T13:24:24.259Z · LW(p) · GW(p)

Science can inform, but hardly solve. How much weighting do you put on the wellbeing of the folks at home versus people in far off lands?

Replies from: BiasedBayes
comment by BiasedBayes · 2017-02-06T20:16:27.659Z · LW(p) · GW(p)

Depends what you mean by "solving". Yeah, this is exactly what the data shows, and it's called ingroup-outgroup bias.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2017-02-07T13:39:38.099Z · LW(p) · GW(p)

You seem to have read my question "how much weighting do you put on the wellbeing of the folks at home versus people in far off lands?" as the statement "you put more weight on the folks at home". But that is descriptive, not normative: it tells you what you do do, not what you should do. Solving ethics involves recognising that the normative question is different to the descriptive question, and solving the normative question.

Replies from: BiasedBayes
comment by BiasedBayes · 2017-02-07T19:46:01.522Z · LW(p) · GW(p)

Thanks for clarifying! In that case that's way too general a normative question to give a real, purposeful answer to. (For example, "1,089% more" does not really make any sense.)

Replies from: TheAncientGeek
comment by TheAncientGeek · 2017-02-08T13:10:45.021Z · LW(p) · GW(p)

So are we believing that science is bad, because it can't answer broad normative questions, or that broad normative questions are bad because science can't answer them?

Replies from: BiasedBayes
comment by BiasedBayes · 2017-02-08T20:01:53.727Z · LW(p) · GW(p)

That is a huge generalisation from one simple general question. Science can inform general normative questions, and there's nothing wrong with general normative questions, in general ;). The problem is that this specific question here is pretty meaningless. In general, one should put equal weight on everybody's wellbeing. It's easy to poke holes in this answer, but the problem is still your question, which begs for general answers like this.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2017-02-09T10:58:43.044Z · LW(p) · GW(p)

I don't see why the weighting/universality issue is meaningless, particularly in view of the fact that a lot of object-level ethics depends on it.

I also don't see why equal weighting is the right answer, mainly because you did not provide any argument.

Replies from: BiasedBayes
comment by BiasedBayes · 2017-02-10T11:27:13.021Z · LW(p) · GW(p)

It's not meaningless in general; it's just your question, "How much weighting do you put on the wellbeing of the folks at home versus people in far off lands", that is meaningless to me, or at least not interesting. What do you mean by wellbeing? Who are these people (all people abroad)? What's "far away"? What do you mean by "put weight on wellbeing"? What's "home"?

Exactly equal weighting (whatever that means), because we are the same species with the same kind of nervous system and the same factors affecting our wellbeing. If you are going to specify your question further, my answer will get more nuanced too.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2017-02-25T11:20:25.016Z · LW(p) · GW(p)

It's not meaningless in general; it's just your question, "How much weighting do you put on the wellbeing of the folks at home versus people in far off lands", that is meaningless to me, or at least not interesting. What do you mean by wellbeing? Who are these people (all people abroad)? What's "far away"? What do you mean by "put weight on wellbeing"? What's "home"?

You have already put forward an answer to the question, namely that the wellbeing of everybody near and far counts equally, so your protestation that you don't understand the terms of the question is not convincing.

The terms are neither completely meaningless nor exactly well defined. You are engaging in a form of Selective Demand for Rigor, where you round off the terms to meaningless or perfectly OK depending on who is using them.

Exactly equal weighting (whatever that means), because we are the same species with the same kind of nervous system and the same factors affecting our wellbeing.

That's a series of facts. How does it add up to a value?

Replies from: BiasedBayes
comment by BiasedBayes · 2017-02-26T10:42:42.134Z · LW(p) · GW(p)

I have not stated that I do not understand the terms. This should be very clear. I have stated that the question is not interesting to me because it's too general. BUT because you keep insisting, I still gave you an answer, while very clearly stating that if you would like to be more specific, I could give you an even better answer.

How can it add up to a value? It can provide crucial information in meeting those values. A hard distinction between facts and values is ill-posed. What is a fact-free value? On the other hand, even our senses and cognition have a priori concepts that affect the process of observing and processing so-called value-free facts. Welcome to 2017, David Hume.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2017-02-26T11:32:44.931Z · LW(p) · GW(p)

This should be very clear. I have stated that the question is not interesting to me because it's too general.

But to get to an "interesting", object-level ethical proposition, you have to solve the general questions first. And you have already taken a stand on one such general question, namely universalism versus tribalism.

It can provide crucial information in meeting those values.

In conjunction with something else that hasn't been specified? Of course factual information can contribute to evaluative claims; that's not the hard version of the fact-value problem.

What is a fact-free value?

For instance, a decision-theoretic weighting on the desirability of a future outcome.

Replies from: BiasedBayes
comment by BiasedBayes · 2017-02-26T13:51:46.547Z · LW(p) · GW(p)

You are partly trying to carve nature into clear categories, and it does not care about your intent in doing so. There are general answers to general questions, but when you get more specific, your nice, clean, clear general answer can become problematic. I'm in no way forced to keep defending universalism if the question is more specific. Good luck solving anything with that kind of conceptual musing from the armchair.

For instance, a decision-theoretic weighting on the desirability of a future outcome.

And the utilities in that decision-theoretic weighting are affected in the first place by the actual facts of how our nervous system and cognition evolved to appreciate these specific values, and by how well our beliefs are in line with the facts of reality; hopefully they can also be updated and criticised based on facts.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2017-02-27T11:13:42.988Z · LW(p) · GW(p)

You are partly trying to carve nature into clear categories, and it does not care about your intent in doing so.

Or yours? You would have been making a consistent case if, throughout this discussion, you had maintained some kind of error theory about ethics, some claim along the lines that the whole subject is imponderable nonsense. Instead you have maintained the inconsistent claim that some completely general set of considerations about conceptual clarity undermines everyone else's case, but not yours.

And the utilities in that decision-theoretic weighting are affected in the first place by the actual facts of how our nervous system and cognition evolved to appreciate these specific values

So there is such a thing as value, and the answer to the fact-value dilemma is to wholeheartedly embrace the naturalistic fallacy?

Replies from: BiasedBayes
comment by BiasedBayes · 2017-02-27T18:22:13.817Z · LW(p) · GW(p)

Thanks for your answer. My 20 cents:

There are no conditions that are both necessary and sufficient for a perfect foundational general ethical theory. I would be interested in hearing your arguments if you think otherwise. Give me an example to refute this. This does not mean that there can't be general guidelines. Contrary to your post, there is a huge falsifiability demand on my proposition, because scientific enquiry can and has to inform what the right (or better) moral answer is. Which leads nicely to G.E. Moore's naturalistic fallacy.

So there is such a thing as value, and the answer to the fact-value dilemma is to wholeheartedly embrace the naturalistic fallacy?

I'm not completely sure that you understand what the naturalistic fallacy is, since you even suggest it here. There are many naturalistic approaches to ethics that do not fall into Moore's naturalistic fallacy. What these approaches have in common is the argument that science is relevant for ethics, without being an attempt to start from foundational first moral principles.

Moore's naturalistic fallacy is aimed at arguments seeking a foundation for ethics, not at criticizing ethicists who do not provide such a foundation. I'm not trying to derive foundational ethical principles here, if that is not clear already. This is an approach where normative inquiry is aimed at tangible problem solving, and where a moral problem is never necessarily completely solved.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2017-02-28T10:34:55.273Z · LW(p) · GW(p)

There are no conditions that are both necessary and sufficient for a perfect foundational general ethical theory.

True, but irrelevant. You can't start with the premise that all ethics is imperfect and then immediately conclude your imperfect theory is less imperfect than everyone else's.

What these approaches have in common is the argument that science is relevant for ethics, without being an attempt to start from foundational first moral principles. And who even said that science is irrelevant?

Pointing out that science is relevant to ethics, without saying anything else, doesn't buy you anything... not even a theory, let alone a correct one.

Replies from: BiasedBayes
comment by BiasedBayes · 2017-02-28T12:56:12.897Z · LW(p) · GW(p)

Since we agree that this is true, yet you still keep insisting that "you have to solve (funny to use this word because this task is still a work in progress after thousands of years) the general questions first to get to 'interesting' object-level ethical propositions", as you wrote in the beginning, please put forward your answers to these general questions. I'm begging you to give your foundational ethical arguments.

After this we could really proceed to compare our suggestions, and other people could perhaps conclude whose proposition is more or less imperfect. I'm very happy to do this and give more structured arguments for how science is relevant when responding more specifically to your points.

So far I have not concluded that "my theory is less perfect than everybody else's". Neither you nor anyone else has even stated a hint of a theory! What I have said is that the question "how much weighting do you put on the wellbeing of the folks at home versus people in far off lands" is not interesting to me.

I never wrote the last sentence of your second quote.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2017-03-11T18:38:53.440Z · LW(p) · GW(p)

So far I have not concluded that "my theory is less [im]perfect than everybody else's".

To state P is to imply "P is true". If you didn't think your theory was better, why state it?

Neither you nor anyone else has even stated a hint of a theory!

Anyone else? Any number of people have stated theories. The Catholic Church. The Protestant churches. Left-wing politics. Right-wing politics. ...etc., etc., etc.

Since we agree that this is true, yet you still keep insisting that "you have to solve (funny to use this word because this task is still a work in progress after thousands of years) the general questions first to get to 'interesting' object-level ethical propositions", as you wrote in the beginning,

Anyone can state an object-level theory which is just the faith of their ancestors or whatever, and many do. However, you put yourself in a tricky position in doing so when your theory boils down to "science solves it", because science is supposed to be better than everything else for reasons connected to wider rationality... it's supposed to be on the high ground.

Science as arbitrary, free-floating principles isn't really science.

please put forward your answers to these general questions. I'm begging you to give your foundational ethical arguments.

Why? To support some claim about ethics? I haven't made any. To prove that it is possible?

Oh well....

We will be arguing that:

  • Ethics fulfils a role in society, and originated as a mutually beneficial way of regulating individual actions to minimise conflict and solve coordination problems ("Social Realism").

  • No spooky or supernatural entities or properties are required to explain ethics (naturalism is true).

  • There is no universally correct system of ethics. (Strong moral realism is false)

  • Multiple ethical constructions are possible...

  • ...but an ethical system can be better or worse adapted to a society's needs, meaning there are better and worse ethical systems. (Strong ethical relativism is also false... we are promoting a central or compromise position along the realism-relativism axis.)

  • The rival theories of metaethics (deontology, consequentialism, and virtue theory) are not really alternatives, but deal with different aspects of ethics.

  • Therefore the correct theory of metaethics is a kind of hybrid of deontology, consequentialism, and virtue theory, as well as a compromise between relativism and realism. (Deontology explains obligation, consequentialism grounds deontology, virtue puts ethics into practice, utilitarianism steers the future direction of society.)

PS: perhaps you are getting hung up on the idea of perfect proof or solution. When I say you have to solve the general questions, what I mean is that the closer you are to solving them, the better positioned you are to offer answers to the specific questions. In other words, you don't get to shrug off all responsibility to justify a view just because perfect justification is practically unavailable.

Replies from: BiasedBayes
comment by BiasedBayes · 2017-03-17T14:17:18.548Z · LW(p) · GW(p)

To state P is to imply "P is true". If you didn't think your theory was better, why state it?

I'm not advocating some big grand theory of ethics, but a rational approach to ethical problems given the values we have. I don't think it's needed, or even possible, to solve some big general questions first.

Anyone else? Any number of people have stated theories. The Catholic Church. The Protestant churches. Left-wing politics. Right-wing politics. ...etc., etc., etc.

In this discussion.

Anyone can state an object-level theory which is just the faith of their ancestors or whatever, and many do. However, you put yourself in a tricky position in doing so when your theory boils down to "science solves it", because science is supposed to be better than everything else for reasons connected to wider rationality... it's supposed to be on the high ground.

Irrelevant. Given the values we have, there are better and worse approaches to ethical problems. The answer is not some lip-service slogan, "science solves it", but to give an argument based on the synthesized evidence we have related to that specific ethical problem. After this, peers can criticise the arguments based on evidence.

Why? To support some claim about ethics? I haven't made any. To prove that it is possible?

Because you keep insisting that we have to solve some big ethical questions first. When asked repeatedly, you try to specify by saying "the closer you are to solving them", but that does not really mean anything. That is just mumbo-jumbo. Looking forward to the day when philosophers agree on a general ethical theory.

an ethical system can be better or worse adapted to a society's needs, meaning there are better and worse ethical systems. (Strong ethical relativism is also false... we are promoting a central or compromise position along the realism-relativism axis.)

How do you know which system is better or worse? Would you not rank and evaluate different solutions to ethical problems by actually researching the solutions using the empirical data we have and applying this thing called the scientific method?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2017-03-18T16:38:08.132Z · LW(p) · GW(p)

but a rational approach to ethical problems given the values we have. I don't think it's needed, or even possible, to solve some big general questions first.

You need to understand the meta-level questions in order to solve the right problem in the right way. Applying science to ethics unreflectively, naively, has numerous potential pitfalls. For instance, the pitfall of treating whatever intuitions evolution has given us as the last word on the subject.

The answer is not some lip-service slogan, "science solves it", but to give an argument based on the synthesized evidence we have related to that specific ethical problem.

Repeat three times before breakfast: science is value-free. You cannot put together a heap of facts and come immediately to a conclusion about what is right and wrong. You need to think about how you are bridging the is-ought gap.

Looking forward to the day when philosophers agree on a general ethical theory.

At least they see the need to. If you don't, you just end up jumping to conclusions, like the way you backed universalism without even considering an alternative.

Because you keep insisting that we have to solve some big ethical questions first.

I keep insisting that people who think you can solve ethics with science need a metaethical framework. The many people who have no ethical claims to make are not included.

How do you know which system is better or worse?

If you identify ethics as, in broad terms, fulfilling a functional role, then the answer to that question is of the same general category as "is this hammer a good hammer?". I am connecting ethical goodness to facts via instrumental goodness -- that is how I am approaching the is-ought gap.

Would you not rank and evaluate different solutions to ethical problems by actually researching the solutions using the empirical data we have and applying this thing called the scientific method?

I am not saying: don't use empiricism. I am saying: don't use it naively.

comment by BiasedBayes · 2017-02-10T13:13:42.843Z · LW(p) · GW(p)

Actually, the author's point is the choice between moral facts or a slide into nihilism. As a relativist, one can try to muddy the waters by saying that one should say "according to culture X, Y is wrong", but this is a descriptive statement about culture X carrying no normative power. I really like the article, thanks a lot. I should have read it better in the first place.

comment by Stabilizer · 2017-01-31T19:30:32.434Z · LW(p) · GW(p)

Your view is consistent with the article's. The assumption that one ought to improve the well-being of humans would be a moral fact. The fact that emotional system 1 acquired noisy and approximate knowledge of moral facts would simply mean that evolution can acquire knowledge of moral facts. This is unproblematic: compare, for example, how evolved humans can obtain knowledge of mathematical facts.

For more on this, I recommend this Stanford Encyclopedia article, especially Section 4.

Replies from: BiasedBayes
comment by BiasedBayes · 2017-01-31T20:06:50.247Z · LW(p) · GW(p)

Thanks for the reply. My point was that evolutionary system 1 thinking and morality do not necessarily even correlate. Descriptive intuitive moral decisions are highly biased and can be affected, for example, by ingroup bias and framing. Moral intuitions are there to further one's own reproduction/survival, not to make good moral and ethical decisions.

Replies from: Stabilizer
comment by Stabilizer · 2017-01-31T21:14:57.083Z · LW(p) · GW(p)

I don't think you and the article's author really have a disagreement here. Notice that the author is not trying to tell you what the correct moral facts are. He'd be happy to accept that many proposed moral facts are actually false. He is simply trying to show that whenever we make moral judgements, we are implicitly assuming the existence of some moral facts – erroneous though they might be.

Replies from: BiasedBayes
comment by BiasedBayes · 2017-02-10T11:50:41.939Z · LW(p) · GW(p)

You are right, sir. I think we might have different opinions about the way/angle to approach the issue of the right normative moral code. If I interpret it right, I would be sceptical about the author's idea "to employ our usual mix of argument, intuition and experience", in light of what we know about the limits and pitfalls of descriptive moral reasoning.

Replies from: Stabilizer
comment by Stabilizer · 2017-02-10T18:01:51.627Z · LW(p) · GW(p)

Right. Unfortunately, we don't really have any means of obtaining moral knowledge other than argument, intuition, and experience. Perhaps your point is that we should emphasize intuition less and argument+experience more.

Replies from: BiasedBayes
comment by BiasedBayes · 2017-02-10T20:21:06.295Z · LW(p) · GW(p)

Well yes, I think morality is related to the wellbeing of the organism interested in the morality in the first place. There are reasons why forcefully cutting off my friend's arm vs. their hair is morally different. The difference is the different effects that cutting the limb vs. the hair has on the nervous system of the organism being cut. It's relevant what we know scientifically about human wellbeing. We can obtain morally relevant knowledge through science.