Moral Anti-Epistemology
post by Lukas_Gloor · 2015-04-24T03:30:27.972Z · LW · GW · Legacy · 36 comments
This post is a half-baked idea that I'm posting here in order to get feedback and further brainstorming. There seem to be some interesting parallels between epistemology and ethics.
Part 1: Moral Anti-Epistemology
"Anti-Epistemology" refers to bad rules of reasoning that exist not because they are useful/truth-tracking, but because they are good at preserving people's cherished beliefs about the world. But cherished beliefs don't just concern factual questions, they also very much concern moral issues. Therefore, we should expect there to be a lot of moral anti-epistemology.
Tradition as a moral argument, tu quoque, opposition to the use of thought experiments, the noncentral fallacy, slogans like "morality is from humans for humans" – all these are instances of the same general phenomenon. This is trivial and doesn't add much to the already well-known fact that humans often rationalize, but it does add the memetic perspective: Moral rationalizations sometimes concern more than a single instance; they can affect the entire way people reason about morality. And as with religion or pseudoscience in epistemology about factual claims, there could be entire memeplexes centered around moral anti-epistemology.
A complication is that metaethics is complicated; it is unclear what exactly moral reasoning is, and whether everyone is trying to do the same thing when they engage in what they think of as moral reasoning. Labelling something "moral anti-epistemology" would suggest that there is a correct way to think about morality. Is there? As long as we always make sure to clarify what it is that we're trying to accomplish, it would seem possible to differentiate between valid and invalid arguments with regard to the specified goal. And this is where moral anti-epistemology might cause trouble.
Are there reasons to assume that certain popular ethical beliefs are a result of moral anti-epistemology? Deontology comes to mind (mostly because it's my usual suspect when it comes to odd reasoning in ethics), but what is it about deontology that relies on "faulty moral reasoning", if indeed there is something about it that does? How much of it relies on the noncentral fallacy, for instance? Is Yvain's personal opinion that "much of deontology is just an attempt to formalize and justify this fallacy" correct? The perspective of moral anti-epistemology would suggest that it is the other way around: Deontology might be the by-product of people applying the noncentral fallacy, which is done because it helps protect cherished beliefs. Which beliefs would that be? Perhaps the strongly felt intuition that "Some things are JUST WRONG", which doesn't handle fuzzy concepts/boundaries well and therefore has to be combined with a dogmatic approach. It sounds somewhat plausible, but also really speculative.
Part 2: Memetics
A lot of people are skeptical of these memetic just-so stories. They argue that the points made are either too trivial, or too speculative. I have the intuition that a memetic perspective often helps clarify things, and my thoughts about applying the concept of anti-epistemology to ethics seemed like an insight, but I have a hard time coming up with how my expectations about the world have changed because of it. What, if anything, is the value of the idea I just presented? Can I now form a prediction to test whether deontologists primarily want to formalize and justify the noncentral fallacy, or whether they instead want to justify something else by making use of the noncentral fallacy?
Anti-epistemology is a more general model of what is going on in the world than rationalizations are, so it should all reduce to rationalizations in the end. So it shouldn't be worrying that I don't magically find more stuff. Perhaps my expectations were too high and I should be content with having found a way to categorize moral rationalizations, the knowledge of which will make me slightly quicker at spotting or predicting them.
Thoughts?
36 comments
comment by afeller08 · 2015-04-24T19:41:17.090Z · LW(p) · GW(p)
Anti-epistemology is a more general model of what is going on in the world than rationalizations are,
Yes.
so it should all reduce to rationalizations in the end.
Unless there are anti-epistemologies that are not rationalizations.
The general concept of a taboo seems to me to be an example of a forceful anti-epistemology that is common in most moral ideologies and is different from rationalization. When something is tabooed, it is deemed wrong to do, wrong to discuss, and wrong to even think about. The tabooed thing is something people deem wrong because they cannot think about whether it is wrong without in the process doing something "wrong," so there is no reason to suppose that they would find something wrong with the idea if they were actually to think about it and consider whether the taboo fits with or runs against their moral sense.
A similar anti-epistemology is when people believe it is right to believe something is morally right... on up through all the meta-levels of beliefs about beliefs, so that they would already be committing the sin of doubt as soon as they begin to question whether they should believe that continuing to hold their moral beliefs is actually something they are morally obliged to do. (For ease of reference, I'll call this anti-epistemology "faith".)
One thing that rationalization, taboos, and faith have in common is that they are sufficiently general modes of thought to be applied to "is" propositions as well as "ought" propositions, and when these modes of thought are applied to objective propositions for which truth-values can be measured, they behave like anti-epistemologies. So in the absence of evidence to the contrary, we should presume that they behave as anti-epistemologies for morality, art criticism and other subjects -- even though the existence of something stable and objective to be known in these subjects is highly questionable. The modes of thought I just mentioned are themselves inherently flawed. They are not simply flawed ways of thinking about morality in particular.
If you are looking for bad patterns of thought that deal specifically with ethics, and cannot be applied to other subjects about which truth can be more objectively measured, the best objection (I can think of) by which to call those modes of thought invalid is not to try to figure out why they are anti-epistemologies, but instead to reject them for their failure to put forward any objectively measurable claims. There are many more ways for a mode of thought to go wrong than to go right, so until some thought pattern has provided evidence of being useful for making accurate judgments about something, it should not be presumed to be a useful way to think about something for which the accuracy of statements is difficult or impossible to judge.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2015-04-25T10:01:07.238Z · LW(p) · GW(p)
This is a much better explanation of the OP's point than the OP's own posting.
comment by Lumifer · 2015-04-24T15:02:47.588Z · LW(p) · GW(p)
Therefore, we should expect there to be a lot of moral anti-epistemology.
Um. Epistemology, generally speaking, assumes that there is something stable and objective to be known -- reality. Do you assume moral realism? We don't speak of epistemology (or anti-epistemology) of artistic preferences, do we?
Replies from: Lukas_Gloor
↑ comment by Lukas_Gloor · 2015-04-24T15:37:56.798Z · LW(p) · GW(p)
Did you read my third paragraph? I'm not assuming moral realism and I'm well aware of the issue you mention. I do think there is a meaningful way a person's reasoning about moral issues can be wrong, even under the assumption of anti-realism. Namely, if people use an argument of form f to argue for their desired conclusion, and yet they would reject other conclusions that follow from arguments of form f, it seems like they're deluding themselves. I'm not entirely sure the parallels to epistemology are strong enough to justify the analogy, but it seems worth thinking about.
Replies from: Lumifer
comment by OrphanWilde · 2015-05-01T19:29:47.465Z · LW(p) · GW(p)
You're making a mistake in assuming that ethical systems are intended to do what you think they're intended to do. I'm going to make some completely unsubstantiated claims; you can evaluate them for yourself.
Point 1: The ethical systems aren't designed to be followed by the people you're talking to.
Normal people operate by internal guidance through implicit and internal ethics, primarily guilt; ethics are largely and -deliberately- a rationalization game. That's not an accident. Being a functional person means being able to manipulate the ethical system as necessary, and justify the actions you would have taken anyways.
Point 2: The ethical systems aren't just there to be followed, they're there to see who follows them.
People who -do- need the ethical systems are, from a social perspective, dangerous and damaged. Ethical systems are ultimately a fallback for these kinds of people, but also a marker; "normal" people don't -need- ethics. As a rule of thumb, anybody who has strict adherence to a code of ethics is some variant of sociopath. And also as a rule of thumb, some mechanism of taking advantage of these people, who can't know any better, is going to be built into these ethical systems. It will generally take some form akin to "altruism", and is most recognizable when ethical behavior begins to be labeled as selfishness, such as variants of Buddhism where personal enlightenment is treated as selfish, or Comtean altruism.
Point 3: The ethical systems are designed to be flexible
People who have internal ethical systems -do- need something to deal with situations which have no ethical solutions, but nonetheless are necessary to solve. Ethical systems which don't permit considerable flexibility in dealing with these situations aren't useful. But because of sociopaths, who still need ethical systems to be kept in line, you can't just permit anything. This is where contradiction is useful; you can use mutually exclusive rules to justify whatever action you need to take, without worrying about any ordinary crazy person using the same contradictions to their advantage, since they're trying to follow all the rules all the time.
Point 4: Ethical systems were invented by monkeys trying to out-monkey other monkeys
Finally, ethical systems provide a framework by which people can assert or prove their superiority, thereby improving their perceived social rank (what, you think most people here are arguing with an interest in actually getting the right answer?). A good ethical framework needs to provide room for disagreement; ambiguity and contradiction are useful here, as well, especially because a large point of ethical systems is to provide a framework to justify whatever action you happened to take. This is enhanced by perceptions of the ethical framework itself, which is why mathematicians will tend to claim utilitarianism is a great ethical system, in spite of it being a perfectly ambiguous "ethical system"; it has a superficially mathematical rigor to it, so appears more scientific, and lends itself to mathematics-based arguments.
See all the monkeys correcting you on trivial issues? Raising meaningless points that contribute nothing to anybody's understanding of anything while giving them a basis to prove their intelligence in thinking about things you hadn't considered? They're just trying to elevate their social status, here measured by karma points. On a site called Less Wrong, descended from a site called Overcoming Bias, the vast majority of interactions are still ultimately driven by an unconscious bias for social status. Although I admit the quality of the monkey-games here is at times somewhat better than elsewhere.
If you want an ethical system that is actually intended to be followed as-is, try Objectivism. There may be other ethical systems designed for sociopaths, but as a rule, most ethical systems are ultimately designed to take advantage of the people who actually try to follow them, as opposed to pay lip service to them.
Replies from: TheAncientGeek, Lukas_Gloor
↑ comment by TheAncientGeek · 2015-05-02T12:01:17.431Z · LW(p) · GW(p)
Being a functional person means being able to manipulate the ethical system as necessary, and justify the actions you would have taken anyways.
One-sided. OTOH: an ethical system being functional means its being able to resist too much system-gaming by individuals. Ethical systems have a social role. Communities that can't persuade any of their members to sacrifice themselves in defence of the community don't survive.
People who -do- need the ethical systems are, from a social perspective, dangerous and damaged. Ethical systems are ultimately a fallback for these kinds of people,
What? If voluntary ethics is the fallback for the dangerous and damaged, what is the law, the criminal justice system, the involuntary stuff? (ETA: and isn't a sociopath by definition someone who can't/won't internalise social norms?)
Systems of ethical rules are needed to solve the difficult problem of making on the spot calculations, and the impossible problem of spontaneously coordinating without commitment. Which is to say they are needed by almost everybody.
If you want an ethical system that is actually intended to be followed as-is, try Objectivism.
It may have the as-is characteristic, but it is very doubtful that it qualifies as ethics, since egoism is the opposite of ethics in the eyes of >99% of people.
Replies from: OrphanWilde
↑ comment by OrphanWilde · 2015-05-02T17:50:38.411Z · LW(p) · GW(p)
Quit anthropomorphizing groups of people, criminal justice is designed for the sane, sociopathy is defined differently in psychiatry than among the general public, you never actually need to decide whether to save the world-famous violinist or the rather average doctor who occupy the different trolley tracks, and you're believing what the other monkeys tell you about what you should do when their actions are in clear contradiction.
None of that actually matters, though, because you're not actually arguing with me, you're debating points. I didn't give you anything to argue with, you're just so used to people wanting to argue that you tried to find things you could argue -with-. There's nothing there. It's all just assertions of premises you either accept or deny. Arguing about premises doesn't get you anywhere, will never get you anywhere, but you do it anyways. Why?
Do you think ethics are important enough to -defend-, even when there's nothing to be gained from defending them? Why is that?
Replies from: Lumifer, TheAncientGeek
↑ comment by TheAncientGeek · 2015-05-02T17:58:57.131Z · LW(p) · GW(p)
""
↑ comment by Lukas_Gloor · 2015-05-02T10:53:52.268Z · LW(p) · GW(p)
Good points. My entire post assumes that people are interested in figuring out what they would want to do in every conceivable decision-situation. That's what I'd call "doing ethics", but you're completely correct that many people do something very different. Now, would they keep doing what they're doing if they knew exactly what they're doing and not doing, i.e. if they were aware of the alternatives? If they were aware of concepts like agentyness? And if yes, what would this show?
I wrote down some more thoughts on this in this comment. As a general reply to your main point: Just because people act as though they are interested in x rather than y doesn't mean that they wouldn't rather choose y if they were more informed. And to me, choosing something because one is not optimally informed seems like a bias, which is why I thought the comparison/the term "moral anti-epistemology" has merits. However, under a more Panglossian interpretation of ethics, you could just say that people want to do what they do, and that this is perfectly fine. It depends on how much you value ethical reflection (there is quite a rabbit hole to go down here, actually, having to do with the question of whether terminal values are internal or chosen).
Replies from: OrphanWilde
↑ comment by OrphanWilde · 2015-05-02T17:57:10.702Z · LW(p) · GW(p)
And if making people more informed in this manner makes them worse off?
Replies from: Lukas_Gloor
↑ comment by Lukas_Gloor · 2015-05-02T23:29:35.014Z · LW(p) · GW(p)
The sad thing is it probably will (the rationalist's burden: aspiring to be more rational makes rationalizing harder, and you can't just tweak your moral map and your map of the just world/universe to fit your desired (self-)image).
What is it that counts, revealed preferences or stated preferences or preferences that are somehow idealized (if the person knew more, was smarter etc.)? I'm not sure the last option can be pinned down in a non-arbitrary way. This would leave us with revealed preferences and stated preferences, even though stated preferences are often contradictory or incomplete. It would be confused to think that one type of preference is correct, whereas the others aren't. There are simply different things going on, and you may choose to focus on one or the other. Personally I don't intrinsically care about making people more agenty, but I care about it instrumentally, because it turns out that making people more agenty often increases their (revealed) concern for reducing suffering.
What does this make of the claim under discussion, that deontology could sometimes/often be a form of moral rationalizing? The point still stands, but it is qualified with a caveat, namely that it is only rationalizing if we are talking about (informed/complete) stated preferences. For whatever that's worth. On LW, I assume it is worth a lot to most people, but there's no mistake being made if it isn't for someone.
comment by TheAncientGeek · 2015-04-24T11:26:13.403Z · LW(p) · GW(p)
morality is from humans for humans
What's wrong with that? Not enough concern for non human animals?
As long as we always make sure to clarify what it is that we're trying to accomplish, it would seem possible to differentiate between valid and invalid arguments in regard to the specified goal.
Does that mean what counts as good epistemology in the context of ethics is specific to the contexts of ethics?
Deontology comes to mind (mostly because it's my usual suspect when it comes to odd reasoning in ethics)
All variations on deontology?
Replies from: Lukas_Gloor
↑ comment by Lukas_Gloor · 2015-04-24T12:41:57.102Z · LW(p) · GW(p)
What's wrong with that? Not enough concern for non human animals?
The way most people use it, the slogan would also put all transhumanist ideas outside the space of things to consider. I feel that it is "wrong" in that it prematurely limits your search space, but I guess if someone really did just care about how humans in their current set-up interact with each other, ok...
Does that mean what counts as good epistemology in the context of ethics is specific to the contexts of ethics?
Yes, and I find this non-trivial because it means that "ethics" is too broad for there to be one all-encompassing methodology. For instance, some people are just interested in finding an "impartial" view that they would choose behind the veil of ignorance, whereas others also want to account for person-specific intuitions and preferences. Neither of these two parties is wrong, they just have different axioms. The situation seems different when you look at science, where people seem to agree on the criteria for a good scientific explanation (well, at least in most cases).
All variations on deontology?
No, Golden-rule deontology is very similar to timeless cooperation for instance, and that doesn't strike me as a misguided thing to be thinking about.
Replies from: dxu, TheAncientGeek
↑ comment by dxu · 2015-04-24T16:13:19.609Z · LW(p) · GW(p)
No, Golden-rule deontology is very similar to timeless cooperation for instance, and that doesn't strike me as a misguided thing to be thinking about.
Well, there are two things I have to say in response to that:
- Timeless decision-making is a decision algorithm; you can use it to maximize any utility function you want. In other words, it's instrumental, not terminal. So it's hard to see how timeless cooperation could be morally significant, since morality usually deals with terminal values, not instrumental goals.
- Timeless decision-making is still based on your estimated degree of similarity to other agents on the playing field. I'll only cooperate in the one-shot Prisoner's Dilemma if I suspect my decision and my opponent's are logically connected. So even if you advocate timeless decision-making, "cooperate in PD-like situations" is still not going to be a universal rule like the Golden Rule.
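A minimal sketch of this conditional-cooperation point, with the payoff numbers and correlation estimates assumed purely for illustration (not taken from any particular decision-theory formalism):

```python
# Illustrative sketch (assumed payoffs): one-shot Prisoner's Dilemma where an
# agent cooperates only if it judges its decision to be logically correlated
# with its opponent's decision.

# Standard PD payoffs from "my" perspective: (my_move, opponent_move) -> my payoff
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, opponent defects
    ("D", "C"): 5,  # I defect, opponent cooperates
    ("D", "D"): 1,  # mutual defection
}

def timeless_choice(p_correlated: float) -> str:
    """Choose "C" or "D" given the estimated probability that the opponent's
    decision mirrors mine (e.g. because we run similar decision procedures)."""
    # If correlated, the opponent copies whatever I choose; if not, model the
    # opponent as an independent defector (a simplifying assumption).
    ev_cooperate = p_correlated * PAYOFF[("C", "C")] + (1 - p_correlated) * PAYOFF[("C", "D")]
    ev_defect = p_correlated * PAYOFF[("D", "D")] + (1 - p_correlated) * PAYOFF[("D", "C")]
    return "C" if ev_cooperate > ev_defect else "D"

print(timeless_choice(0.9))  # "C": likely a near-copy of me, so cooperation pays
print(timeless_choice(0.1))  # "D": probably uncorrelated, so defection pays
```

With these assumptions, the agent cooperates against a near-copy of itself and defects against an uncorrelated stranger, so the rule does not reduce to an unconditional Golden-Rule-style imperative.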
↑ comment by afeller08 · 2015-04-24T21:38:49.475Z · LW(p) · GW(p)
I changed my mind midway through this post. Hopefully it still makes sense... I started disagreeing with you based on the first two thoughts that come to mind, but I'm now beginning to think you may be right.
So it's hard to see how timeless cooperation could be morally significant, since morality usually deals with terminal values, not instrumental goals.
I.
This statement doesn't really fit with the philosophy of morality. (At least as I read it.)
Consequentialism distinguishes itself from other moral theories by emphasizing terminal values more than other approaches to morality do. A consequentialist can have "No murder" as a terminal value, but that's different from a deontologist believing that murder is wrong or a Virtue Ethicist believing that virtuous people don't commit murder. A true consequentialist seeking to minimize the amount of murder that happens would be willing to commit murder to prevent more murder, but neither a deontologist nor a virtue ethicist would.
Contractualism is a framework for thinking about morality that presupposes that people have terminal values and that their values sometimes conflict with each other's terminal values. It's a description of morality as a negotiated system of adopting/avoiding certain instrumental goals, so that the people who implicitly negotiate the contract mutually benefit in attaining their terminal values. It says nothing about what kind of terminal values people should have.
II.
Discussions of morality focus on what people "should" do and what people "should" think, etc. The general idea of terminal values is that you have them and they don't change in response to other considerations. They're the fixed points that affect the way you think about what you want to accomplish with your instrumental goals. There's no point to discussing what kind of terminal values people "should" have. But in practice, people agree that there is a point to discussing what sorts of moral beliefs people should have.
III.
The psychological conditions that cause people to become immoral by most other people's standards have a lot to do with terminal values, but not anything to do with the kinds of terminal values that people talk about when they discuss morality.
Sociopaths are people who don't experience empathy or remorse. Psychopaths are people who don't experience empathy, remorse, or fear. Being able to feel fear is not the sort of thing that seems relevant to a discussion about morality... But that's not the same thing as saying that being able to feel fear is not relevant to a discussion about morality. Maybe it is.
Maybe what we mean by morality is having the terminal values that arise from experiencing empathy, remorse, and fear the way most people experience these things in relation to the people they care about. That sounds like a really odd thing to say to me... but it also sounds pretty empirically accurate for nailing down what people typically mean when they talk about morality.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2015-05-02T21:04:48.171Z · LW(p) · GW(p)
Contractualism is a framework for thinking about morality that presupposes that people have terminal values and that their values sometimes conflict with each other's terminal values
Instrumental values can clash too. The instrumental-terminal axis is pretty well orthogonal to the morally relevant/irrelevant axis.
↑ comment by TheAncientGeek · 2015-04-25T10:21:41.815Z · LW(p) · GW(p)
The way most people use it,
Are you sure? That meaning wasn't obvious to me.
For instance, some people are just interested in finding an "impartial" view that they would choose behind the veil of ignorance, whereas others also want to account for person-specific intuitions and preferences. Neither of these two parties is wrong, they just have different axioms.
There could be further considerations that can be brought to bear. Just because something is claimed as axiomatic, doesn't mean the buck has actually stopped. Having multiple epistemologies with equally good answers is something of a disaster.
No, Golden-rule deontology is very similar to timeless cooperation for instance, and that doesn't strike me as a misguided thing to be thinking about.
I still don't know what you think is bad about bad deontology.
In general, you need to make many fewer assumptions that what is obvious to you is obvious to everybody.
Replies from: Lukas_Gloor
↑ comment by Lukas_Gloor · 2015-04-25T14:41:19.752Z · LW(p) · GW(p)
Are you sure? That meaning wasn't obvious to me?
I often got this as an objection to utilitarianism, the other premise being that utilitarianism is impractical for humans. I've talked to lots of people about ethics since I took high school philosophy classes, I study philosophy at university, and I have engaged in more than a hundred online discussions about ethics. The objection actually isn't that bad if you steelman it; maybe people are trying to say that they, as humans, care about many other things and would be overwhelmed with utilitarian obligations. (But there remains the question whether they care terminally about these other things, or whether they would self-modify to a perfect utilitarian robot if given the chance.)
There could be further considerations that can be brought to bear. Just because something is claimed as axiomatic, doesn't mean the buck has actually stopped.
There could be in some cases, if people find out they didn't really believe their axiom after all. But it can just as well be that the starting assumptions really are axiomatic. I think that the idea that terminal values are hardwired in the human brain, and will converge if you just give an FAI good instructions to get them out, is mistaken. There are billions of different ways of doing the extrapolation, and they won't all output the same thing. At the end of the day, the buck does have to stop somewhere, and where else could that be than where a person, after long reflection and an understanding of what she is doing, concludes that x are her starting assumptions and that's it.
I don't quite agree with the prominent LW-opinion that human values are complex. What is complex are human moral intuitions. But no one is saying that you need to take every intuition into account equally. Humans are a very peculiar sort of agent in mind space: when you ask most people what their goal is in life, they do not know, or they give you an answer that they will take back as soon as you point out some counterintuitive implications of what they just said. I imagine that many AI-designs would be such that the AIs are always clearly aware of their goals, and thus feel no need to ever engage in genuine moral philosophy. Of course, people do have a utility-function in the form of revealed preferences, what they would do if you placed them in all sorts of situations, but is that the thing we are interested in when we talk of terminal values? I don't think so! It should at least be on the table that some fraction of my brain's pandemonium of voices/intuitions is stronger than the other fractions, and that this fraction makes up what I consider the rational part of my brain and the core part of my moral self-identity, and that I would, upon reflection, self-modify to an efficient robot with simple values. Personally I would do this, and I don't think I'm missing anything that would imply that I'm making any sort of mistake. Therefore, the view that all human values are necessarily complex seems mistaken to me.
Having multiple epistemologies with equally good answers is something of a disaster.
These different epistemologies have a lot in common. The exercise would always be "define your starting assumptions, then see which moves are goal-tracking, and which ones aren't". Ethical thought experiments, for instance, or distinguishing instrumental values from terminal ones, are things that you need to do either way if you think about what your goals are, e.g. how you would want to act in all possible decision-situations.
I still don't know what you think is bad about bad deontology.
It is often vague and lets people get away with not thinking things through. It feels like they have an answer, but most people would have no clue how to set the parameters for an AI that implemented their type of deontology (e.g. when dilemma situations become probabilistic, which is, of course, all the time).
It contains discussion stoppers like "rights", even though, when you taboo the term, that just means "harming is worse than not-helping", which is a weird way to draw a distinction, because when you're in pain, you primarily care about getting out of it and don't first ask what the reason for it was. Related: It gives the air of being "about the victim", but it's really more about the agent's own moral intuitions, and is thus, not really other-regarding/impartial at all. This would be ok if deontologists were aware of it, but they often aren't. They object to utilitarianism on the grounds of it being "inhumane", instead of "too altruistic".
In general, you need to make many fewer assumptions that what is obvious to you is obvious to everybody.
Yes, I see that now. I thought I was mainly preaching to the choir and didn't think the details of people's metaethical views would matter for the main thoughts in my original post. It felt to me like I was saying something at risk of being too trivial, but maybe I should have picked better examples. I agree that this comment does a good job at what I was trying to get at.
Replies from: VoiceOfRa, TheAncientGeek
↑ comment by VoiceOfRa · 2015-04-26T22:01:51.605Z · LW(p) · GW(p)
It is often vague and lets people get away with not thinking things through. It feels like they have an answer, but most people would have no clue how to set the parameters for an AI that implemented their type of deontology (e.g. when dilemma situations become probabilistic, which is, of course, all the time).
The same is true of most discussions of consequentialism and utility functions.
Replies from: Lukas_Gloor
↑ comment by Lukas_Gloor · 2015-04-26T22:54:25.587Z · LW(p) · GW(p)
No, most consequentialists have a very good idea of how they would deal with probabilistic decision-situations; that's what consequentialism is good at. This is worked out to a much lesser extent in deontology.
I'm not saying that most forms of consequentialism aren't vague at all, if you interpreted me charitably, you would assume that I'm talking about a difference in degree.
An example of "letting people get away with not thinking things through": Consider the entire domain of population ethics. Why is this predominantly being discussed by consequentialists, where it is recognized as a huge problem area? It's not like analogous difficulties wouldn't turn up in deontology if you went deep enough into the rabbit hole, but how many deontologists have gone there?
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2015-04-28T09:16:59.824Z · LW(p) · GW(p)
No, most consequentialists have a very good idea of how they would deal with probabilistic decision-situations,
Whereas what it is bad at is combining utility functions.
Replies from: Lukas_Gloor
↑ comment by Lukas_Gloor · 2015-04-28T10:23:29.925Z · LW(p) · GW(p)
Do you mean utility functions of different parts of your brain? I agree. But no one says it's necessary to consider every single voice in your mind. If your internal democracy falls into a consequentialist dictatorship because somehow your most fundamental intuition is about altruism, that seems totally fine. Likewise, if you have a lot of strong deontological intuitions and don't want to just overwrite them with a simpler, consequentialist view, that's totally fine as well, as long as you understand what you're doing. I'm only objecting to deontology because most of the time, it seems like people think they are doing more than just following their intuitions: they think they are somehow doing the only right or altruistic thing, when this is non-obvious at best. The "as long as you understand what you're doing" of course also applies to consequentialists: it would be problematic if the main reason someone is a consequentialist is that she thinks utility functions ought to be simple/elegant. (Consequentialism doesn't necessarily have to be simple, complexity of value could well be consequentialist as well. I'm mainly talking about utilitarianism and closely related views here.)
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2015-04-29T18:08:35.682Z · LW(p) · GW(p)
Do you mean utility functions of different parts of your brain?
No, I mean combining utilities across individuals, species, etc.
Likewise, if you have a lot of strong deontological intuitions and don't want to just overwrite them with a more simple, consequentialist view, that's totally fine as well, as long as you understand what you're doing.
You have missed my point entirely. I meant that it is actually difficult to make consequentialism work, and consequentialists solve the problem by taking it glibly ... your critique of deontology, IOW.
I'm only objecting to deontology because most of the time, it seems like people think they are doing more than just following their intuitions
Rightly. Most of the time they are following socially defined rules.
Replies from: Lukas_Gloor
↑ comment by Lukas_Gloor · 2015-04-29T19:54:27.983Z · LW(p) · GW(p)
Ah, aggregation. This seems to be mainly a problem for what I would call preference utilitarianism, where you sum up utility functions over individuals. Outside of LW, the standard usage of "utilitarianism" refers to experiential utilitarianism, where the only matter of concern is hedonic tone. Hence my confusion about what you meant. There are still some tricky questions with that, e.g. how many seconds of intense depression in a 24-year-old human are worse than a chimpanzee being burned alive for one second, but at worst these questions require the stipulation of a finite number of tradeoff values. So your objection fails for the (arguably) most popular forms of utilitarianism.
In addition, I would say it also fails for preference utilitarianism, because I would imagine that these problems arise mainly because utilitarians are trying hard to find decision-criteria that cover all conceivable situations. If someone took deontology this seriously, I suspect that they too would run into aggregation problems of some sort somewhere, except if they block aggregation entirely (Taurek) and rely on the view that "numbers never count".
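A minimal sketch of what stipulating such tradeoff values could look like, with the intensity weights and durations invented purely for illustration (they are exactly the kind of guesswork discussed below):

```python
# Illustrative sketch of hedonic aggregation with stipulated tradeoff values.
# The intensity weights and durations are made-up assumptions; the point is only
# that once they are stipulated, the aggregation itself is mechanical.

# Stipulated suffering intensity per second of each kind of experience.
INTENSITY = {
    "human_intense_depression": 1.0,   # baseline unit
    "chimp_burned_alive": 40.0,        # assumed: 40x worse per second
}

def total_suffering(episodes):
    """Sum weighted suffering over (experience_kind, duration_in_seconds) pairs."""
    return sum(INTENSITY[kind] * seconds for kind, seconds in episodes)

print(total_suffering([("chimp_burned_alive", 1)]))        # 40.0
print(total_suffering([("human_intense_depression", 30)])) # 30.0 -> less bad
print(total_suffering([("human_intense_depression", 60)])) # 60.0 -> worse
```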
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2015-05-01T08:03:34.375Z · LW(p) · GW(p)
only matter of concern is hedonic tone. Hence my confusion about what you meant.
I don't think that fixes the problem, so I didn't think that the distinction was worth making. We can't objectively measure subjective feelings, so aggregating them across species is guesswork.
but at worst these questions require the stipulation of a finite number of tradeoff values.
That sounds like guesswork to me.
In addition, I would say it also fails for preference utilitarianism, because I would imagine that these problems arise mainly because utilitarians are trying hard to find decision-criteria that cover all conceivable situations
Inter-species aggregation comes in when you consider vegetarianism, vivisection, etc., which are uncontrived real-world issues.
I don't think deontology necessarily does a lot better -- I am actually a hybrid theorist -- but I don't think you are giving deontology a fair trial, in that you are not considering its most sophisticated arguments, or allowing it to guess its way out of problems.
Replies from: Lukas_Gloor
↑ comment by Lukas_Gloor · 2015-05-01T09:19:17.161Z · LW(p) · GW(p)
That sounds like guesswork to me.
If you care about suffering, you don't stop caring just because you learn that there are no objectively right numerical tradeoff-values attached to the neural correlates of consciousness. Things being "arbitrary" or "guesswork" just means that the answer you're looking for depends on your own intuitions and cognitive machinery. This is only problematic if you want to do something else, e.g. find a universally valid solution that all other minds would also agree with. I suspect that this isn't possible.
I don't think deontology necessarily does a lot better -- I am actually a hybrid theorist -- but I don't think you are giving deontology a fair trial, in that you are not considering its most sophisticated arguments, or allowing it to guess its way out of problems.
I don't see how hybrid theorists would solve the problem of things being "guesswork" either. In fact, there are multiple layers of guesswork involved there: you first need to determine in which cases which theories apply and to what extent, and then you need to solve all the issues within a theory.
I still don't see any convincing objections to all the arguments I gave when I explained why I consider it likely that deontology is the result of moral rationalizing. The objection you gave about aggregation doesn't hold, because it applies to most or all moral views.
To give more support to my position: Joshua Greene has done a lot of interesting work suggesting that deontological judgments rely on system-1 thinking, whereas consequentialist judgments rely on system-2 thinking. In non-ethical contexts, these results would strongly suggest the presence of biases, especially if we consider situations where evolved heuristics are not goal-tracking.
Replies from: TheAncientGeek, TheAncientGeek
↑ comment by TheAncientGeek · 2015-05-02T08:52:00.266Z · LW(p) · GW(p)
If you care about suffering, you don't stop caring just because you learn that there are no objectively right numerical tradeoff-values attached to the neural correlates of consciousness.
I wasn't suggesting giving up on ethics, I was suggesting giving up on utilitarianism.
This is only problematic if you want to do something else, e.g. find a universally valid solution that all other minds would also agree with. I suspect that this isn't possible.
I think there are other approaches that do better than utilitarianism at its weak areas.
I don't see how hybrid theorists would solve the problem of things being "guesswork" either. In fact, there are multiple layers of guesswork involved there: you first need to determine in which cases which theories apply and to what extent, and then you need to solve all the issues within a theory.
Metaethically, hybrid theorists do need to figure out which theories apply where, and that isn't guesswork.
At the object level, it is quite possible, at a first approximation, to cash out your obligations as whatever society obliges you to do -- deontologists have a simpler problem to solve.
I still don't see any convincing objections to all the arguments I gave when I explained why I consider it likely that deontology is the result of moral rationalizing. The objection you gave about aggregation doesn't hold, because it applies to most or all moral views.
My principal argument is that it ain't necessarily so. You put forward, without any specific evidence, a version of events where deontology arises out of attempts to rationalise random intuitions. I put forward, without any specific evidence, a version of events where widespread deontology arises out of rules being defined socially, and people internalising them. My handwaving theory doesn't defeat yours, since they both have the same, minimal, support, but it does show that your theory doesn't have any unique status as the default or only theory of de facto deontology.
Replies from: Lukas_Gloor
↑ comment by Lukas_Gloor · 2015-05-02T10:48:02.032Z · LW(p) · GW(p)
I wasn't suggesting giving up on ethics, I was suggesting giving up on utilitarianism.
What I wrote concerned giving up on caring about suffering, which is very closely related with utilitarianism.
I think there are other approaches that do better than utilitarianism at its weak areas.
Maybe according to your core intuitions, but not for me as far as I know.
but it does show that your theory doesn't have any unique status as the default or only theory of de facto deontology.
But my main point was that deontology is too vague for a theory that specifies how you would want to act in every possible situation, and that it runs into big problems (and lots of "guesswork") if you try to make it less vague. Someone pointed out that I'm misunderstanding what people's ethical systems are intended to do. Maybe, but I think that's exactly my point: People don't even think about what they would want to do in every possible situation, because they're more interested in protecting certain status quos than in figuring out what it is that they actually want to accomplish. Is "protecting certain status quos" their true terminal value? Maybe, but how would they know, if this question doesn't even occur to them? This is exactly what I meant by moral anti-epistemology: you believe things and follow rules because the alternative is daunting/complicated and possibly morally demanding.
The best objection to my view is indeed that I'm putting arbitrary and unreasonable standards on what people "should" be thinking about. In the end, it is also arbitrary what you decide to call a terminal value, and which definition of terminal values you find relevant. For instance, whether it needs to be something that people reach on reflection, or whether it is simply what people tell you they care about. Are people who never engage in deep moral reasoning making a mistake? Or are they simply expressing their terminal value of wanting to avoid complicated and potentially daunting things because they're satisficers? That's entirely up to your interpretation. I think that a lot of these people, if you were to nudge them towards thinking more about the situation, would at least in some respect be grateful for that, and this, to me, is reason to consider deontology as something irrational with respect to a conception of terminal values that takes into account a certain degree of reflection about goals.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2015-05-02T13:28:40.757Z · LW(p) · GW(p)
What I wrote concerned giving up on caring about suffering, which is very closely related with utilitarianism
It's not obvious that utilitarians have cornered the market in caring. For instance, when Bob Geldof launched Band Aid, he used the phrase "categorical imperative", which comes from Kantian deontology.
I think there are other approaches that do better than utilitarianism at its weak areas.
Maybe according to your core intuitions, but not for me as far as I know.
It's not intuition in my case: I know that certain questions have answers, because I have answered them in the course of the hybrid theory I am working on.
ETA
But my main point was that deontology is too vague for a theory that specifies how you would want to act in every possible situation,
It's still not clear what you are saying, or why it is true. As a metaethical theory it doesn't completely specify an object-level ethics, but that's normal ... the metaethical claim of virtue ethics, that the good is the virtuous, doesn't specify any concrete virtues. Utilitarianism is exceptional in that the metaethics specifies the object-level ethics.
Or you might mean that deontological ethics is too vague in practice. But then, as before, add more rules. There's no meta-rule that limits you to ten rather than ten thousand rules.
Or you might mean that deontological ethics can't match consequentialist ethics. But it seems intuitive to me that a sufficiently complex set of rules should be able to match any consequentialism.
ETA2
and that it runs into big problems (and lots of "guesswork") if you try to make it less vague.
So is the problem obligation or supererogation? Is it even desirable to have an ethical system that places fine grained obligations on you in every situation? Don't you need some personal freedom?
People don't even think about what they would want to do in every possible situation because they're more interested in protecting certain status quos rather than figuring out what it is that they actually want to accomplish. Is "protecting certain status quos" their true terminal value?
Maybe. But if popular deontology leverages status seeking to motivate minimal ethical behaviour, why not consider that a feature rather than a bug? You have to motivate ethics somehow.
Or maybe your complaint is that popular deontology is too minimal, and doesn't motivate personal growth. My reaction would then be that, while personal growth is a thing, it isn't a matter of central concern to ethics, and an ethical system isn't required to motivate it, and isn't broken if it doesn't.
Or maybe your objection is that deontology isn't doing enough to encourage societal goals. I do think that sort of thing is a proper goal of ethics, and that is a consideration that went into my hybrid approach: not killing is obligatory; making the world a better place is nice-to-have, supererogatory. The obligation comes from the deontological component, which is minimal, so utilitarian demandingness is avoided.
↑ comment by TheAncientGeek · 2015-05-02T13:03:30.882Z · LW(p) · GW(p)
To give more support to my position: Joshua Greene has done a lot of interesting work suggesting that deontological judgments rely on system-1 thinking, whereas consequentialist judgments rely on system-2 thinking. In non-ethical contexts, these results would strongly suggest the presence of biases, especially if we consider situations where evolved heuristics are not goal-tracking.
Biases are only unconditionally bad in the case of epistemic rationality, and ethics is about action in the world, not just tracking truth. To expand:
Rationality is (at least) two different things called by one name. Moreover, while there is only one epistemic rationality, the pursuit of objective truth, there are many instrumental rationalities aiming at different goals.
Biases are regarded as obstructions to rationality ... but which rationality? Any bias is a stumbling block to epistemic rationality ... but in what way would, for instance, egoistic bias be an impediment to the pursuit of selfish aims? The goal, in that case, is the bias, and the bias the goal. But egotism is still a stumbling block to epistemic rationality, and to the pursuit of incompatible values, such as altruism.
That tells us two things: one is that what counts as a bias is relative, or context-dependent. The other -- in conjunction with the reasonable supposition that humans don't follow a single set of values all the time -- is where bias comes from.
If humans are a messy hack with multiple value systems, and with a messy, leaky way of switching between them, then we would expect to see something like egotistical bias as a kind of hangover when switching to altruistic mode, and so on.
Replies from: Lukas_Gloor
↑ comment by Lukas_Gloor · 2015-05-02T13:39:33.964Z · LW(p) · GW(p)
I think if you read all my comments here again, you will see enough qualifications in my points that suggest that I'm aware of and agree with the point you just made. My point on top of that is simply that often, people would consider these things to be biases under reflection, after they learn more.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2015-05-02T14:57:41.196Z · LW(p) · GW(p)
My argument was that on reflection, not all biases are bad.
↑ comment by TheAncientGeek · 2015-04-26T14:08:07.491Z · LW(p) · GW(p)
I often got this as an objection to utilitarianism
I wasn't objecting to utilitarianism.
There could be further considerations that can be brought to bear. Just because something is claimed as axiomatic, doesn't mean the buck has actually stopped.
There could be in some cases, if people find out they didn't really believe their axiom after all.
Belief isn't the important criterion. The important criterion is whether person B can argue for or against what person A takes as automatic. How do you show objectively that a claim can't be argued for, and has to be assumed?
I don't quite agree with the prominent LW-opinion that human values are complex.
Values are complex. Whether moral values are complex is another story.
I still don't know what you think is bad about bad deontology.
It is often vague
That doesn't seem to be an intrinsic problem. You can make a set of rules as precise as you like. It's also not clear that the well-known alternatives fare better. Utilitarianism, in particular, works only in fairly constrained domains, where you're not comparing apples and oranges.
It contains discussion stoppers like "rights",
Arguably, that's a feature, not a bug. If people realised how insubstantial ethics is, they would have trouble sticking to it.
Replies from: Lukas_Gloor
↑ comment by Lukas_Gloor · 2015-04-26T14:53:47.532Z · LW(p) · GW(p)
I wasn't objecting to utilitarianism.
I know, my point referred to people using "ethics is from humans for humans" in a way that would also rule out transhumanism.
Belief isn't the important criterion. The important criterion is whether person B can argue for or against what person A takes as automatic. How do you show objectively that a claim can't be argued for, and has to be assumed?
The burden of proof is elsewhere: how do you overcome the is-ought distinction when you try to justify/argue for a claim? Edit: To rephrase this (I don't know how this could get me downvotes, but I'm trying to make this more clear): if the arguments for the is-ought distinction, which seem totally sound, are correct, it is unclear how you could argue that person A's moral assumptions are incorrect, at least in cases where these assumptions are non-contradicting and not based on confused metaphysics.