Is Morality a Valid Preference?
post by MinibearRex · 2011-02-21T01:18:16.941Z · LW · GW · Legacy · 75 comments
In general, the ethical theory that prevails here on Less Wrong is preference utilitarianism. The fundamental idea is that the correct moral action is the one that satisfies the strongest preferences of the most people. Preferences are discussed with units such as fun, pain, death, torture, etc. One of the biggest dilemmas posed on this site is the Torture vs. Dust Specks problem. I should say, up front, that I would go with dust specks, for some of the reasons I mentioned here. I mention this because it may be biasing my judgments about my question here.
I had a thought recently about another aspect of Torture vs. Dust Specks, and wanted to submit it to some Less Wrong discussion. Namely, do other people's moral intuitions constitute a preference that we should factor into a utilitarian calculation? I would predict, based on human nature, that if the 3^^^3 people were asked whether they wanted to accept a dust speck in each of their eyes in exchange for not torturing another individual for 50 years, they would probably vote for dust specks.
Should we assign weight to other people's moral intuitions, and how much weight should it have?
Comments sorted by top scores.
comment by Wei Dai (Wei_Dai) · 2011-02-21T09:53:12.953Z · LW(p) · GW(p)
In general, the ethical theory that prevails here on Less Wrong is preference utilitarianism.
What is your evidence for this? In The Preference Utilitarian’s Time Inconsistency Problem, the top voted comments didn't try to solve the problem posed for preference utilitarians, but instead made general arguments against preference utilitarianism.
comment by jimrandomh · 2011-02-21T05:53:25.581Z · LW(p) · GW(p)
The real answer to torture vs. dust specks is to recognize that the answer to the scenario is torture, but the scenario itself has a prior probability so astronomically low that no evidence could ever convince you that you were in it, since at most k/3^^^3 people can affect the fate of 3^^^3 people at once (where k is the number of times a person's fate is affected). However, there are higher-probability scenarios that look like torture vs. 3^^^3 dust specks, but are actually torture vs. nothing or torture vs. not-enough-specks-to-care. In philosophical problems we ignore that issue for simplicity and assume the problem statement is true with probability exactly 1, but you can't do that in real life, and in this case intuition sides with reality.
Therefore, answer dust specks, but build theories as though the answer were torture.
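The arithmetic behind jimrandomh's bound can be sketched numerically. The numbers below are illustrative assumptions (the value of k and the stand-ins for 3^^^3 are invented); the point is only that the expected number of people you affect is capped at k, independent of how large the population gets.

```python
from fractions import Fraction

# Illustrative sketch of the k/3^^^3 bound: if at most k/N of N people can
# be deciders, then your prior expected "reach" is (k/N) * N = k people,
# no matter how large N is. k and the values of N are assumptions; no
# computer can represent 3^^^3 itself.
k = 100
for N in (10**6, 10**18, 10**100):
    p_decider = Fraction(k, N)        # prior that you are one of the deciders
    expected_reach = p_decider * N    # people whose fate you expect to affect
    assert expected_reach == k        # the astronomical N cancels out
```

The hugeness of 3^^^3 appears on both sides of the calculation and cancels, which is why no amount of evidence can make the scenario probable enough to matter.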
↑ comment by DanielLC · 2011-02-21T06:25:31.284Z · LW(p) · GW(p)
The same points as in Pascal's Mugging apply. 3^^^3 has a relatively low K-complexity, which means that if someone were to just tell you it happens, the expected number would still be astronomical.
There are higher-probability things that actually apply. They're just more like torture vs. significantly less torture. The bias is still enough to keep the paradox strong.
comment by mkehrt · 2011-02-21T04:27:14.845Z · LW(p) · GW(p)
I've been thinking about this on and off for half a year or so, and I have come to the conclusion that I cannot agree with any proposed moral system that answers "torture" to dust specks and torture. If this means my morality is scope-insensitive, then so be it.
(I don't think it is; I just don't think utilitarianism with an aggregation function of summation over all individuals is correct; I think the correct aggregation function should probably be different. I am not sure what the correct aggregation function is, but maximizing the minimum individual utility is a lot closer to my intuitions than summation (where by "correct" I mean compatible with my moral intuitions). I'm planning on writing a post about this soon.)
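mkehrt's distinction between aggregation functions can be made concrete. In this sketch every utility number is an invented assumption, and N is a (vastly too small) stand-in for 3^^^3; the point is only that summation and maximin give opposite verdicts on the same pair of outcomes.

```python
# Two candidate aggregation functions for a utilitarian calculation.
# All numbers are illustrative assumptions, not claims from the thread.
SPECK = -1          # assumed disutility of one dust speck
TORTURE = -10**9    # assumed disutility of 50 years of torture
N = 10**10          # stand-in for 3^^^3 (which is far too large to represent)

# Sum and min for each outcome, computed arithmetically rather than
# by materializing a list of N individual utilities.
specks_total, specks_worst = SPECK * N, SPECK
torture_total, torture_worst = TORTURE, TORTURE

# Summation prefers torture once N * |SPECK| exceeds |TORTURE|:
sum_prefers_torture = torture_total > specks_total
# Maximin judges an outcome by its worst-off individual, so it prefers specks:
maximin_prefers_specks = specks_worst > torture_worst

print(sum_prefers_torture, maximin_prefers_specks)  # True True
```

Which aggregation function is "correct" is exactly what the comment leaves open; the sketch only shows that the disagreement between them is structural, not a matter of picking bigger numbers.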
comment by rstarkov · 2011-02-21T20:48:27.842Z · LW(p) · GW(p)
I think Torture vs Dust Specks makes a hidden assumption that the two things are comparable. It appears that people don't actually think like that; to them, not even an infinite number of dust specks is as bad as a single person being tortured or dying. People arbitrarily place some bad things into a category that's infinitely worse than another category.
So, I'd say that you aren't expressing a preference for morality; you are simply ranking 50 years of torture as infinitely worse than a dust speck: no number of people getting dust specks can possibly be worse than 50 years of torture.
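One way to formalize "infinitely worse" categories is lexicographic comparison, which is roughly what rstarkov's ranking amounts to. In this sketch the representation is my assumption, not something from the comment: harm is a pair (torture-units, speck-units), and Python compares tuples lexicographically, so any nonzero torture count dominates every possible speck count.

```python
def worse(a, b):
    """True if harm profile a is strictly worse than harm profile b.
    Profiles are (torture_units, speck_units) tuples; Python compares
    tuples lexicographically, so the torture component is checked first."""
    return a > b

torture = (1, 0)        # one person tortured for 50 years, no specks
specks = (0, 10**100)   # stand-in for "any number of specks, however large"

assert worse(torture, specks)   # no speck count ever outweighs the torture
assert worse((0, 2), (0, 1))    # within a category, more specks are worse
```

This also makes the money-pump worry in the replies below vivid: under a lexicographic ordering no finite number of specks ever trades off against even a tiny probability of torture, so speck-level concerns can never influence any decision that involves torture at all.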
↑ comment by MBlume · 2011-02-22T18:36:47.780Z · LW(p) · GW(p)
I think Torture vs Dust Specks makes a hidden assumption that the two things are comparable. It appears that people don't actually think like that; to them, not even an infinite number of dust specks is as bad as a single person being tortured or dying. People arbitrarily place some bad things into a category that's infinitely worse than another category.
The thing is, if you think that A and B aren't comparable, with A>B, and if you don't make some simplifying assumption like "any event with P < 0.01 is unworthy of consideration, no matter how great or awful" or something, then you don't get to ever care about B for a moment. There's always some tiny chance of A that has to completely dominate your decision-making.
↑ comment by TheOtherDave · 2011-02-22T23:34:42.810Z · LW(p) · GW(p)
After reading this several times, I have to conclude that I don't understand what "comparable" means in this comment. Otherwise, I have no idea how one could think both that A and B aren't comparable and that A > B.
↑ comment by MBlume · 2011-02-23T08:54:35.383Z · LW(p) · GW(p)
to them, not even an infinite number of dust specks is as bad as a single person being tortured or dying. People arbitrarily place some bad things into a category that's infinitely worse than another category.
I mean "comparable" as the negation of this line of thought.
↑ comment by TheOtherDave · 2011-02-23T14:09:30.354Z · LW(p) · GW(p)
Ah.
So, you meant something like: if I think A is worse than B, but not infinitely worse than B, and I don't have some kind of threshold (e.g., a threshold of probability) below which I no longer evaluate expected utility of events at all, then my beliefs about B are irrelevant to my decisions because my decisions are entirely driven by my beliefs about A?
I mean, that's trivially true, in the sense that a false premise justifies any conclusion, and any finite system will have some threshold for which events not to evaluate.
But in a less trivial sense... hm.
OK, thanks for clarifying.
↑ comment by rstarkov · 2011-03-09T18:31:39.502Z · LW(p) · GW(p)
This is a good point, and I've pondered on this for a while.
Following your logic: we can observe that I'm not spending all my waking time caring about A (people dying somewhere for some reason). Therefore we can conclude that the death of those people is comparable to mundane things I choose to do instead - i.e. the mundane things are not infinitely less important than someone's death.
But this only holds if my decision to do the mundane things in preference to saving someone's life is rational.
I'm still wondering whether I do the mundane things by rationally deciding that they are more important than my contribution to saving someone's life could be, or by simply being irrational.
I am leaning towards the latter - which means that someone's death could still be infinitely worse to me than something mundane, except that this fact is not accounted for in my decision making because I am not fully rational no matter how hard I try.
↑ comment by endoself · 2011-02-22T01:46:38.603Z · LW(p) · GW(p)
We defined a dust speck to have nonzero negative utility. If you don't think this describes reality, then you can substitute something else, like a stubbed toe.
As long as we can make a series of things, none of which is infinitely worse than the next, we can prove that nothing in the list is infinitely worse than any other. http://lesswrong.com/lw/n3/circular_altruism/u7x presents this well.
↑ comment by [deleted] · 2011-02-22T21:56:30.862Z · LW(p) · GW(p)
As long as we can make a series of things, none of which is infinitely worse than the next, we can prove that nothing in the list is infinitely worse than any other.
This is true, but not relevant to the question of whether 50 years of torture is infinitely worse than a speck of dust in the eye.
↑ comment by FAWS · 2011-02-23T02:06:58.149Z · LW(p) · GW(p)
So is a single millisecond of torture also infinitely worse than a dust speck, and likewise infinitely worse than everything else that isn't itself infinitely worse than a dust speck, or is some time span of torture infinitely worse than a slightly shorter time span? If you postulate a discontinuity, that discontinuity has to be somewhere.
↑ comment by [deleted] · 2011-02-23T16:08:19.315Z · LW(p) · GW(p)
I guess this is what I get for replying to a torture post!
The point I was trying to make is mathematical: for sensible definitions of "finitely greater", the statement "if we have a sequence of objects, each of which is only finitely greater than its predecessor, then every object on the list is only finitely greater than any earlier object" is true, but not relevant to the question of whether or not there exist infinitely large objects.
My goal was to flag up mathematical reasoning that doesn't hold water, which apparently I failed to do.
For completeness, I should also mention that the linked post does not make the same error.
↑ comment by FAWS · 2011-02-24T01:32:38.557Z · LW(p) · GW(p)
The point I was trying to make is mathematical: for sensible definitions of "finitely greater", the statement "if we have a sequence of objects, each of which is only finitely greater than its predecessor, then every object on the list is only finitely greater than any earlier object" is true, but not relevant to the question of whether or not there exist infinitely large objects.
But extremely relevant to the question whether or not there exist infinitely large objects on the list.
↑ comment by [deleted] · 2011-02-24T18:30:13.859Z · LW(p) · GW(p)
Whether or not everybody has a list is precisely the question asked in the top post of this thread.
↑ comment by FAWS · 2011-02-24T19:20:45.296Z · LW(p) · GW(p)
Not at all. People who treat some things as infinitely worse than others don't do so because they believe that a list that includes both somehow stops being a list, and the thread starter never implied anything in that direction. They just have inconsistent preferences (at least in the sense of being money-pumpable). Either that or they bite the bullet and admit that there is at least one particular item infinitely worse than the preceding one for any such list. Denying that a list is a list is just nonsense.
↑ comment by [deleted] · 2011-02-24T21:23:14.724Z · LW(p) · GW(p)
We are in violent agreement (but I'm coming off worse!).
rstarkov suggested that people may have "utility functions" that don't take real values.
Endoself's comment "showed" that this cannot be, starting from the assumption that everybody has a preference system that can be encoded as a real-valued utility function. This is nonsense.
My non-disagreement with you seems to have stemmed from me not wanting to be the first person to say "order-type", and us making different assumptions about how various posters' positions projected onto our own internal models of "lists" (whatever they were).
↑ comment by FAWS · 2011-02-24T23:02:10.375Z · LW(p) · GW(p)
You shouldn't have used the words "not relevant"; that implied the statement had no important implications for the problem at all, rather than proving the (very relevant, since the topic is utilitarianism) hidden assumption wrong for that set of people (unless they bit the bullet).
↑ comment by Nornagest · 2011-02-21T20:59:59.315Z · LW(p) · GW(p)
It absolutely assumes that the two are comparable, and most of the smarter objections to it that I've seen invoke some kind of filtering function to zero out the impact of any particular dust speck on some level of comparison.
There are a number of objections to this that you could raise in practice: given a random distribution of starting values, for example, an additional dust speck would be sufficient to push a small percentage, but an unimaginably huge quantity, of victims' subjective suffering over any threshold of significance we feel like choosing. I'm not too impressed with any of these responses -- they generally seem to leverage special pleading on some level -- but I've got to admit that they don't have anything wrong with them that the filtering argument doesn't.
Welcome to Less Wrong, by the way.
↑ comment by rstarkov · 2011-02-21T21:37:30.254Z · LW(p) · GW(p)
Argh, I have accidentally reported your comment instead of replying. I did wonder why it asks me if I'm sure... Sorry.
It does indeed appear that the only rational approach is for them to be treated as comparable. I was merely trying to suggest a possible underlying basis for people consistently picking dust specks, regardless of the hugeness of the numbers involved.
comment by lukeprog · 2011-02-21T02:13:55.504Z · LW(p) · GW(p)
Really? Preference utilitarianism prevails on Less Wrong? I haven't been around too long, but I would have guessed that moral anti-realism (in several forms) prevailed.
↑ comment by komponisto · 2011-02-21T05:09:19.909Z · LW(p) · GW(p)
Isn't this a confusion of levels, with preference utilitarianism being an ethical theory, and moral anti-realism being a metaethical theory?
↑ comment by MinibearRex · 2011-02-21T02:36:44.669Z · LW(p) · GW(p)
Generally, I doubt that many people on Less Wrong believe that the universe has an "inherent" moral property, so I suppose your guess is accurate. However, there is a fairly strong emphasis on (trans)humanistic ideas. There doesn't have to be a term for "rightness" in the equations of quantum mechanics. That simply doesn't matter. Humans care. However, often humanity's moral instincts cause us to actually hurt the world more than we help it. That's why Eliezer tells us so frequently to shut up and multiply.
↑ comment by Dorikka · 2011-02-21T03:14:38.552Z · LW(p) · GW(p)
I would predict, based on human nature, that if the 3^^^3 people were asked whether they wanted to accept a dust speck in each of their eyes in exchange for not torturing another individual for 50 years, they would probably vote for dust specks.
I would probably go for specks here. Many people, I predict, are going to get an emotional high out of thinking that their sacrifice will prevent someone from being tortured (while scope insensitivity prevents them from realizing just how small 1/3^^^3 is). If they actually receive a dust speck in the eye in a short enough time interval afterwards, I think that they would have a significant chance of taking it to mean that specks was chosen instead of torture (and thus they would get more of a high after being specked, if they knew why.)
If you polled people in such a way that they wouldn't get the high, then your answer should be the same as before. Scope insensitivity still kicks in, and people will vote without understanding how big 3^^^3 is (I don't understand it myself -- the description of 'a 1 followed by a string of zeroes as long as the Bible' is just so mind-numbingly big I know I can't comprehend it.)
↑ comment by DanielLC · 2011-02-21T06:30:38.204Z · LW(p) · GW(p)
You can (sort of) be both. I remember there being at least one person on Felicifia.org who said he figures there is no morality, so he might as well maximize happiness.
I'm not a morality anti-realist, but I'm not all that sure of it, and I'm pretty sure I'd stay a Utilitarian anyway.
That said, I'm emphatically not a preference utilitarian. Preferences are a map. Reality is a territory. On a fundamental physical level, they are incomparable.
↑ comment by NihilCredo · 2011-02-21T19:18:04.103Z · LW(p) · GW(p)
What.
he figures there is no morality, so he might as well maximize happiness.
he figures there is no morality, so he might as well enter a monastery.
he figures there is no morality, so he might as well maximize unhappiness.
he figures there is no morality, so he might as well put on a clown suit and moonwalk every day.
I'm going to say that the first statement doesn't really seem to make much more sense than the others.
↑ comment by TheOtherDave · 2011-02-21T19:27:00.875Z · LW(p) · GW(p)
I understood DanielLC's acquaintance as meaning that he prefers to maximize happiness and he believes that his only reasons for action are his preferences and (if it existed) morality, so in the absence of morality he will act solely according to his preferences, which are to maximize happiness.
And, sure, if his preferences had been for entering a monastery, or maximizing unhappiness, or moonwalking in a clown suit, then in the absence of morality he would act solely according to those preferences.
I doubt very much that any of these accounts actually describe a real person, and I would be very nervous around anyone they did describe, but none of this is senseless.
comment by Stuart_Armstrong · 2011-02-22T14:35:40.829Z · LW(p) · GW(p)
Namely, do other people's moral intuitions constitute a preference that we should factor into a utilitarian calculation?
If we feel like it. I personally would say yes. What would you say?
↑ comment by wedrifid · 2011-02-23T12:09:08.218Z · LW(p) · GW(p)
If we feel like it. I personally would say yes. What would you say?
Yes, regardless of whether it is true. Morality is for lying about.
↑ comment by Stuart_Armstrong · 2011-02-23T13:16:45.042Z · LW(p) · GW(p)
If that's your particular moral judgement...
↑ comment by wedrifid · 2011-02-23T16:24:42.379Z · LW(p) · GW(p)
If that's your particular moral judgement...
Possibly. However there is significant semantic ambiguity centring around 'for' and the concept of purpose. There is legitimate literal meaning there in which the claim is not moral at all (although it would be rather exaggerated.)
A moral judgement that I do make unambiguously is that people should not be expected to answer loaded moral questions of that kind transparently. Most people are, fortunately, equipped with finely tuned hypocrisy instincts, so that they can answer with bullshit in full sincerity. I don't expect those that have defective hypocrisy instincts to self-sabotage by sharing their private, internally coherent value system.
I also note that questions of that form I will reply to with obfuscation, overt insincerity or outright non-response even when I would comfortably answer yes. In this case 'yes' is a rather weak answer, given that 'factor in' does not specify a degree of weighting. Yet many questions (or challenges) of the same form are far less mellow, are not anyone else's business unless I choose them to be, and potentially have either no answer that sounds acceptable or an appropriate response that (once multiplied out) sounds evil.
For example, if the degree of 'factoring in others' intuitions' were specified, it could be the case that factoring in others is the 'evil' response, despite being egalitarian. Kind of like how I consider CEV to be an incredibly stupid plan even though, at a superficial level, it sounds like the goody-goody heroic altruist response.
But come to think of it, your question was about what should be factored into a utilitarian calculation. So my answer would really have to be null -- because utilitarian morality is an abomination that I would never be using in the first place!
↑ comment by Stuart_Armstrong · 2011-02-23T20:53:08.992Z · LW(p) · GW(p)
A robust and candid position.
↑ comment by MinibearRex · 2011-02-22T15:26:27.259Z · LW(p) · GW(p)
That's mostly the question I wanted to discuss here. If you want my own personal opinion, I think that it should be considered, but we shouldn't assign a massive amount of weight to it. I've studied enough psychology to learn that human intuitions are often unreliable. I also wouldn't be very inclined to count the "moral intuitions" of militant religious groups. In this case, however, I'm more unsure.
↑ comment by Stuart_Armstrong · 2011-02-22T17:48:51.889Z · LW(p) · GW(p)
I mean, are you going for a "moral realist" position, where other people might have insight into the "true" morality?
Or is it that other people's moral intuitions might bring up issues that you hadn't thought of, or illustrate the consequences of some of your own positions?
Or is it political: it would be better to have a more consensus view (for practical or moral reasons), even if we disagree with certain aspects of it?
comment by rohern · 2011-02-21T07:44:06.650Z · LW(p) · GW(p)
I find it impossible to engage thoughtfully with philosophical questions about morality because I remain unconvinced of the soundness of the first principles that are applied in moral judgments. I am not interested in a moral claim that does not have a basis in some fundamental idea with demonstrable validity. I will try to contain my critique to those claims that do attempt at least what I think to be this basic level of intellectual rigor.
Note 1: I recognize that I introduced many terms in the above statement that are open to challenge as loaded and biased. I hope this will not distract from my real concern, as stated below.
Note 2: I recognize that I am likely ignorant of the thought of philosophers who have dug into this question. If you can present to me any of these ideas, if they respond clearly and directly to my objections, please do.
I find moral problems intractable and even ridiculous because I have not managed to find foundations for moral judgment through my own inquiry and those foundations proposed by others have all proven specious, at least in my judgment. Examples of the latter include religious ethical systems that claim basis in the mind of a deity, ethical systems based on an individual's emotional response to X scenario, and pragmatic claims, such as X is moral because it is useful. I admit that the latter is the argument that comes nearest to intriguing me.
Overall, I am frustrated with the a priori assumption that morality must exist (an assumption seemingly based on the fact that the word exists), so let us set out to FIND it. Perhaps it should be found first, and assumptions can come later. Until it is, I have to be neutral and frustrated.
I have not yet had a conversation in which my interlocutor, while making moral claims, could provide convincing definitions of fundamental principles, including justice, duty, the good, vice, etc., though the speaker will have readily made use of these terms. It is not helpful that the population of individuals in society who have attempted to understand and establish such principles on their own, and done so carefully and analytically, is near zero.
As for scenarios resembling the Torture vs. Dust Specks problem: My response is to reject the premise. No, I do not need to make that choice! Morality, at least as that word seems to be applied in non-academic fashion (meaning in daily use), has nothing to do with such abstraction. Moral choices involve actual theft, actual death, actual starvation, actual inequality, etc. etc. The Torture vs. Dust Specks choice is one that no one will ever need to make, so while it might be an intriguing question, I think it avoids the actual subject of morality, or what you might want to call "applied morality". I feel the same about psychological studies that ask questions about pushing people in front of trains. This is a field built only of theory with no area that actually touches human experience.
↑ comment by NihilCredo · 2011-02-21T08:52:08.331Z · LW(p) · GW(p)
I support and agree with every paragraph except the last one.
I cannot come up with any sensible, useful separation between "actual scenarios" and "thought experiments". Consider the following question: "You have the ability to instantly make an arbitrarily high number of copies of a book, and distribute them to billions of people, at a negligible cost to everyone. Should you compensate the author before doing so?". To us this is an everyday matter, but to a medieval scholar it is about as much of an abstraction as "torture vs. dust specks". It is likely more abstract to him than "push the fat guy on the train tracks" is to us.
While I don't believe that morality is well-founded, I do believe that the word "morality" has some meaning, even if it is incorrect - in the same way that theism isn't well-founded, but "theism" still means something. And I consider that the word "morality" must indicate a function (or a set of functions) having its domain included in the set of logically possible universes, and its codomain in some sort of algebraic structure (utilitarians should think it's a totally ordered field, but it needn't be such a powerful structure). I do not think that whether we might actually experience a given universe is a relevant criterion to morality as most people intend the word.
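NihilCredo's framing can be sketched as code. Everything here is an illustrative assumption (the toy universe descriptions, the value names); the point is only that the codomain of a "morality" function need not be a totally ordered field: a mere partial order already supports some moral comparisons while leaving other pairs incomparable.

```python
def morality(universe):
    """Map a toy universe description to the set of values it satisfies.
    Frozensets are partially ordered by inclusion, so some outcomes
    are comparable and others simply are not."""
    values = set()
    if universe.get("nobody_tortured"):
        values.add("no-torture")
    if universe.get("no_dust_specks"):
        values.add("no-specks")
    return frozenset(values)

u_best = {"nobody_tortured": True, "no_dust_specks": True}
u_specks = {"nobody_tortured": True, "no_dust_specks": False}
u_torture = {"nobody_tortured": False, "no_dust_specks": True}

# u_best strictly dominates u_specks (a proper superset of satisfied values)...
assert morality(u_best) > morality(u_specks)
# ...but u_specks and u_torture are incomparable under this codomain:
assert not morality(u_specks) >= morality(u_torture)
assert not morality(u_torture) >= morality(u_specks)
```

A utilitarian would insist on a total order (so that torture and specks always compare); the sketch shows what giving up that requirement looks like, which is one way to read the "infinitely worse category" intuition discussed above.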
↑ comment by rohern · 2011-02-21T23:26:33.839Z · LW(p) · GW(p)
If I take you correctly, you are pointing out that thought experiments, now abstract, can become actual through progress and chance of time, circumstance, technology, etc., and thus are useful in understanding morality.
If this is an unfair assessment, correct me!
I agree with you, but I also hold to my original claim, as I do not think that they contradict. I agree that the thought experiment can be a useful tool for talking about morality as a set of ideas and reactions out-of-time. However, I do not agree that the thought experiments I have read have convinced me of anything about morality in actual practice. This is for one reason alone: I am not convinced that the operation of human reason is the same in all cases, and in particular, in the two cases of the theoretical and the physical/actual.
I am not convinced that if a fat man were actually standing there waiting to be shoved piteously onto the tracks that the human mind would necessarily function in the same way it does when sitting in a cafe and discussing the fate of said to-be switch-pusher.
If I were to stake the distinction between the actual and the theoretical on anything, it would be on the above point. What data have we on the reliability of these thought experiments -- and I think you must agree that, regardless of the hypothetical opinions of medieval scholar types, the Torture vs. Dust Specks scenario is abstract for us, here and now -- to predict human behavior when, to retreat to the cliche, one is actually in the trenches?
This may have some connection to an often-experienced conversational phenomenon: casual nonchalance and liberalism about issues that do not affect the speaker, and a sudden, contradictory conservatism about issues that do. This is a phenomenon I encounter very often as a college student. It is gratis to be easy-going about topics that never impact oneself, but when circumstances change and a price is paid, reason does not reliably produce similar conclusions. Perhaps this is not a fair objection, however, as we could claim that such a person is being More Wrong.
If you can convince me of a reliable connection, you'll have convinced me of the larger point.
↑ comment by NihilCredo · 2011-02-22T10:30:27.957Z · LW(p) · GW(p)
I am not convinced that if a fat man were actually standing there waiting to be shoved piteously onto the tracks that the human mind would necessarily function in the same way it does when sitting in a cafe and discussing the fate of said to-be switch-pusher.
If I were to stake the distinction between the actual and the theoretical on anything, it would be on the above point. What data have we on the reliability of these thought experiments -- and I think you must agree that, regardless of the hypothetical opinions of medieval scholar types, the Torture vs. Dust Specks scenario is abstract for us, here and now -- to predict human behavior when, to retreat to the cliche, one is actually in the trenches?
I... don't think the point of such thought experiments was ever to predict what a human will do. That we do not make the same choices under pressure that we do when given reflection and distance is quite obvious. If you are interested in predicting what people will do, you should look at psychological battery tests, which (should) strive to strike a balance between realism and measurability.
The point of train tracks-type experiments was to force one to demand some coherence from their "moral intuition", and to this end the fact that you're making such choices sitting in a café is a feature, not a bug, because it lets you carefully figure out logical conclusions on which (at least in theory) you will then be able to unthinkingly rely once you're in the heat of the moment (probably not an actual train track scenario, but situations like giving money to beggars or voting during jury duty where you only have seconds or hours to make a choice). When you're actually in the trenches, as you put it, your brain is going to be overwhelmed by a zillion more cognitive biases than usual, so it's very much in your interest to try and pre-make as many choices as possible while you have the luxury of double-checking every one of your assumptions and implications.
↑ comment by TheOtherDave · 2011-02-22T16:10:14.197Z · LW(p) · GW(p)
One problem I have with the way such thought experiments are phrased is that they often ask "what would you do?" rather than "what's the best thing to do?", which muddles this notion of being more interested in my moral intuitions about the latter than my predictions about the former.
But I realize that people and cultures vary widely in how they interpret phrases like that.
↑ comment by TheOtherDave · 2011-02-21T17:02:14.330Z · LW(p) · GW(p)
Moral intuitions demonstrably exist.
That is, many people demonstrably do endorse and reject certain kinds of situations in a way that we are inclined to categorize as a moral (rather than an aesthetic or arbitrary or pragmatic) judgment, and demonstrably do signal those endorsements and rejections to one another.
All of that behavior has demonstrable influences on how people are born and live and die, suffer and thrive, are educated and remain ignorant, discover truths and believe falsehoods, are happy and sad, etc.
I believe all of that stuff matters, so I believe how the moral intuitions that influence that stuff are formed, and how they can be influenced, is worth understanding.
And, of course, the SIAI folks believe it matters because they want to engineer an artificial system whose decisions have predictable relationships to our moral intuitions even when it doesn't directly consult those intuitions.
Now, maybe none of that stuff is what you're talking about... it's hard to tell precisely what you are rejecting the study of, actually, so I may be talking past you.
If what you mean to reject is the study of morality as something that exists in the world outside of our minds and our behavior, for example, I agree with you.
I suspect the best way to encourage the rejection of that is to study the actual roots of our moral judgments; as more and more of our judgments can be rigorously explained, there will be less and less room for a "god of the moral gaps" to explain them.
And I agree with NihilCredo that the distinction between "applied morality" and "theoretical morality" is not a stable one -- especially when considering large-scale engineering projects -- so refusing to consider theoretical questions simply ensures that we're unprepared for the future.
Also, thought experiments are often useful tools to clarify what our intuitions actually are.
Replies from: rohern, orthonormal↑ comment by rohern · 2011-02-21T23:07:39.735Z · LW(p) · GW(p)
I think we may indeed be talking past each other, so I will try to state my case more cogently.
I am not denying that people do possess ideas about something named "morality". It would be absurd to claim otherwise, as we are here discussing such ideas.
I am denying that, even if I accept all of their assumptions, individuals who claim these ideas as more-than-subjective --- by that I think I mean that they claim their ideas to be applicable to a group rather than only to one man, the holder of the ideas --- can convince me that these ideas are not wholly subjective and individual-dependent.
If it is the case that morality is individual only, then that is an interesting conclusion and something to talk about, but it does seem, at least to a first approximation, that for a judgment to be considered moral, it must have some broader applicability among individuals, rather than concerning but one person. What can Justice be if it is among one man only? This seems a critical part of what is meant by "morality". It is in this latter, broad case, that moral philosophy appears null.
If you possess an idea of morality and desire that I consider it to have some connection with the world and with all persons --- and surely I must require that it have such a connection, as moral claims attempt to dictate the interaction between people, and thus cannot be content to be contained in one mind alone --- at least enough of a connection that you can, through reasoned argument, convince me that your claims are both valid and sound, then surely your ideas must make reference to principles that I can discover individually to both exist and serve as predicates to your ideas. If you cannot elucidate these foundations, then how can I be brought to your view through reason? This was the intent of my original criticism, to ask why these foundations are so lousy and to beg that someone make them otherwise if moral claims are to be made.
I think that this is the crux of my objection. I cannot find moral claims that I can be brought to accept through reason alone, as even in the most impressive cases such claims are deeply infected by subjective assumptions that are incommunicable and --- dare I write it? --- irrational.
(This is to change the subject somewhat, but I find that the quality of an idea that allows it to be communicated is necessary to its being considered the result of reason and objective. I use that last word with 10,000 pounds of hesitation.)
However, and now I think that we are talking to each other directly, if, when you write of moral ideas, you refer only to those ideas that currently do exist, whether logically well-constructed or not, and you say that you are interested in studying these for their effects, then I am agreed.
I certainly agree that, whether I am convinced of its validity or use, morality does exist as a thing in the minds of men and thus as an influence on human life. But, I think that restricting ourselves to this case has gargantuan ramifications for the definition of "moral" and drastically cuts the domain of objects on which moral ideas can act. It seems this domain can include only those which involve human beings in some fashion. If morality is exclusively a consequence of the history of human evolution and particular to our biology -- and I do agree that it is -- then I feel that I am bound by it only as far as my own biology has imprinted this moral sense upon me. If it is just biological and not possible to derive through application of reason, then, if I desire to make of myself a creature of reason alone, what care have I for it, but as a curiosity of anthropology?
I suspect that we agree, but that I took a bottom-up approach to get there and left the conclusion implicit, if present at all. All apologies.
Avoided in this post has been struggle with the word "morality" itself. I suspect we could write reams on that. If you think it worthwhile, we should, as the debate may be swung on the ability or inability to pin-down this notion.
(Note: As for SIAI, I think imprinting upon an AI human notions of moral judgments would be hideously dangerous for two reasons: 1) Human beings seem capable in almost every situation of overthrowing such judgments. If said AI is bound in similar manner, then what matters it for controlling or predicting its behavior? 2) If said AI is to possess a notion of justice and of a being who has abdicated certain rights due to immoral conduct, what will its judgment be of the humanity that has taught it morals? Can it not glance, not at history, but simply at the current state of the world and find immediately and with disgust ample grounds for the conclusion that very many humans have surrendered any claim to the moral life? It would be a strange moral algorithm if an AI did not come to this conclusion. Perhaps that is rather the point, as morality even among humans is a strange and often-blind algorithm.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-02-22T23:25:12.812Z · LW(p) · GW(p)
I agree with your basic point that moral intuitions reflect psychological realities, and that attempts to derive moral truths without explicitly referring to those realities will inevitably turn out to implicitly embed them.
That said, I think you might be introducing unnecessary confusion by talking about "subjective" and "individual." To pick a simple and trivial objection, it might be that two people, by happenstance, share a set of moral intuitions, and those intuitions might include references to other people. For example, they might each believe "it is best to satisfy the needs of others," or "it is best to believe things believed by the majority" or "it is best to believe things confirmed by experiment." Indeed, hundreds of people might share those intuitions, either by happenstance or by mutual influence. In this case, the intuition would be inter-subjective and non-individual, but still basically the kind of thing we're talking about.
I assume you mean to contrast it with objective, global things like, say, gravity. Which is fine, but it gets tricky to say that precisely.
It seems this domain can include only those which involve human beings in some fashion.
Here, again, things get slippery. First, I can have moral intuitions about non-humans... for example, I can believe that it's wrong to club cute widdle baby seals. Second, it's not obvious that non-humans can't have moral intuitions.
if I desire to make of myself a creature of reason alone, what care have I for it, but as a curiosity of anthropology?
If that is in fact your desire, then you haven't a care for it. Or, indeed, for much of anything else.
Speaking personally, though, I would be loath to give up my love of pie, despite acknowledging that it is a consequence of my own biology and history.
Agreed that imprinting an AI with human notions of moral judgments, especially doing so with the same loose binding to actual behavior humans demonstrate, would be relatively foolish. This is, of course, different from building an AI that is constrained to behave consistently with human moral intuitions.
Agreed that such an AI would easily conclude that humans are not bound by the same constraints that it is bound by. Whether this would elicit disgust or not depends on a lot of things. Sharks are not bound by my moral intuitions, but they don't disgust me.
Replies from: rohern↑ comment by rohern · 2011-02-23T06:49:24.652Z · LW(p) · GW(p)
I think we might still be talking past each other, but here goes:
The reason I posit and emphasize a distinction between subjective judgments and those that are otherwise -- I have a weak reason for not using the term "objective" here -- is to highlight a particular feature that moral claims lack, and in lacking, are weakened by. That is, I take a claim to be subjective if, to hold it myself, I must come upon it by chance. I cannot be brought to it through reason alone. It is an opinion or intuition that I cannot trace logically in my own thought, so I cannot communicate it to you by guiding you down the same line. The reason I think this distinction matters is that without this logical structure, it is not possible for someone to bring me to experience the same intuition through reasoned argument or demonstration. Without this feature, morality must be an island state. This is ruinous, because morality inevitably and necessarily touches upon interactions between people. If it cannot do this, it cannot do much.
Perhaps we should come to common agreement, are at least agreed-upon disagreement on this point before we try other things.
Other Things:
I suspect -- this is an idea I have only recently invented and have not entirely examined -- that any idea that is irrational needs must be essentially incommunicable. How could it be otherwise? If you can lay out the logic behind a thought and give support to its predicates carefully and patiently, and of course your logic is valid and your predicates sound, how can I, if I am open to reason, not accept what you say as true? That is, if you can demonstrate your ideas as the logical consequences of some set of known truths, I must, because that is what logical consequence is, accept your ideas as true.
I have not witnessed this done with moral notions. Hence my doubt about their existence as rational ideas. I do not doubt that people have moral ideas, but I doubt that they can be communicated to people who have not already come upon them by chance, and who then can only be partially sure that you are of common mind.
Perhaps I can draw a parallel with the distinction between Greek and Babylonian mathematics: the difference between demonstration by proof and attempted demonstration by repeated example. The first (except to mathematicians of the subtle variety), if done properly, seems able, by its nature, to accomplish the goal of communication in every case. Can this be said of the latter type? I think only in the case when the examples given are logically structured so as to be a form of the first type.
"I agree with your basic point that moral intuitions reflect psychological realities, and that attempts to derive moral truths without explicitly referring to those realities will inevitably turn out to implicitly embed them."
I have not wanted to make this claim. What I am claiming is that this claim does appear, thus far, to hold water. However, absence of evidence is not evidence of absence, etc. etc. I am asking for someone to show me the light, as it were.
"First, I can have moral intuitions about non-humans... for example, I can believe that it's wrong to club cute widdle baby seals. Second, it's not obvious that non-humans can't have moral intuitions."
As for your first objection, have you not given precisely the sort of case I was talking about? The moral judgment stated is not about bears clubbing baby seals, it is about humans doing it! Clearly that does involve humans. Come up with a moral judgment about trees overusing carbon dioxide and you'll have me pinned.
"If that is in fact your desire, then you haven't a care for it. Or, indeed, for much of anything else."
That is just silly, is it not? I must at least care for reason itself. The desire to be rational is a passion indeed. If I must be paradoxical at least that far, I will take it and move on. As for your love of pie, if it is really a consequence of your biology and history, then you CANNOT give it up. You cannot will yourself to unlove it, or it must thus not be the product of the aforesaid forces alone.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-02-23T13:58:53.858Z · LW(p) · GW(p)
I am fairly sure that we aren't talking past each other, I just disagree with you on some points. Just to try and clarify those points...
You seem to believe that a moral theory must, first and foremost, be compelling... if moral theory X does not convince others, then it can't do much worth doing. I am not convinced of this. For example, working out my own moral theory in detail allows me to recognize situations that present moral choices, and identify the moral choices I endorse, more accurately... which lowers my chances of doing things that, if I understood better, I would reject. This seems worth doing, even if I'm the only person who ever subscribes to that theory.
You seem to believe that if moral theory X is not rationally compelling, then we cannot come to agree on the specific claims of X except by chance. I'm unconvinced of that. People come to agree on all kinds of things where there is a payoff to agreement, even where the choices themselves are arbitrary. Heck, people often agree on things that are demonstrably false.
Relatedly, you seem to believe that if X logically entails Y, then everyone in the world who endorses X necessarily endorses Y. I'd love to live in that world, but I see no evidence that I do. (That said, it's possible that you are actually making a moral claim that having logically consistent beliefs is good, rather than a claim that people actually do have such beliefs. I'm inclined to agree with the former.)
I can have a moral intuition that bears clubbing baby seals is wrong, also. Now, I grant you that I, as a human, am less likely to have moral intuitions about things that don't affect humans in any way... but my moral intuitions might nevertheless be expressible as a general principle which turns out to apply to non-humans as well.
You seem to believe that things I'm biologically predisposed to desire, I will necessarily desire. But lots of biological predispositions are influenced by local environment. My desire for pie may be stronger in some settings than others, and it may be brought lower than my desire for the absence of pie via a variety of mechanisms, and etc. Sure, maybe I can't "will myself to unlove it," but I have stronger tools available than unaided will, and we're developing still-stronger tools every year.
I agree that the desire to be rational is a desire like any other. I intended "much of anything else" to denote an approximate absence of desire, not a complete one.
↑ comment by rohern · 2011-02-24T05:25:07.169Z · LW(p) · GW(p)
I think an important part of our disagreement, at least for me, is that you are interested in people generally and morality as it is now --- at least your examples come from this set --- while I am trying to restrict my inquiry to the most rational type of person, so that I can discover a morality that all rational people can be brought to through reason alone without need for error or chance. If such a morality does not exist among people generally, then I have no interest in the morality of people generally. To bring it up is a non sequitur in such a case.
I do not see that people coming to agree on things that are demonstrably false is a point against me. This fact is precisely why I am turned off by the current state of ethical thought, as it seems infested with examples of this circumstance. I am not impressed by people who will agree to an intellectual point because it is convenient. I take truth first, at least that is the point of this inquiry.
I am asking a single question: Is there (or can we build) a morality that can be derived with logic from first principles that are obvious to everyone and require no Faith?
Replies from: TheOtherDave, Prolorn↑ comment by TheOtherDave · 2011-02-24T14:23:35.057Z · LW(p) · GW(p)
You're right, I'm concerned with morality as it applies to people generally.
If you are exclusively concerned with sufficiently rational people, then we have indeed been talking past each other. Thanks for clarifying that.
As to your question: I submit that for that community, there are only two principles that matter:
1. Come to agreement with the rest of the community about how to best optimize your shared environment to satisfy your collective preferences.
2. Abide by that agreement as long as doing so is in the long-term best interests of everyone you care about.
...and the justification for those principles is fairly self-evident. Perhaps that isn't a morality, but if it isn't I'm not sure what use that community would have for a morality in the first place. So I say: either of course there is, or there's no reason to care.
The specifics of that agreement will, of course, depend on the particular interests of the people involved, and will therefore change regularly. There's no way to build that without actually knowing about the specific community at a specific point in time. But that's just implementation. It's like the difference between believing it's right to not let someone die, and actually having the medical knowledge to save them.
That said, if this community is restricted to people who, as you implied earlier, care only for rationality, then the resulting agreement process is pretty simple. (If they invite people who also care for other things, it will get more complex.)
Replies from: rohern↑ comment by Prolorn · 2011-02-25T07:23:48.160Z · LW(p) · GW(p)
I am asking a single question: Is there (or can we build) a morality that can be derived with logic from first principles that are obvious to everyone and require no Faith?
Perhaps you've already encountered this, but your question calls to mind the following piece by Yudkowsky: No Universally Compelling Arguments, which is near the start of his broader metaethics sequence.
I think it's one of Yudkowsky's better articles.
(On a tangential note, I'm amused to find on re-reading it that I had almost the exact same reaction to The Golden Transcendence, though I had no conscious recollection of the connection when I got around to reading it myself.)
↑ comment by orthonormal · 2011-02-22T04:39:10.556Z · LW(p) · GW(p)
I agree vehemently with your comment.
↑ comment by Perplexed · 2011-02-21T14:45:19.376Z · LW(p) · GW(p)
I have not yet had a conversation in which my interlocutor, while making moral claims, could provide convincing definitions of fundamental principles, including justice, duty, the good, vice, etc., though the speaker will have readily made use of these terms. It is not helpful that the population of individuals in society who have attempted to understand and establish such principles on their own, and done so carefully and analytically, is near zero.
Near zero? The number of people who have given careful and analytic thought to the foundational principles of ethics easily numbers in the thousands. Comparable to the number of people who have given equivalent foundational consideration to logic or thermodynamics or statistics. Given that there are seven billion people in the world, it is understandable that you have not yet had a conversation with one of these people. But it is easy enough to find the things they have written - online or in libraries. Give it a look. I can't promise it will be convincing, or even that it will improve your opinion of the field. But you will find that a lot of serious thought has been given to the subject.
Replies from: rohern↑ comment by rohern · 2011-02-21T23:38:55.713Z · LW(p) · GW(p)
Forgive me for being sloppy with my language. Given what I wrote, your objection is entirely reasonable.
The idea that I meant to express is that, while it seems safe to assume that virtually everyone who has ever lived long enough to become a thinking person has encountered some kind of moral question in his life, we cannot say that an appreciable percentage of these people has sat and carefully analyzed these questions.
Even if we restrict ourselves only to people alive today and living in the United States -- an enormous restriction considering the perhaps 100 billion people who have lived ever -- the population of thousands you point to is pathetically small. Certainly I agree that Socrates, Mill, Kant, & their Merry Band have approached the subject seriously, but beyond these we've but a paucity, which I think is truly surprising given the apparent universality of moral experience.
The point of the comparison to studying thermodynamics or logic (perhaps not quite so with logic) is this: while we can say that everyone is of course affected by thermodynamics, almost no one attempts to think about it. The effect of thermodynamics on a person's life is not impacted by that person's ignorance of thermodynamic laws. However, a huge number of people do attempt to think and talk about morality, and I am not convinced that this latter group does so rigorously or well, which does have a real and critical effect on what morality is in actual practice.
However, I hope you will notice that this is a minor point, and was not a premise to the larger objection I was putting forward.
comment by prase · 2011-02-21T16:02:40.069Z · LW(p) · GW(p)
I would predict, based on human nature, that a if the 3^^^3 people were asked if they wanted to inflict a dust speck in each one of their eyes, in exchange for not torturing another individual for 50 years, they would probably vote for dust specks.
Each one with probability of order 1/3^^^3? Well that's what I call overconfidence.
Replies from: MinibearRex↑ comment by MinibearRex · 2011-02-21T21:49:57.890Z · LW(p) · GW(p)
Note the word "vote". I would expect the number in favor of dust specks to exceed (3^^^3)/2.
Replies from: prase↑ comment by prase · 2011-02-22T00:49:52.491Z · LW(p) · GW(p)
Sorry for misunderstanding, then. But what's the point of the big number, if the claim is only about proportion?
Replies from: Pavitra↑ comment by Pavitra · 2011-02-22T02:07:18.107Z · LW(p) · GW(p)
There's a certain intuition that one should assign greater weight to an other-moral-belief if many other people believe it.
Replies from: prase↑ comment by prase · 2011-02-22T06:55:51.559Z · LW(p) · GW(p)
Then this would be a clear abuse of that intuition. Among 3^^^3 people with varied beliefs, all moral beliefs that exist today on Earth would be believed by many people.
Actually I think that the intuition applies only if "many" is measured relative to the size of the population.
Replies from: Pavitra↑ comment by Pavitra · 2011-02-24T02:20:23.666Z · LW(p) · GW(p)
I think the population-ratio measure sounds about right. Phrased in those terms, the original idea was that as the number of people unanimously agreeing with you increases, the proportion of total belief-weight represented by your own opinion approaches zero.
Replies from: prase↑ comment by prase · 2011-02-24T09:37:00.438Z · LW(p) · GW(p)
What is belief weight? Does your assertion mean that with 3^^^3 people, any person's own opinion has approximately zero value?
Replies from: Pavitra↑ comment by Pavitra · 2011-02-24T21:53:34.881Z · LW(p) · GW(p)
What is belief weight?
You said in the 3-parent that "many" should be measured relative to the size of the population. I interpret that to mean that beliefs should be weighted by the number of people who believe them, in the sense of a weighted average (although we're computing something different from the average, the concept of "weighting" analogizes over). The weight of a belief is then the number of people who believe it divided by the number of people polled.
Does your assertion mean that with 3^^^3 people, any person's own opinion has approximately zero value?
Yes.
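The weighting scheme under discussion can be sketched numerically. A minimal illustration (the function name and sample numbers are my own, not from the thread):

```python
def belief_weight(believers: int, population: int) -> float:
    """Weight of a belief: the fraction of the polled population holding it."""
    return believers / population

# A lone dissenter's own opinion, weighted against ever-larger populations:
for n in (10, 1000, 10**9):
    print(n, belief_weight(1, n))

# As the population grows toward 3^^^3, the weight of any single
# person's opinion approaches zero.
assert belief_weight(1, 10**9) < 1e-8
```

This makes Pavitra's "approximately zero" concrete: with a fixed numerator and an astronomically large denominator, one person's own opinion contributes a vanishing share of the total belief-weight.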
comment by Oligopsony · 2011-02-21T13:19:35.440Z · LW(p) · GW(p)
I think the answer is that morality has to be counted, but we also have to count changes to morality. If moral preferences were entirely a matter of intellectual commitment, this might lead to double counting, but in fact people really do experience pride, guilt, and so on - and I doubt that morality could have any effect on their behavior if it didn't.
Counting the changes to morality can cut both ways. For instance: some people have a strong inclination to have sex with people of the same sex, while many people (sometimes the same ones) are deeply morally troubled by this. A good utilitarian calculation would count their moral anguish, but it would also note that the best long-term equilibrium is one in which the original moral attitude has disappeared. On the other hand, consider torture: perhaps, independent of people's moral attitudes, torture may in some situations be utility-enhancing, and perhaps we may know that repeated sanction of torture will lead to the "good" long-run equilibrium of people's moral intuitions about torture disappearing. In that case we'd still have to deal with any other effects of people's instinctive revulsion towards torture being overridden, which I doubt would be worth the price.
(My apologies for choosing politically loaded topics, which I suspect is mainly unavoidable given the question at hand.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-02-21T17:10:57.653Z · LW(p) · GW(p)
A good utilitarian calculation would count their moral anguish, but it would also note that the best long-term equilibrium is one in which the original moral attitude has disappeared.
Mm? If I have a strong inclination to do X, and a strong moral intuition that "X is wrong", and I suffer anguish because of the conflict, how do you conclude that the best result is that the moral intuition disappears... either in that particular case, or in general? I mean, I happen to agree with you about the particular case you mention, but I don't see how you reached it.
It's worth noting, incidentally, that in addition to "eliminate the intuition" and "eliminate the inclination" there is the option of "eliminate the anguish." That is, I might reach a point where I want to do X, and I think X is wrong, and I experience conflict between those impulses, and I work out some optimal balance between those conflicting impulses, and I am not anguished by this in any way.
Replies from: Oligopsony↑ comment by Oligopsony · 2011-02-22T19:09:23.083Z · LW(p) · GW(p)
Mm? If I have a strong inclination to do X, and a strong moral intuition that "X is wrong", and I suffer anguish because of the conflict, how do you conclude that the best result is that the moral intuition disappears... either in that particular case, or in general? I mean, I happen to agree with you about the particular case you mention, but I don't see how you reached it.
Good question; here's a formal answer:
We can, in the long run, keep the inclination and eliminate the intuition, keep the intuition and drop the inclination, or keep both. The utility of each is:
- Side with the inclination: fun of (less pain of) indulging in the inclination (what's intrinsic to the action itself, i.e. the pain of being tortured but not the discomfort we have that it's being inflicted), less the effort needed for (and plus any second-order effects of) eliminating the intuition.
- Side with the intuition: zero, less the effort needed for (and plus any second-order effects of) eliminating the inclination.
- Status quo: fun of (less pain of) indulging in the inclination at the rates people will engage in given moral disapproval, less the pain felt by those with the intuition.
I think that this adequately explains 1) why the OP felt ambivalent about counting moral intuitions - even if they count in principle, you really should ignore them sometimes - and 2) why your and my intuitions agree in the homosexuality case: gay sex is fun and doesn't intrinsically harm anyone, homophobia brings little joy to even its holders, and it would take much less effort to eliminate homophobia than homosexuality.
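The three-way comparison can be sketched as a toy calculation. All function names and numeric values below are invented for illustration, not taken from the comment:

```python
# Hypothetical utilities for the three long-run equilibria described above.
# Every number here is an illustrative assumption, not a claim from the thread.

def side_with_inclination(fun, pain, effort_to_remove_intuition, second_order=0.0):
    """Keep the inclination, eliminate the intuition."""
    return (fun - pain) - effort_to_remove_intuition + second_order

def side_with_intuition(effort_to_remove_inclination, second_order=0.0):
    """Keep the intuition, eliminate the inclination: a baseline of zero."""
    return 0.0 - effort_to_remove_inclination + second_order

def status_quo(fun, pain, suppressed_rate, anguish):
    """Keep both: indulgence at a reduced rate, plus ongoing moral anguish."""
    return (fun - pain) * suppressed_rate - anguish

# The homosexuality example: indulging is fun and intrinsically harmless,
# the intuition brings little joy, and it is cheaper to remove the
# intuition than the inclination.
options = {
    "side with inclination": side_with_inclination(fun=10, pain=0, effort_to_remove_intuition=2),
    "side with intuition": side_with_intuition(effort_to_remove_inclination=8),
    "status quo": status_quo(fun=10, pain=0, suppressed_rate=0.4, anguish=5),
}
print(max(options, key=options.get))  # -> side with inclination
```

Flipping the assumed costs (e.g. making the inclination cheap to remove and the indulgence intrinsically painful, as in the torture case) flips which branch the same calculation favors.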
This model grounds common moral intuitions like "if it doesn't harm anybody [implied: beyond the feelings of those who morally disapprove], why ban it?", the idea that preferences that are inborn should take precedence over those that are inculcated, and so on. And it seems that even the other side in this debate is operating from this framework: hence their propensity to argue that homosexuality is not inborn, or that homophobia is, or that there would be very bad second-order effects of the disappearance of homophobia (you would marry your box turtle!)
It's worth noting, incidentally, that in addition to "eliminate the intuition" and "eliminate the inclination" there is the option of "eliminate the anguish." That is, I might reach a point where I want to do X, and I think X is wrong, and I experience conflict between those impulses, and I work out some optimal balance between those conflicting impulses, and I am not anguished by this in any way.
I don't really think this can work over the long term. An individual might be mistaken about how much moral anguish she'd experience over something, but in the long run, those moral intuitions that don't affect you emotionally aren't going to affect you behaviorally. (This is the reason real-life utilitarians don't Feed The Utility Monster unless they have an enabling peer group.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-02-22T20:36:37.539Z · LW(p) · GW(p)
Ah, I see what you mean. So if the world changed such that eliminating the inclination cost less than eliminating the intuition -- say, we discover a cheap-to-produce pill that makes everybody who takes it heterosexual without any other side-effects -- a good utilitarian would, by the same token, conclude that the best long-term equilibrium is one in which the inclination disappeared. Yes?
Replies from: Oligopsony↑ comment by Oligopsony · 2011-02-22T21:05:19.469Z · LW(p) · GW(p)
In principle, I suppose so (though if you're in a relationship, a pill to change your orientation is hardly low-cost!)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-02-22T22:55:09.470Z · LW(p) · GW(p)
Agreed, though that simply extends the definition of "long-term equilibrium" a few generations.
Anyway, cool; I'd misunderstood your original claim to be somewhat more sweeping than what you actually meant, which is why I was uncertain. Thanks for clarifying!
comment by David_Gerard · 2011-02-21T08:44:05.230Z · LW(p) · GW(p)
I would predict, based on human nature, that a if the 3^^^3 people were asked if they wanted to inflict a dust speck in each one of their eyes, in exchange for not torturing another individual for 50 years, they would probably vote for dust specks.
I think you've nailed my problem with this scenario: anyone who wouldn't go for this, I would be disinclined to listen to.
Replies from: rohern↑ comment by rohern · 2011-02-22T07:58:23.494Z · LW(p) · GW(p)
Perhaps this is just silliness, but I am curious how you would feel if the question were:
"You have a choice: Either one person gets to experience pure, absolute joy for 50 years, or 3^^^3 people get to experience a moment of pleasure on the level experienced when eating a popsicle."
Do you choose popsicle?
Replies from: David_Gerard, TheOtherDave↑ comment by David_Gerard · 2011-02-22T08:13:25.293Z · LW(p) · GW(p)
I suspect I would. But not only does utility not add linearly, you can't just flip the sign, because positive and negative are calculated by different systems.
↑ comment by TheOtherDave · 2011-02-22T16:16:11.736Z · LW(p) · GW(p)
I don't think it's silly at all.
Personally, I experience more or less the same internal struggle with this question as with the other: I endorse the idea that what matters is total utility, but my moral intuitions aren't entirely aligned with that idea, so I keep wanting to choose the individual benefit (joy or non-torture) despite being unable to justify choosing it.
Also, as David Gerard says, it's a different function... that is, you can't derive an answer to one question from an answer to the other... but the numbers we're tossing around are so huge that the difference hardly matters.