Qualitative differences
post by RST · 2017-11-18T09:02:08.102Z · LW · GW · 38 comments
Probably I am making a huge mistake by resuming a ten-year-old discussion, especially because this is my first post. Anyway, let's try this experiment.
One common argument for choosing "dust specks" over "torture" is that the experience of torture is qualitatively different from the experience of receiving a dust speck in the eye, and so the two should not be compared (assuming that there won't be other long-term negative consequences such as car accidents and surgical mistakes).
Moderate pain does not cause the reactions that extreme and prolonged distress produces in the human organism: panic attacks, alterations of physiological functions, and psychological trauma, with all the impairment of capacities that derives from it.
But the most obvious characteristic that distinguishes the pain caused by torture from the pain caused by a dust speck in the eye is its intolerability: people would rather die than endure 50 years of torture. Some argue that these qualitative differences don't exist because we can create a sequence of injuries, each comparable to the previous and the next.
For example:
30 dust specks divided among 30 people ≃ 1 slap.
10 slaps divided among 10 people ≃ 1 punch.
10 punches divided among 10 people ≃ 1 small cut.
10 small cuts divided among 10 people ≃ 1 deep cut.
10 deep cuts divided among 10 people ≃ 1 tonsillectomy without anesthesia.
10 tonsillectomies without anesthesia divided among 10 people ≃ 1 torture.
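Chaining these equivalences transitively (a purely illustrative multiplication; the exchange rates above are stipulated, not measured) gives:

$$1 \text{ torture} \;\simeq\; 10 \times 10 \times 10 \times 10 \times 10 \times 30 = 3{,}000{,}000 \text{ dust specks}.$$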
So it seems that we can't clearly distinguish what is tolerable from what is intolerable, unless we choose, on the spectrum of grades of pain, an arbitrary limit, or rather a distance, that separates two incomparable kinds of injury. Many have rejected this idea because the line would be subjective and because the difference between things near the limit is small.
However, these limits have an important function in our decision-making process: in some situations there can be a range of options which are similar to each other, but which at the same time have differences that accumulate step by step, until they become too relevant to be ignored.
Consider this thought experiment: a man has bought a huge house with many rooms; his favorite color is orange and he wants to paint all the rooms with it. Omega offers its help and gives him five options; it will paint:
A) One room with R=255 G=150 B=0 paint
B) Two rooms with R=255 G=120 B=0 paint
C) Four rooms with R=255 G=90 B=0 paint
D) Eight rooms with R=255 G=60 B=0 paint
E) Sixteen rooms with R=255 G=30 B=0 paint.
What will the man prefer? Probably not the first option, but also not the last. He will find a balance between his desire to have orange rooms and his laziness. But what would the man choose if the only possible options were A) and E)?
Probably the first one: although red and orange are part of a continuous spectrum and he doesn't know exactly where the line between them is, he can still distinguish the two colors, and no amount of red is comparable to his favorite color.
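A minimal sketch of this kind of threshold preference; the "no longer orange" cutoff and the scoring rule are hypothetical, invented only to illustrate the structure:

```python
# Options from the thought experiment: (rooms painted, green channel of the paint).
options = {"A": (1, 150), "B": (2, 120), "C": (4, 90), "D": (8, 60), "E": (16, 30)}

ORANGE_CUTOFF = 75  # hypothetical: below this green value the paint reads as red, not orange

def preference(rooms, green):
    # Lexicographic rule: paint past the qualitative cutoff is never compensated
    # by covering more rooms; within the orange range, rooms and hue trade off.
    if green < ORANGE_CUTOFF:
        return (0, 0)
    return (1, rooms * green)

best = max(options, key=lambda name: preference(*options[name]))
print(best)  # "C" with these made-up numbers; between only A and E, the rule picks A
```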
The lifespan dilemma also shows that for many people the answer can't be just a matter of expected value; otherwise everyone would agree to reduce the probability of success to values near 0. When making a choice in the dilemma, people weigh two emotions, the desire to live longer and the fear of dying, and the answer is found when they think they have reached the line which separates an acceptable risk from an excessive one.
This line is quite arbitrary and subjective, like the line between red and orange. Nevertheless, most people can see that the colors of ripe strawberries and ripe oranges are different, that a minimum amount of safety is required, and that annoyance and agony are different things, even if there are intermediate situations between them.
One last example: the atmosphere is divided into many layers, but the limits of each layer are nebulous because their characteristics get more similar near their edges; moreover, the last layer is made of very low-density gas whose particles constantly escape into space. Nevertheless, if we choose two points in the atmosphere, we can see that their characteristics become more different as we increase the difference in their altitude, until we can affirm that they are in two different layers.
This distance is difficult to define and people will disagree on its precise length; however, we cannot ignore the difference between things that are very distant from each other, just as we cannot ignore that the atmosphere is still present at 1 km of altitude while it is not present at 100,000 km of altitude, despite the fact that its outer limit is very ill-defined.
Similarly, we can use our knowledge of human physiology during the events of life to choose a distance that separates two incomparable kinds of experience, and so give priority to the actions that can really change people's quality of life. For instance, I think it is preferable to give $1,000,000 to a poor family rather than $3^^^3 to 3^^^3 middle-class families, because each single dollar in the latter case would cause an irrelevant change compared to the former case. In other words, only the money in the first case will extinguish hunger.
Personally, my empathy usually makes me prefer utilitarian, or rather quantitative, choices when the qualities of the experiences are similar, but it also compels me to save one person from something intolerable rather than 3^^^3 people from something tolerable. This is not scope insensitivity, but rather coherence: I could stand 3^^^3 annoyed but bearable lives, but not 3^^^3-1 lives without annoyance plus one unbearable life, which is quite tautological when you consider the definition of "unbearable".
I don't know exactly where my cut-off is, and I won't be outraged if someone else's cut-off is higher or lower, but I would be rather skeptical if someone claimed that he can't bear a dust speck (or rather 3^^^3 dust specks diluted among 3^^^3 lives).
In addition, if we consider our utility to be something more than raw pleasure, then our disutility should be more than raw pain. Surely pleasure is valuable, but many people want something more from life, and for them no amount of pleasure can substitute for the value of discovery, accomplishment, relationships, experiences, creativity, etc. All these things are valuable, but only together do they make something qualitatively more important, which is often called human flourishing or Eudaimonia.
Similarly, only when pain is combined with frustration, fear, desperation, panic, etc. does it become something qualitatively worse, such as agony. Eudaimonia can withstand a dust speck, or even a stubbed toe, especially since we have the ability to heal, find relief, and even become stronger after a tolerable stress, but it can't withstand 50 years of torture.
The distinction between pain and agony is the same as the one between colors, or between pain and pleasure: in each case neurons fire, but in different ways, and pain is not the mere inhibition of the pleasure centre; after all, one can be happy even while feeling a moderate pain.
Utilitarianism has two orthogonal goals: minimize pain and maximize pleasure. People who choose dust specks could simply have another one: minimize agony. If physiological reactions can be used to distinguish pain from pleasure, then why shouldn't we consider other characteristics and be more precise when considering people's values?
Recognizing the importance of qualitative differences and physiological limits surely makes the utility calculus more complicated. However, there are also advantages: the result would be a system more compatible with people's intuitions, more egalitarian, and safer from utility monsters.
One final thought: I don't delude myself that I will change anyone's opinion on the problem. After all, people who choose torture consider human intuitions wrong since they go against utilitarianism.
On the other hand, people who choose dust specks consider utilitarianism (or at least some forms of utilitarianism) wrong because it goes against moral intuitions, and because we think that simplifying human morality is dangerous.
People in the first case care about maximizing pleasure and minimizing pain.
We "dust speck choosers" have another aim: to create an ethical system that establishes a safety net to protect people from unbearable situations and from injuries that really destroy the capability to pursue a decent, serene, worthy life, which is probably the foremost goal of most people.
For utilitarianism, it is morally right to torture billions of people for 50 years if this will give someone 3^^^^^^3 years of happiness. There is no reason why we should endorse something so repugnant for the sake of simplicity: utilitarianism is an oversimplification, and it deliberately ignores equality and justice, which are part of people's morality as much as compassion is.
I don't deny that utilitarianism usually works; however, to shut up and multiply is an act of blind faith: it makes people stick to the unjust parts of the system rather than reform them, and I can't deny that I am rather worried, but also curious, about the origins of this zeal.
I would really like to be able to discuss this with some torture choosers. I am sure it could be an interesting mutual occasion to learn more about minds that are very different from our own.
P.S. I am not from an English-speaking country. The assertive manner in which I wrote this post was merely a way to expose my ideas more simply; any advice about style or grammar is appreciated.
38 comments
Comments sorted by top scores.
comment by RST · 2017-12-14T13:46:05.592Z · LW(p) · GW(p)
I am rather new here and I would like to ask: what are the criteria that a post must meet to be put on the front page? How can I improve to meet such criteria?
Replies from: spiralingintocontrol
↑ comment by spiralingintocontrol · 2017-12-14T19:16:42.395Z · LW(p) · GW(p)
There are some guidelines on what sort of content belongs on the frontpage.
I think, based on these guidelines, that the issue with this particular post would be "crowdedness" - people here have discussed this topic a lot already.
Replies from: RST
comment by ryan_b · 2017-11-21T17:57:44.972Z · LW(p) · GW(p)
Do not worry about writing in an assertive manner - I prefer to engage with a strong presentation of the argument. There is no obligation to present all sides of an argument in a single post, although you may find people understand better if you say you are only presenting one side at the beginning.
I have two simpler arguments against preferring 50 years of torture:
1) Suffering is not linear, so summing the harm of dust motes is meaningless.
2) Suffering has a lot of filters. A dust mote is 0 suffering because it falls below the threshold. Linear or not, still comes to ~0.
Replies from: RST
↑ comment by RST · 2017-11-24T14:13:54.751Z · LW(p) · GW(p)
Thanks for the encouragement.
I cannot agree with 2) because I think that a dust speck or a stubbed toe has a disutility which is not ignorable. After all, I would still prefer to avoid them.
Solution 1) is more interesting, but I prefer to say that dust specks are irrelevant compared to torture because they are a different kind of pain (bearable rather than unbearable). The biggest advantage is that, rather than using a different formula, I just include a physiological limit (pain tolerance).
The bigger problem is: where exactly does this limit lie?
The post's aim is to give some examples of limits which are nebulous but still evident.
Replies from: ryan_b
↑ comment by ryan_b · 2017-11-26T22:11:14.488Z · LW(p) · GW(p)
Following the physiological limit argument, I renew my objection of 2) as being below the pain threshold.
I agree that dust specks are not ignorable disutility, but the linearity objection applies to this too - one dust speck is very different from a dust storm. I find this example instructive, because while a storm is comprised of many specks, clearly the relationship is not simply the sum of all the specks.
Replies from: RST
↑ comment by RST · 2017-11-27T18:07:28.991Z · LW(p) · GW(p)
Then I agree with you: certain phenomena present characteristics that emerge only from the interactions of their parts.
I tried to express a similar concept when I wrote: "All these things are valuable, but only together do they make something qualitatively more important, which is often called human flourishing or Eudaimonia.
Similarly, only when pain is combined with frustration, fear, desperation, panic, etc. does it become something qualitatively worse, such as agony."
I would like to ask you a question (because I became curious about people's opinions on the subject after reading these posts): do you think that humans are, or at least should be, utility maximizers when pursuing their goals? (I think that we should be utility maximizers when pursuing the resources to reach our goals, but I wonder if people's decision processes are too incoherent to be expressed as a function.)
Replies from: ryan_b
↑ comment by ryan_b · 2017-12-03T20:00:59.557Z · LW(p) · GW(p)
At first I was tempted to answer yes, but then I reconsidered. It would be more accurate to say I believe humans should consider utility maximizing strategies when pursuing their goals; I don't just say 'be a utility maximizer' because we have multiple goals, and we are notoriously terrible at considering multiple goals at once.
I am comfortable flatly asserting that people's decision processes are too incoherent to be expressed as a function, at least with respect to utility. If you take a look at the Functional Decision Theory paper, you will see that the latest development in decisionmaking is to imagine a function for your decisions and then behave as if you were implementing it: https://arxiv.org/abs/1710.05060
Replies from: RST
↑ comment by RST · 2017-12-04T19:47:51.101Z · LW(p) · GW(p)
Thanks for the article.
To better express my argument I could use this analogy.
Pain can be seen as a poison: people would prefer to avoid it altogether. However, it is really dangerous only when it reaches a certain concentration and causes long-term damage that really impairs people's ability to experience a worthy, or at least bearable, life (which is probably the first goal of most people).
I too have reached the conclusion that people's goals are too numerous, mutable, and nebulous to be expressed by a function; and even if I am wrong, the resulting function would probably be too complex to be applied by humans, at least in their everyday choices.
On the other hand, utility functions surely are useful to maximize our resources and efficiency. So they can be used as tools to reach our goals.
comment by DragonGod · 2017-11-18T21:59:37.199Z · LW(p) · GW(p)
Your morality is potentially mathematically incoherent. Your moral system might imply conclusions that you yourself would find reprehensible.
Before I write down the reductio of your morality, I need to know the answer to the following question.
- The utility of outcomes is continuous in your moral framework. I.e. you agree with this: for any two outcomes $o_i$ and $o_k$ with $u(o_i) < u(o_k)$, there exists another outcome $o_j$ such that $u(o_i) < u(o_j) < u(o_k)$.
- For any two outcomes $o_i$ and $o_j$: $|u(o_i) - u(o_j)| < \epsilon$ (for some small $\epsilon > 0$) $\implies$ $o_i$ and $o_j$ are on the same level.
Do you accept the above two statements?
Replies from: RST
↑ comment by RST · 2017-11-19T08:21:56.892Z · LW(p) · GW(p)
I think that I found a better way to express myself.
Level 1: torture, agony. Depending on the type of torture, I could stand this level only for some seconds/minutes/hours.
Level 2: pain, depression, very low life quality. I could stand this level for some days/months/years.
Level 3: annoyance. I can stand this level. (especially if the annoyance is not constant.)
Level 4: Eudaimonia. I want to be in this level.
I think that ignoring human pain tolerance just to simplify our ethical system is wrong. For instance, this means that I won't tolerate years of torture to avoid annoyances, only to avoid a greater or longer torture. So I think that I have different utility functions that I apply in a hierarchical order.
Then I use empathy, and I don't do to others what I wouldn't do to myself. Surely, if someone has a different pain tolerance, I will take it into account. However, I don't think that anyone would tolerate 50 years of torture.
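A minimal sketch of how such a hierarchical (lexicographic) rule could be applied; the level assignments and population sizes below are made up for illustration and are not RST's exact proposal:

```python
from collections import Counter

# Levels from worst to best, as in the comment above:
# 1 = agony/torture, 2 = pain/depression, 3 = annoyance, 4 = eudaimonia.
def outcome_key(experiences):
    """Sort key that compares outcomes by how many people sit at each bad level,
    worst level first; ties are broken by the next level up."""
    counts = Counter(experiences)
    return tuple(counts.get(level, 0) for level in (1, 2, 3))

torture_one = [1] + [4] * 10**6   # one life in agony, everyone else flourishing
annoy_many  = [3] * (10**6 + 1)   # everyone mildly annoyed

# The option with the smaller key is preferred: annoyance for many beats agony for one.
better = min([torture_one, annoy_many], key=outcome_key)
print(better is annoy_many)  # True
```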
comment by entirelyuseless2 · 2017-11-18T16:57:08.914Z · LW(p) · GW(p)
" Also the lifespan dilemma shows that for many people the answer can’t be just a matter of expected value, otherwise everyone would agree on reducing the probability of success to values near to 0. "
This is a delusion that stems from using the formulation of "expected value" without understanding it. The basic idea of expected value derives from a utility function, which is the effect of being able to give consistent answers to every question of the form, "Would you prefer X to Y, or Y to X? Or does it just not matter?" Once you have such a set of consistent answers, a natural real-valued measuring system falls out, which is such that something is called "twice as good" when it just doesn't matter whether you get a 50% chance of the "twice as good" thing, or a 100% chance of the "half as good" thing.
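In symbols, one standard reading of the definition above (normalizing the utility of the status quo to zero) is:

$$u(Y) = \tfrac{1}{2}\,u(X) \;\iff\; Y \,\sim\, \big[\,0.5 : X,\;\; 0.5 : \text{status quo}\,\big].$$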
But this idea of "twice as good" is defined this way just because it works well. There is no reason whatsoever to assume that "twice as much stuff by some other quantitative measure" is twice as good in this sense; and the lifespan dilemma and all sorts of other things definitively prove that it is not. Twice as much stuff (in the measurement sense) is just not twice as good (in the expected-value sense), period, not even when you are talking about lifespan or lives or whatever else.
In this sense, if you can give consistent answers to every pair of preference options, you will want to maximize expected utility. There is no reason whatsoever to think you will be willing to drive the probability of success down to values near zero, however, since there is no reason to believe that the value of lifespan scales in that way, and definitive reason to believe otherwise.
Replies from: RST
↑ comment by RST · 2017-11-18T17:38:19.152Z · LW(p) · GW(p)
If I have understood correctly, your utility function is asymptotic. I wonder if an asymptote in a utility function can be considered a sort of arbitrary limit.
Anyway, I agree with you, an asymptotic utility function can work and maintain consistency.
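As an illustration of how an asymptotic (bounded) utility function behaves under lifespan-dilemma-style offers, here is a minimal sketch; the saturation scale, the starting lifespan, and the per-offer numbers are invented for the example:

```python
import math

def utility(lifespan_years, scale=1e6):
    # Bounded ("asymptotic") utility: increases with lifespan but approaches 1.
    return 1 - math.exp(-lifespan_years / scale)

p, lifespan = 1.0, 100.0
for offer in range(8):
    expected_utility = p * utility(lifespan)
    print(f"offer {offer}: P(survive)={p:.4f}  lifespan={lifespan:.1e}  EU={expected_utility:.5f}")
    p *= 0.999        # each accepted offer costs a sliver of survival probability...
    lifespan *= 1e3   # ...in exchange for a thousandfold longer lifespan

# Because the utility saturates, expected utility peaks after a few offers and then
# falls, so this agent stops accepting long before P(survive) gets anywhere near 0.
```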
Replies from: TheMajor, entirelyuseless2
↑ comment by entirelyuseless2 · 2017-11-18T19:03:02.526Z · LW(p) · GW(p)
"arbitrary limit"
What would constitute a non-arbitrary limit? A utility function just describes the behavior of a physical being. It is not surprising that a physical being has limits in its behavior -- that follows from the fact that physical law imposes limits on what a thing does. This is why e.g. transhumanist hope for real immortality is absurd. Even if you could find a way around entropy, you will never change the fact that you are following physical laws. The only way you will exist forever is if current physical laws extrapolated from your current situation imply that you will exist forever. There is no reason to think this is the case, and extremely strong reasons to think that it is not.
In the same way, everything I do is implied by physical laws, including the fact that I express a preference for one thing rather than another. It may be that a good conman will be able to persuade some people to order their preferences in a way that gets them to commit suicide (e.g. by accepting the lifespan offer), but they will be very unlikely to be able to persuade me to order my preferences that way. This is "arbitrary" only in the sense that in order for this to be true, there have to be physical differences between me and the people who would be persuaded that in theory neither of us is responsible for. I don't have a problem with that. I still don't plan to commit suicide.
Replies from: RST
comment by DragonGod · 2017-11-18T15:14:51.200Z · LW(p) · GW(p)
Congratulations on your first post.
For instance I think it is preferable to give 1,000,000$ to a poor family rather than 3^^^3$ to 3^^^3 middle class families,
I agree, but only because of the massive inflation caused by the latter option, making it a net negative in my utility function. Assuming we are talking about earth here, I believe the following:
It is preferable to give $10,000,000 to 10,000 middle class families, than to give $1,000,000 to a single poor class family.
If you are consistent, then you disagree with the above. If you agree with the above, then you are inconsistent. I have only two moral axioms (so far):
- Be consistent.
- Maximise your utility.
If you are inconsistent, then please fix that. :)
Replies from: entirelyuseless2, RST
↑ comment by entirelyuseless2 · 2017-11-18T20:25:49.065Z · LW(p) · GW(p)
" If you are inconsistent, then please fix that. :) "
G.K. Chesterton made a lot of errors which he always managed to state in interesting ways. However, one thing he got right was the idea that a lot of insanity comes from an excessive insistence on consistency.
Consider the process of improving your beliefs. You may find out that they have some inconsistencies between one another, and you might want to fix that. But it is far more important to preserve consistency with reality than internal consistency, and an inordinate insistence on internal consistency can amount to insanity. For example, the conjunction of all of your beliefs with "some of my beliefs are false" is logically inconsistent. You could get more consistency by insisting that "all of my beliefs are true, without any exception." But that procedure would obviously amount to an insane arrogance.
The same sort of thing happens in the process of making your preferences consistent. Consistency is a good thing, but it would be stupid to follow it off a cliff. That does not mean that no possible set of preferences would be both sane and consistent. There are surely many such possible sets. But it is far more important to remain sane than to adopt a consistent, but insane, set of preferences.
Replies from: DragonGod
↑ comment by DragonGod · 2017-11-18T21:39:08.752Z · LW(p) · GW(p)
For example, the conjunction of all of your beliefs with "some of my beliefs are false" is logically inconsistent.
I don't see how that conjunction is logically inconsistent? (Believing "all my beliefs are false" would be logically inconsistent, but I doubt any sensible person believes that.)
I think consistency is good. A map that is not consistent with itself cannot be used for the purposes of predicting the territory. An inconsistent map (especially one where the form and extent of inconsistency is unknown (save that the map is inconsistent)) cannot be used for inference. An inconsistent map is useless. I don't want consistency because consistency is desirable in and of itself—I want consistency because it is useful.
The same sort of thing happens in the process of making your preferences consistent.
An example please? I cannot fathom a reason to possess inconsistent preferences. An agent with inconsistent preferences cannot make rational choices in decision problems involving those preferences. Decision theory requires your preferences first be consistent before any normative rules can be applied. Inconsistent preferences result in a money pump. Consistent preferences are strictly more useful than inconsistent preferences.
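For readers unfamiliar with the term, here is a toy sketch of a money pump run against cyclic preferences; the goods, the fee, and the preference cycle are invented for illustration:

```python
# Cyclic preferences: B over A, C over B, A over C.
# Each tuple (better, worse) means the agent will pay a small fee to swap `worse` for `better`.
cycle = [("B", "A"), ("C", "B"), ("A", "C")]

holding, money = "A", 0.0
for better, worse in cycle:
    if holding == worse:
        holding, money = better, money - 0.01  # pays a cent for each "improving" trade
print(holding, round(money, 2))  # A -0.03: same good as at the start, strictly less money
```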
That does not mean that no possible set of preferences would be both sane and consistent.
Assuming that "sane" preferences are useful (if usefulness is not a characteristic of sane preferences, then I don't want sane preferences), I make the following claim:
All sane preferences are consistent.
Replies from: entirelyuseless2
↑ comment by entirelyuseless2 · 2017-11-18T22:31:38.030Z · LW(p) · GW(p)
" I don't see how that conjunction is logically inconsistent?" Suppose you have beliefs A, B, C, and belief D: "At least one of beliefs A, B, C is false." The conjunction of A, B, C, and D is logically inconsistent. They cannot all be true, because if A, B, and C are all true, then D is false, while if D is true, at least one of the others is false. So if you think that you have some false beliefs (and everyone does), then the conjunction of that with the rest of your beliefs is logically inconsistent.
" I think consistency is good. " I agree.
" A map that is not consistent with itself cannot be used for the purposes of predicting the territory." This is incorrect. It can predict two different things depending on which part is used, and one of those two will be correct.
" And inconsistent map (especially one where the form and extent of inconsistency is unknown (save that the map is inconsistent)) cannot be used for inference." This is completely wrong, as we can see from the example of recognizing the fact that you have false beliefs. You do not know which ones are false, but you can use this map, for example by investigating your beliefs to find out which ones are false.
" An inconsistent map is useless. " False, as we can see from the example.
" I don't want consistency because consistency is desirable in and of itself—I want consistency because it is useful. " I agree, but I am pointing out that it is not infinitely useful, and that truth is even more useful than consistency. Truth (for example "I have some false beliefs") is more useful than the consistent but false claim that I have no false beliefs.
" An example please? I cannot fathom a reason to possess inconsistent preferences." It is not a question of having a reason to have inconsistent preferences, just as we were not talking about reasons to have inconsistent beliefs as though that were virtuous in itself. The reason for having inconsistent beliefs (in the example) is that any specific way to prevent your beliefs from being inconsistent will be stupid: if you arbitrarily flip A, B, or C, that will be stupid because it is arbitrary, and if you say "all of my beliefs are true," that will be stupid because it is false. Inconsistency is not beneficial in itself, but it is more important to avoid stupidity. In the same way, suppose there is someone offering you the lifespan dilemma. If at the end you say, "Nope, I don't want to commit suicide," that will be like saying "some of my beliefs are false." There will be an inconsistency, but getting rid of it will be worse.
(That said, it is even better to see how you can consistently avoid suicide. But if the only way you have to avoid suicide is an inconsistent one, that is better than nothing.)
" Consistent preferences are strictly more useful than inconsistent preferences." This is false, just as in the case of beliefs, if your consistent preferences lead you to suicide, and your inconsistent ones do not.
Replies from: TheMajor, DragonGod
↑ comment by TheMajor · 2017-11-19T17:10:36.197Z · LW(p) · GW(p)
" Suppose you have beliefs A, B, C, and belief D: "At least one of beliefs A, B, C is false." The conjunction of A, B, C, and D is logically inconsistent. They cannot all be true, because if A, B, and C are all true, then D is false, while if D is true, at least one of the others is false. So if you think that you have some false beliefs (and everyone does), then the conjunction of that with the rest of your beliefs is logically inconsistent. "
But beliefs are not binary propositions, they are probability statements! It is perfectly consistent to assert that I have ~68% confidence in A, in B, in C, and in "At least one of A, B, C is false".
Replies from: entirelyuseless2
↑ comment by entirelyuseless2 · 2017-12-03T19:05:04.230Z · LW(p) · GW(p)
Most people, most of the time, state their beliefs as binary propositions, not as probability statements. Furthermore, this is not just leaving out an actually existing detail, but it is a detail missing from reality. If I say, "That man is about 6 feet tall," you can argue that he has an objectively precise height of 6 feet 2 inches or whatever. But if I say "the sky is blue," it is false that there is an objectively precise probability that I have for that statement. If you push me, I might come up with the number. But I am basically making the number up: it is not something that exists like someone's height.
In other words, in the way that is relevant, beliefs are indeed binary propositions, and not probability statements. You are quite right, however, that in the process of becoming more consistent, you might want to approach the situation of having probabilities for your beliefs. But you do not currently have them for most of your beliefs, nor does any human.
↑ comment by RST · 2017-11-18T15:19:16.489Z · LW(p) · GW(p)
Thank you very much!
Replies from: DragonGod
↑ comment by DragonGod · 2017-11-18T15:20:56.941Z · LW(p) · GW(p)
My comment was prematurely posted, please reread it.
Replies from: RST
↑ comment by RST · 2017-11-18T15:44:50.246Z · LW(p) · GW(p)
I think that I am consistent: as you said, I disagree with the above; however, my disagreement in this case is slightly weaker (compared to the $3^^^3 to 3^^^3 middle-class families option) because the level of life-quality improvement is starting to become more relevant. Nevertheless, the desire to help people who are suffering for economic reasons will remain greater than the desire to add happiness to the lives of people who are already serene.
Thank you for the opportunity for reflection.
Replies from: DragonGod
↑ comment by DragonGod · 2017-11-18T15:52:22.218Z · LW(p) · GW(p)
Order the following outcomes in terms of their desirability. They are all alternative outcomes, and possess equal opportunity cost.
- $1,000,000 to one poor family.
- $10,000,000 to one poor family.
- $1,000,000 (each) to 100 poor families.
- $10,000,000 (each) to 100 poor families.
- $10,000,000 (each) to 1000 middle class families.
- $10,000,000 (each) to 10,000 middle class families.
- $100,000,000 (each) to 1000 middle class families.
- $100,000,000 (each) to 10,000 middle class families.
Assume negligible inflation results due to the distribution.
Replies from: RST
↑ comment by RST · 2017-11-18T16:03:18.606Z · LW(p) · GW(p)
Ok let's try. From the most desirable to the least desirable: 4,3,2,1,8,6,7.
Both 4 and 3 will help 100 poor families, so they have priority. 2 and 1 will help one poor family, so they have priority over the last three options. 8 and 6 will help more people than 7. The rest is only a difference of quantity.
Replies from: DragonGod
↑ comment by DragonGod · 2017-11-18T16:08:24.242Z · LW(p) · GW(p)
We do disagree, I guess. However you define your utility function, 8 is worse than 1. I find this very disturbing. How did you arrive at your conclusions (it seems to me naive QALY calculations would place 8 as a clearly better option than 1)?
Replies from: RST
↑ comment by RST · 2017-11-18T16:43:00.066Z · LW(p) · GW(p)
This is my reasoning: if we assume that the middle-class families have a stable economic situation, and if we assume that they have enough money to obtain food, health care, a good home, education for their children, etc., while the poor family's members don't have these comforts and suffer hunger and disease as a result, then the poor family has priority in my system of values: I could easily stand the lack of a villa with a swimming pool for 10,000 lives if this would let me avoid one miserable life. (I think that my ethic can be simplified as Maslow's hierarchy of needs.) Of course, if the middle-class families would donate lots of money to poor families, my answer would change.
Replies from: DragonGod
↑ comment by DragonGod · 2017-11-18T16:52:19.747Z · LW(p) · GW(p)
But there are ten thousand middle class families, and just one poor family? Among those ten thousand, what about the chance that the money e.g. provides necessary funds to:
- Send their children to Ivy League schools.
- Provide necessary treatment for debilitating illnesses.
- Pay off debt.
- Otherwise drastically improve their quality of life?
↑ comment by RST · 2017-11-18T17:11:41.685Z · LW(p) · GW(p)
Good points, which I admit I had not considered. I live in a country where health care and education can be afforded by middle-class families, and as I have already written, I assumed that their economic situation was stable. If we consider these factors then my answer would change.
Replies from: DragonGod
↑ comment by DragonGod · 2017-11-18T17:46:01.719Z · LW(p) · GW(p)
Even if they have a stable economic situation, I still expect any sensible utilitarian calculation to prefer helping 10,000 middle-class families as opposed to one poor family. How exactly did you calculate helping one poor family as better?
Replies from: RST
↑ comment by RST · 2017-11-18T18:08:08.434Z · LW(p) · GW(p)
As I tried to express in my post, I think that there are different "levels of life quality". For me, people in the lower levels have priority. I adopt utilitarianism only when I have to choose what is better within the same level.
The post's purpose wasn't to convince anyone that my values are right. I only wanted to show through some examples that, even though some limits are nebulous, we can agree that things very distant from the edge can be assigned to two different layers.
Replies from: RST