The Argument From Marginal Cases
post by jefftk (jkaufman) · 2013-07-26T13:30:17.215Z · LW · GW · Legacy · 55 comments
The argument from marginal cases claims that you can't both think that humans matter morally and that animals don't, because no reasonable set of criteria for moral worth cleanly separates all humans from all animals. For example, perhaps someone says that suffering only matters when it happens to something that has some bundle of capabilities like linguistic ability, compassion, and/or abstract reasoning. If livestock don't have these capabilities, however, then some people such as very young children probably don't either.
This is a strong argument, and it avoids the noncentral fallacy. Any set of qualities you value is going to vary over people and animals, and if you arrange them on a continuum there's not going to be a place you can draw a line that falls above all animals and below all people. So why do I treat humans as the only entities that count morally?
If you asked me how many chickens I would be willing to kill to save your life, the answer is effectively "all of them". [1] This pins down two points on the continuum that I'm clear about: you and chickens. While I'm uncertain where along that continuum things start reaching significant levels, I think it's probably somewhere that includes no or almost no animals but nearly all humans. Making this distinction among humans, however, would be incredibly socially destructive, especially given how unsure I am about where the line should go, and so I think we end up with a much better society if we treat all humans as morally equal. This means I end up saying things like "value all humans equally; don't value animals" when that's not my real distinction, just the closest Schelling point.
[1] Chicken extinction would make life worse for many other people, so I wouldn't actually do that, but not because of the effect on the chickens.
I also posted this on my blog.
55 comments
Comments sorted by top scores.
comment by Pablo (Pablo_Stafforini) · 2013-07-26T17:12:45.329Z · LW(p) · GW(p)
Replies from: jkaufman, Kawoomba
I'm surprised. Do you mean you wouldn't trade off a dust speck in your eye (in some post-singularity future where x-risk is settled one way or another) to avert the torture of a billion frogs, or of some noticeable portion of all frogs? If we plotted your attitudes to progressively more intelligent entities, where's the discontinuity or discontinuities?
↑ comment by jefftk (jkaufman) · 2013-07-28T04:02:34.623Z · LW(p) · GW(p)
I think frogs are extremely unlikely to have moral worth, but one dust speck vs 1B frogs is enough to overcome that improbability and I would accept the speck.
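As an aside, here is a minimal expected-value sketch of the kind of comparison being described. Every number in it (the probability that frogs matter, the per-frog harm, the speck disutility) is invented purely for illustration and is not something jefftk has stated:

```python
# Illustrative only: all numbers below are made up for the sketch.
p_frogs_matter = 1e-6            # "extremely unlikely" that frogs have moral worth
n_frogs_tortured = 1e9           # a billion frogs
harm_per_frog = 1.0              # disutility per tortured frog, if frogs matter at all
speck_disutility = 1e-3          # one dust speck in one human eye

expected_frog_harm = p_frogs_matter * n_frogs_tortured * harm_per_frog  # = 1000.0
print(expected_frog_harm > speck_disutility)  # True: accept the speck
```

The point is only that a tiny probability of moral worth, multiplied by a billion individuals, can still swamp a single speck.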
↑ comment by Kawoomba · 2013-07-26T17:40:59.728Z · LW(p) · GW(p)
It's always a bit awkward to argue by way of the Sorites paradox.
Replies from: Pablo_Stafforini
↑ comment by Pablo (Pablo_Stafforini) · 2013-07-27T16:15:50.848Z · LW(p) · GW(p)
I would advise you to be cautious in concluding that an argument is an instance of the Sorites paradox. There is a long tradition of dismissing arguments for this reason which upon closer inspection have been found to be relevantly dissimilar to the canonical Sorites formulation. Two examples are Chalmers's "fading qualia" argument and Parfit's "psychological spectrum" argument.
comment by DanArmak · 2013-07-26T16:57:51.658Z · LW(p) · GW(p)
Common descent means you can only behave as you do because of historical accidents, accidents which may not hold for much longer as technology improves.
There is an unbroken series running from any human to any other animal (like a chicken), where every two successive individuals are mother and child. It is merely historical happenstance that most of the individuals, except those at the two ends, are dead, and it is a happenstance that will no longer hold in the future if we defeat death.
If you force your moral theory to evaluate all the individuals along that line - who, again, are not hypotheticals, counterfactuals or future possibilities, but actual historical deceased individuals - then it might say one of two things:
- At some point along the line, the mother has moral value, and the child and its descendants do not (or vice versa). Or, at a special point along the line, one daughter has moral value, and her sister does not. That seems arbitrary and absurd.
- Moral value gradually changes along the line. People who hold this view differ on whether it ever reaches effectively zero value (in the case of chickens, at least). In this case, you necessarily admit that pre-human individuals (like your ancestor of a few million years ago) have distinctly less-than-human moral value. And you (almost) necessarily admit that, if you extend the line beyond humans into some hypothetical futures, individuals might come to exist who have greater moral value than anyone living today.
The second option flies in the face of your position that:
I think we end up with a much better society if we treat all humans as morally equal
Historically, this merely begs the question: it shifts the argument onto "who is human?" Are blacks? Are women? Are children? Are babies? Are fetuses? Are brain-dead patients? Are mentally or physically crippled individuals? Are factors like (anti-)contraceptives influencing the possibility of future children? Ad infinitum.
So the real value of this statement is merely in the implication that we agree to treat clear non-humans, like chickens, as having no moral value at all. Otherwise we would be forced to admit that some chickens may have more moral value than some proto- or pre-humans.
Replies from: Baughn, Armok_GoB
↑ comment by Baughn · 2013-07-28T12:11:35.138Z · LW(p) · GW(p)
I don't think that's begging the question, as such, simply an appeal to history: we've seen what happens when you don't treat all members of the species Homo sapiens sapiens as roughly equal, and it's not pretty. Today's society is in fact nicer along a wide range of fairly concrete axes, and at least some of that can be attributed to increased equality.
Replies from: DanArmak
↑ comment by DanArmak · 2013-07-28T13:35:29.959Z · LW(p) · GW(p)
My point was that this merely shifted the ground of debate. People began saying that blacks, women, etc. were "not really human" or "sub-human". Today there are those who think fetuses have the same moral rights as babies, and those who say fetuses are not "really human [individuals]". And so on.
In other words, it's a game of definitions and reference class tennis. You should taboo "members of H. sapiens sapiens" and specify how you really assign moral value to someone. Your definition should also work for outright non-, pre-, and post-humans, unless you're willing to say outright that anyone who can't breed with today's humans necessarily has zero moral worth.
Replies from: Baughn
↑ comment by Armok_GoB · 2013-07-28T16:23:45.632Z · LW(p) · GW(p)
Steelmanning the argument: "Humans do vary in value, but I should treat all individuals with the power to organize/join political movements and riots in the pursuit of equal rights as equal for wholly practical reasons."
Replies from: DanArmak
comment by Locaha · 2013-07-27T09:45:53.619Z · LW(p) · GW(p)
If you asked me how many chickens I would be willing to kill to save your life, the answer is effectively "all of them"
How many one day old infants would you be willing to kill to save a mentally healthy adult?
Replies from: jkaufman, blacktrance, aelephant
↑ comment by jefftk (jkaufman) · 2013-07-27T15:35:23.389Z · LW(p) · GW(p)
This is tricky because day-old infants typically have adult humans ("parents") who care very strongly about them. You want me to ignore that, though, and assume that for some reason no adult will care about this infant dying? I think infants probably don't have moral value in and of themselves, and it doesn't horrify me that there have been cultures where infanticide/exposure was common and accepted. Other things being equal, though, I think we should err on the side of caution and not kill infants, and I wouldn't advocate legalizing infanticide in the US.
Replies from: jkaufman
↑ comment by jefftk (jkaufman) · 2013-07-27T15:43:26.412Z · LW(p) · GW(p)
(Killing infants is also bad because we expect them to at some point have moral value, so maximizing over all time means we should include them.)
Replies from: Kawoomba
↑ comment by Kawoomba · 2013-07-27T17:14:19.915Z · LW(p) · GW(p)
(Why doesn't this argument also apply to using all of your gametes?)
Replies from: army1987, jkaufman, aelephant
↑ comment by A1987dM (army1987) · 2013-07-28T08:50:05.140Z · LW(p) · GW(p)
P(a randomly chosen sperm will result in an adult human) << P(a randomly chosen ovum will result in an adult human) << P(a randomly chosen baby will result in an adult human). Only one of these probabilities sounds large enough for the word “expect” to be warranted, IMO.
Replies from: Kawoomba
↑ comment by Kawoomba · 2013-07-28T09:09:12.310Z · LW(p) · GW(p)
P(there will be a child who will grow up to be an adult in the next couple of years if you decide to conceive one) is, for many people, about the same as P(a randomly chosen baby will grow up to be an adult).
In each case you can take an action with the expected result of a human with moral value, so jkaufman's argument should apply either way. The opportunity cost difference is low.
Replies from: Baughn
↑ comment by Baughn · 2013-07-28T11:57:54.229Z · LW(p) · GW(p)
Steel-man the argument.
Let's say you have a machine that, with absolute certainty, will create an adult human whose life is not worth living, but who would not agree to suicide. Or whose life is only barely worth living, if you lean towards average utilitarianism.
It currently only has the DNA.
Would you turn it off?
How about if it's already a fetus? A baby? Somewhere along the line, does the actual current state start to matter, and if so where?
Replies from: army1987
↑ comment by A1987dM (army1987) · 2013-07-28T19:04:07.450Z · LW(p) · GW(p)
...oh.
That highlights certain conflicts among my moral intuitions I hadn't noticed before.
All in all, I think I would turn the machine off, unless the resulting person was going to live in an underpopulated country, or I knew that the DNA was taken from parents with unusually high IQ and/or other desirable genetically inheritable traits.
Replies from: Kawoomba
↑ comment by Kawoomba · 2013-07-28T21:08:47.799Z · LW(p) · GW(p)
All in all, I think I would turn the machine off
The machine incubates humans until they are the equivalent of 3 months old (the famed 4th trimester).
Would you turn it off at all stages?
Replies from: Baughn, army1987
↑ comment by Baughn · 2013-07-28T21:35:50.371Z · LW(p) · GW(p)
(Not saying you misread me, but:)
The way I put it, it creates an adult human with absolute certainty. There may or may not be an actual, physical test tube involved; it could be a chessmaster AI, or whatnot. The implementation shouldn't matter. For completeness, assume it'll be an adult human who'll live forever, so the implementation stage becomes an evanescent fraction of its existence.
The intended exception is that you can turn it off (destroying the (potential-)human at that stage), any time from DNA string to adult. There are, of course, no legal consequences or whatnot; steel-man as appropriate.
Given that, in what time period - if any - is turning it off okay?
Personally, I'll go with "Up until the brain starts developing, then gradually less okay, based on uncertainty about brain development as well as actual differences in value." I care very little about potential people.
↑ comment by A1987dM (army1987) · 2013-07-28T21:40:37.462Z · LW(p) · GW(p)
I don't know.
I'm so glad that I don't live in the Least Convenient Possible World so I don't have to make such a choice.
↑ comment by jefftk (jkaufman) · 2013-07-28T04:03:38.946Z · LW(p) · GW(p)
(Opportunity cost.)
Replies from: Kawoomba
↑ comment by Kawoomba · 2013-07-28T07:27:02.820Z · LW(p) · GW(p)
I don't understand. Take just the two gametes which ended up combining into the infant, t-10 months before the scenario in which your "expect them to at some point have moral value, so maximizing over all time means we should include them" applies.
Why doesn't it apply to the two gametes, and why does it apply 10 months later? Is it because the pregnancy is such a big investment? What if the woman finds the pregnancy utilon-neutral? Would the argument translate then?
Replies from: jkaufman
↑ comment by jefftk (jkaufman) · 2013-07-28T13:55:08.498Z · LW(p) · GW(p)
Let's go up a step. I'm some kind of total utilitarian, which means maximizing over all creatures of moral worth over all time. I don't think gametes have moral worth in and of themselves, and very small children probably don't either, but both do have the potential to grow into creatures that can have positive or negative lives. The goal is, in the long term, to have as many such creatures as possible having the best lives possible.
While "using all your gametes" isn't possible, most people could reproduce much more than they currently do. Parenting takes a lot of time, however, and with time being a limited resource there are lots of other things you can do with it. Many of these have a much larger effect on improving welfare or increasing the all-time number of people than raising children does. It's also not clear whether a higher or lower rate of human childbirth is ideal in the long term.
↑ comment by aelephant · 2013-07-28T00:50:13.936Z · LW(p) · GW(p)
There's a difference between an infant, which is already a living, breathing human being, and the sperm that are expelled in masturbation. Even if those sperm could have been used to produce more humans, there's no way to prove whether or not they actually would have been; the woman could fail to conceive, for example. If you wanted to make a law against masturbation, you'd also run into the problem that there is no victim, just someone who might have existed at some point. I also see a conflict here with autonomy. Can we require people to turn all of their sperm into humans? They didn't choose to produce sperm; it is an accident of biology. On the other hand, people do choose to have children (usually); it requires conscious choice and effort (excluding certain exceptions like the female rape victim).
↑ comment by blacktrance · 2013-07-28T00:19:40.005Z · LW(p) · GW(p)
There's not enough information for me to give an answer. Will there be external negative consequences to me (from the law or from the infants' parents)? Do the infants' parents want them, or are these infants donated to the Center for Moral Dilemmas by parents who don't mind if something happens to them? Is the person I'm saving a stranger, or someone I know?
Replies from: Baughn
↑ comment by Baughn · 2013-07-28T11:54:13.618Z · LW(p) · GW(p)
I'll add in:
Will you judge me for the answer I provide? Will someone else do so? Will a potential future employer look this up? Will I, by answering this, slightly alter internal ethical injunctions against murdering children that, frankly, are there for a good reason?
↑ comment by aelephant · 2013-07-27T13:44:48.868Z · LW(p) · GW(p)
None. You get thrown in jail or put to death for that kind of thing.
Replies from: Vladimir_Nesov, Mestroyer
↑ comment by Vladimir_Nesov · 2013-07-27T15:15:34.157Z · LW(p) · GW(p)
See The Least Convenient Possible World, Better Disagreement.
↑ comment by Mestroyer · 2013-07-27T14:21:37.393Z · LW(p) · GW(p)
When you bring up things like the law, you're breaking the thought experiment and dodging the really interesting question someone is trying to ask. The obvious intent of the question is to weigh how much you care about one-day-old infants against how much you care about mentally healthy adults. Whoever posed the experiment can clarify and add things like "Assume it's legal" or "Assume you won't get caught." But if you force them to, you are wasting their time. And especially on an online forum, there is no incentive to, because if they do, you might just respond with other similar dodges such as "I don't want to kill enough that it becomes a significant fraction of the species," or "By killing, I would damage my inhibitions against killing babies, which I want to preserve," or "I don't want to create grieving parents," or "If people find out someone is killing babies, they will take costly countermeasures to protect their babies, which will cause harm across society."
If you don't want to answer without these objections out of the way, bring up the obvious fix to the thought experiment in your first answer, like "Assuming it was legal / I wouldn't get caught, then I would kill N babies," or "I wouldn't kill any babies even if it were legal and I wouldn't get caught, because I value babies so much," and then explain the difference that matters to you between babies and chickens, because that's obviously what Locaha was driving at.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2013-07-28T19:37:05.995Z · LW(p) · GW(p)
other similar dodges
I wouldn't dismiss those that quickly. The more unrealistic assumptions you make, the less the answer to the dilemma in the thought experiment will be relevant to any decision I'll ever have to make in the real world.
Replies from: Mestroyer
↑ comment by Mestroyer · 2013-07-29T04:53:38.003Z · LW(p) · GW(p)
Yes, it's less relevant to that, but the thought experiment isn't intended to directly glean information about what you'd do in the real world; it's supposed to gain information about the processes that decide what you would do in the real world. Once enough of this information is gained, it can be used to predict what you'd do in the real world, and also to identify real-world situations where your behavior is determined by your ignorance of facts, or is otherwise deviating from your goals, and in doing so perhaps change that behavior.
comment by Mestroyer · 2013-07-26T14:03:53.172Z · LW(p) · GW(p)
I would advise against answering a question of the form "How many of animal X would you trade for one average human," because of the likelihood of rewriting your values by making a verbal (or in this case, written) commitment to an estimate influenced by scope insensitivity, and the greater availability of what goes on in a human's head than in an animal's.
In general, I think trying to weigh secular values against sacred values is a recipe for reducing the amount you care about the former.
Replies from: Ben_LandauTaylor
↑ comment by Ben_LandauTaylor · 2013-07-26T19:21:07.596Z · LW(p) · GW(p)
In general, I think trying to weigh secular values against sacred values is a recipe for reducing the amount you care about the former.
If I understand the sacred/secular terminology correctly, then this seems like a feature, not a bug.
Replies from: Mestroyer
↑ comment by Mestroyer · 2013-07-27T03:02:55.634Z · LW(p) · GW(p)
It could be a feature if the secular value is a big house or something, and the sacred value is whatever you might donate to an effective charity for.
It's definitely not a feature if the sacred value is sacred to the individual because when they imagine compromising it, they imagine scorn from their peers, and if society has handed it down as a sacred value of that sort since antiquity.
Also, not all values that people treat as lexically lower than their most sacred values (in reality, there are probably more than two tiers of values, of course) are things you would want to get rid of. Most of fun theory is probably much lower on the hierarchy of things that cannot be traded away than human life, and yet you still want concern for it to have a significant role in shaping the future.
And then there are taboo tradeoffs between a certain amount of a thing and a smaller amount of the same thing, and following the kind of thought process I warned against leads you into the territory of clear madness, like choosing specks over torture no matter the number of specks.
A more troubling counterargument to what I said is that no matter what you do you are living an answer to the question, so you can't just ignore it. This is true if you are an effective altruist (who has already rejected working on existential risk), and trying to decide whether to focus on helping humans or helping animals. Then you really need to do a utilitarian calculus that requires that number.
If I needed that number, I would first try to spend some time around the relevant sort of animal (or the closest approximation I could get) and try to gather as much information as possible about what it was cognitively capable of, and hang out with some animal-loving hippies to counteract social pressure in favor of valuing humans infinitely more than animals. Then I might try to figure things out indirectly, through separate comparisons to a third variable (perhaps my own time? I don't think I feel any hard taboo tradeoffs at work when I think about how much time I'd spend to help animals or how much I'd spend to help humans, though maybe I've just worn out the taboo by thinking about trading my time for lives as much as I have (edit: that sounded a lot more evil than is accurate. To clarify, killing people to save time does sound horrifying to me, but not bothering to save distant strangers doesn't)).
comment by [deleted] · 2013-07-26T20:15:16.153Z · LW(p) · GW(p)
[1] Chicken extinction would make life worse for many other people, so I wouldn't actually do that, but not because of the effect on the chickens.
Question: If a person is concerned about existential risks to species, and with lessening the suffering of common species of animals, and with human lives, how does that person make tradeoffs among those?
I was thinking about this, and I realized I had no idea how to resolve the following problem:
Omega says "Hi. I can institute any one of these three policies, but only one at a time. Other than locking out the other policies, for each year the policy is in place, none has a downside... except that I will mercilessly Dutch book you with policy offers if you're inconsistent in your judgement of the ratios."
Policy A: Save X Common Non-Human Animals capable of feeling pain per year from painful, pointless executions that will not affect the overall viability of that Species of Common Non-Human Animals.
Policy B: Save Y rarer species per year from extinction. These can be anything from Monkeys, to Mites, to Moss (so they may not have a nervous system).
Policy C: Save Z Humans capable of feeling pain per year from painful, pointless executions that will not affect the overall viability of the Human Species.
Every time I attempt to construct some acceptable ratio of X:Y:Z, I seem to think "This doesn't seem correct." Thoughts?
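To make Omega's Dutch-book threat concrete, here is a minimal sketch. All of the exchange rates and the particular money-pump cycle are invented for illustration; the only point is that pairwise ratios that don't chain consistently let Omega offer a sequence of trades, each of which looks like a strict gain by your own stated ratios, yet which leaves you strictly worse off.

```python
# Minimal Dutch-book sketch (all exchange rates invented for illustration).
# Hypothetical stated indifference ratios, one per pair of policies:
#   1 C (human saved)  ~ 100 A (animals saved)
#   1 A                ~  10 B (species saved)
#   1 C                ~ 500 B   <-- inconsistent: the first two imply 1 C ~ 1000 B
PAIR_RATIO = {("C", "A"): 100.0, ("A", "B"): 10.0, ("C", "B"): 500.0}

def worth(qty, unit, in_unit):
    """Value of qty of `unit` expressed in `in_unit`, using the agent's own pairwise ratio."""
    if unit == in_unit:
        return qty
    if (unit, in_unit) in PAIR_RATIO:
        return qty * PAIR_RATIO[(unit, in_unit)]
    return qty / PAIR_RATIO[(in_unit, unit)]

def accepts(give_qty, give_unit, get_qty, get_unit):
    """The agent accepts any trade it judges a strict gain, using the pairwise ratio."""
    return worth(get_qty, get_unit, give_unit) > give_qty

# Omega's cycle: each step is a strict gain by the agent's own stated ratios...
trades = [
    (1.0, "C", 550.0, "B"),   # give 1 C, get 550 B (agent values 1 C at only 500 B)
    (550.0, "B", 60.0, "A"),  # give 550 B, get 60 A (agent values 60 A at 600 B)
    (60.0, "A", 0.7, "C"),    # give 60 A, get 0.7 C (agent values 0.7 C at 70 A)
]
print(all(accepts(*t) for t in trades))  # True: every step looks like a win
# ...yet the agent starts with 1.0 C and ends with only 0.7 C.
```

Any consistent set of ratios, i.e. one derivable from a single common scale, blocks this kind of cycle, which is exactly the consistency Omega is demanding.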
Replies from: DanArmak, mwengler
↑ comment by DanArmak · 2013-07-26T23:40:51.100Z · LW(p) · GW(p)
I would need to know (or have some prior for) which species, or animals of which species, are affected by policies A and B. I would give very different odds for monkeys, cats, mites, and moss.
Also, individuals (whether human or animal) are a natural sort of thing to assign moral value to. But "species" are not. Species are defined as "groups of individuals, all of whom are capable of interbreeding". (Even then there are exceptions like ring species, and also things like parthenogenic clans; it's not a definition that cuts reality at its joints.)
There is no particular reason for me (or, I think, most people) to care about an animal in inverse proportion to its number of potential mates (= size of breeding group = size of species). I do care about variety, but the 5500 or so known mammal species are far more diverse than many sets of 5500 different insect species, for instance. And the set of all rodents (almost 2300 species) is far less diverse than the relatively tiny set of (Chimpanzee, African elephant, Great white shark). Being separate species is incidental to the things we really care about.
↑ comment by mwengler · 2013-07-27T15:42:02.610Z · LW(p) · GW(p)
Thoughts?
Ultimately, it seems as hard to come up with a ratio of X:Y:Z as it would be to come up with a personal valuation ratio of Apples:Oranges:Education:747s:Laptops.
You are taking morality, which is a set of inborn urges you have when confronted with certain types of information, urges which started evolving in you long before your ancestors had anything approaching a modern neocortex, and which absolutely evolved in you without any kind of reference to the moral problem you are looking at in this comment. And you are trying to come up with a fixed-in-time, transitive, quantitative description of it.
In the case of Apples:Oranges, the COST of these to you in a store may be close to constant, but their VALUE to you is all over the map: sometimes the Apple wins, sometimes the Orange wins, often the Laptop wins, and when the 747 wins it wins big time.
It seems likely enough that your moral urges would be all over the map, variable in time, and that your effort to summarize them completely with fixed static numbers makes less sense than describing nature using the four elements of earth, air, fire and water.
comment by Shmi (shminux) · 2013-07-26T16:21:09.307Z · LW(p) · GW(p)
It seems to me that this logic necessarily makes you choose torture over dust specks, since one can construct a nearly continuous sequence of animals between chickens and humans and patch the gaps, if any, with probabilities. But you are probably OK with that, since you write
While I'm uncertain where along that continuum things start reaching significant levels, I think it's probably somewhere that includes no or almost no animals but nearly all humans.
and this argument breaks down once you start making comparisons like "0.01% odds of one human dying now vs all animals dying now" or "1 day reduction in the life expectancy of one human vs all animals dying now" etc.
Replies from: jkaufman
↑ comment by jefftk (jkaufman) · 2013-07-28T04:10:35.205Z · LW(p) · GW(p)
There are animals (chimps etc) where I think the chance that they have moral worth is too large to ignore, so "all animals dying now" would be bad.
I don't understand how you're bringing in torture and specks.
comment by ThisSpaceAvailable · 2013-07-27T21:46:09.663Z · LW(p) · GW(p)
There is also a sticky issue regarding the Turing Test. Suppose we agree that any AI that convinces its interlocutor 70% of the time that it is human is "effectively human", and has the same rights as a natural human. What happens with an AI that passes 69.9% of the time?
Replies from: tim
↑ comment by tim · 2013-07-27T23:02:47.458Z · LW(p) · GW(p)
It would not have the same rights as a natural human. Presumably, if such a line were drawn, there would be a good reason for not recognizing a below 70% pass rate rather than arbitrarily choosing that number.
This is a more general problem with any sort of cutoff that segregates a continuous quantity into separate groups. (Should someone who is 20.9 years old be legally recognized as an adult? Is there really a difference between the mentally retarded person with a 69.9 IQ and the person who scored a 70.1?)
comment by OneGotBetter · 2013-07-26T15:49:55.579Z · LW(p) · GW(p)
So you treat humans as the only entities that count morally because this is the closest Schelling point?
I think you should recognise that you have a finer-grained decision tree than this. Think about the difference between swatting a mosquito that could bite you and harming your neighbour's prize chicken: both are animals, but they have different moral statuses.
Replies from: DanArmak
comment by [deleted] · 2013-07-26T14:00:46.805Z · LW(p) · GW(p)
To abandon the myth of natural rights is not to deny the existence of legal rights. Humans and only humans make laws. No animal, humans included, has a natural right.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2013-07-26T14:57:05.612Z · LW(p) · GW(p)
To abandon the myth of natural rights is not to deny the existence of legal rights. Humans and only humans make laws. No animal, humans included, has a natural right.
How is this relevant to the question? Lots of humans don't make laws or aren't capable of making laws (either unable due to the political structure (not in a democracy) or mentally unable (e.g. mentally challenged)), and others don't for cultural reasons (many hunter-gatherer groups have no formal systems of laws). Saying that humans make laws and animals don't fails even worse than many other variants of the marginal cases argument. Moreover, there's no coherent step to get from "humans make laws" to "therefore our laws should only apply to humans or should treat humans and animals differently."
Replies from: None
↑ comment by [deleted] · 2013-07-26T16:01:01.891Z · LW(p) · GW(p)
Thank you for your reply, JoshuaZ.
How is this relevant to the question?
I think jkaufman and the wikipedia entry cited are making a claim that natural rights exist, and should be afforded to humans and animals. I think this claim is in error, and arguments based on this claim will also be in error. But if no claim of natural rights is made, then the possibility opens to explain a difference between humans and other animals. I say again: difference. Not better, not worse, not within-rights, only different. That difference is that humans make laws and animals do not make laws.
You say some humans don't make laws for cultural reasons. Can you see the paradox within that single sentence? Differentiating laws from 'formal laws' is a 'no true Scotsman' sort of argument. Call it cultural or call it formal laws, there are things the hunter-gatherer groups you mention do that no animal does.
Moreover, there's no coherent step to get from "humans make laws" to "therefore our laws should only apply to humans or should treat humans and animals differently."
I did not say anything about how humans should treat each other or how humans should treat animals. 'Should' is a natural law sort of a word, and that's what I'm claiming does not exist.
I am often wrong or inarticulate or both, and I thank you for the chance to clarify. What might seem to some a difference between humans and animals is a confusion between laws (which humans make and animals don't) and natural law (which does not exist for humans or animals). Here I think you and I might be in agreement: there's no line between humans and animals in this regard. All us critters share that lack of something.
Replies from: jkaufman, JoshuaZ
↑ comment by jefftk (jkaufman) · 2013-07-26T16:56:39.706Z · LW(p) · GW(p)
I think jkaufman and the wikipedia entry cited are making a claim that natural rights exist
I'm a utilitarian; I'm not claiming anything about rights. The question of "moral status" to me is whether something should get included when aggregating utility.
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-07-26T19:20:45.352Z · LW(p) · GW(p)
Clearly not a utilitarian enough to prefer torture over dust specks.
Replies from: blacktrance
↑ comment by blacktrance · 2013-07-28T00:10:19.904Z · LW(p) · GW(p)
I don't think utilitarians should prefer torture over dust specks. Dust specks are such an infinitesimally minor amount of disutility that even if they happen to 3^^^3 people, it's still much better than being tortured even for one minute.
Replies from: jkaufman, CAE_Jones, army1987
↑ comment by jefftk (jkaufman) · 2013-07-28T04:07:19.145Z · LW(p) · GW(p)
I don't think you really get how big 3^^^3 is.
↑ comment by CAE_Jones · 2013-07-28T00:26:16.983Z · LW(p) · GW(p)
The general argument is one of net suffering. The trouble is that weird things happen when you try to assign values to suffering, add those together across multiple agents, etc. On the one hand, we should avoid scope insensitivity. On the other hand, the assertion that adding up 3^^^3 dust specks is worse than 50 years of torture packs in quite a few other assertions (that suffering should be added linearly across all agents and all types of suffering, for one).
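To illustrate how much the verdict depends on the aggregation rule, here is a small sketch. Every number and both aggregation functions are invented for the example, and 3^^^3 is far too large to represent directly, so a stand-in value is used; the point is only that linear addition and a bounded rule give opposite answers.

```python
# Illustrative sketch of two aggregation rules (all numbers made up).
import math

SPECK_DISUTILITY = 1e-9      # hypothetical disutility of one dust speck
TORTURE_DISUTILITY = 1e7     # hypothetical disutility of 50 years of torture

def linear_total(n_specks: float) -> float:
    """Linear aggregation: harms add straight across people."""
    return n_specks * SPECK_DISUTILITY

def bounded_total(n_specks: float, cap: float = 1e6) -> float:
    """A bounded alternative: many tiny harms asymptote to a cap instead of growing without limit."""
    return cap * (1 - math.exp(-n_specks * SPECK_DISUTILITY / cap))

# Under linear aggregation, specks beat torture once
# n > TORTURE_DISUTILITY / SPECK_DISUTILITY = 1e16 people, and 3^^^3 dwarfs that.
print(linear_total(1e20) > TORTURE_DISUTILITY)   # True
# Under the bounded rule, no number of specks ever exceeds the torture figure here.
print(bounded_total(1e20) > TORTURE_DISUTILITY)  # False (cap of 1e6 < 1e7)
```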
↑ comment by A1987dM (army1987) · 2013-07-28T09:16:02.895Z · LW(p) · GW(p)
What if you replaced 3^^^3 with BusyBeaver(3^^^3), or BusyBeaver(BusyBeaver(3^^^3))?
↑ comment by JoshuaZ · 2013-07-26T22:43:26.432Z · LW(p) · GW(p)
You say some humans don't make laws for cultural reasons. Can you see the paradox within that single sentence? Differentiating laws from 'formal laws' is a 'no true Scotsman' sort of argument. Call it cultural or call it formal laws, there are things the hunter-gatherer groups you mention do that no animal does.
Actually, if you want to generalize laws to mean just enforced cultural norms, then yes, these exist among animals as well. Different groups of bonobos or chimpanzees have different behavioral sets, including to what extent violence toward in-group members is tolerated in that specific group.