Arguments Against Speciesism

post by Lukas_Gloor · 2013-07-28T18:24:58.354Z · LW · GW · Legacy · 476 comments

Contents

  What Is Speciesism?
  Caring about suffering
  Why species membership really is an absurd criterion
  Summary

There have been some posts about animals lately, for instance here and here. While normative assumptions about the treatment of nonhumans played an important role in the articles and were debated at length in the comment sections, I was missing a concise summary of these arguments. This post from over a year ago comes closest to what I have in mind, but I want to focus on some of the issues in more detail.

A while back, I read the following comment in a LessWrong discussion on uploads:

I do not at all understand this PETA-like obsession with ethical treatment of bits.

Aside from (carbon-based) humans, which other beings deserve moral consideration? Nonhuman animals? Intelligent aliens? Uploads? Nothing else?

This article is intended to shed light on these questions; however, it is not the intent of this post to advocate a specific ethical framework. Instead, I'll try to show that some ethical principles held by a lot of people are inconsistent with some of their other attitudes -- an argument that doesn't rely on ethics being universal or objective.

More precisely, I will develop the arguments behind anti-speciesism (and the rejection of analogous forms of discrimination, such as discrimination against uploads) to point out common inconsistencies in some people's values. This will also provide an illustrative example of how coherentist ethical reasoning can be applied to shared intuitions. If there are no shared intuitions, ethical discourse will likely be unfruitful, so it is likely that not everyone will draw the same conclusions from the arguments here. 

 

What Is Speciesism?

Speciesism, a term popularized (but not coined) by the philosopher Peter Singer, is meant to be analogous to sexism or racism. It refers to a discriminatory attitude against a being whereby less ethical consideration -- caring less about a being's welfare or interests -- is given solely because of the "wrong" species membership. The "solely" here is crucial, and it's misunderstood often enough to warrant the redundant emphasis.

For instance, it is not speciesist to deny pigs the right to vote, just like it is not sexist to deny men the right to have an abortion performed on their body. Treating beings of different species differently is not speciesist if there are relevant criteria for doing so. 

Singer summarized his case against speciesism in this essay. The argument that does most of the work is often referred to as the argument from marginal cases. A perhaps less anthropocentric, more fitting name would be argument from species overlap, as some philosophers (e.g. Oscar Horta) have pointed out. 

The argument boils down to the question of choosing relevant criteria for moral concern. What properties do human beings possess that make us think it is wrong to torture them? Or to kill them? (Note that these are two different questions.) The argument from species overlap points out that all the typical or plausible suggestions for relevant criteria apply equally to dogs, pigs or chickens as they do to human infants or late-stage Alzheimer patients. Therefore, giving less ethical consideration to the former would be based merely on species membership, which is just as arbitrary as choosing race or sex as the relevant criterion (further justification for that claim follows below).

Here are some examples of commonly suggested criteria. Those who want to may pause at this point and think about the criteria they consult for whether it is wrong to inflict suffering on a being (and, separately, those that are relevant for the wrongness of killing).

 

The suggestions are:

A: Capacity for moral reasoning

B: Being able to reciprocate

C: (Human-like) intelligence

D: Self-awareness

E: Future-related preferences; future plans

E': Preferences / interests (in general)

F: Sentience (capacity for suffering and happiness)

G: Life / biological complexity

H: What I care about / feel sympathy or loyalty towards

 

The argument from species overlap points out that not all humans are equal. The sentiment behind "all humans are equal" is not that they are literally equal, but that equal interests/capacities deserve equal consideration. None of the above criteria except (in some empirical cases) H imply that human infants or late-stage demented people should be given more ethical consideration than cows, pigs or chickens.

While H is an unlikely criterion for direct ethical consideration (it could justify genocide in specific circumstances!), it is an important indirect factor. Most humans have much more empathy for fellow humans than for nonhuman animals. While this is not a criterion for giving humans more ethical consideration per se, it is nevertheless a factor that strongly influences ethical decision-making in real life.

However, such factors can't apply to ethical reasoning at a theoretical/normative level, where all the relevant variables are looked at in isolation in order to come up with a consistent ethical framework that covers all possible cases.

If there were no intrinsic reasons for giving moral consideration to babies, then a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it. If we consider this implication to be unacceptable, then the same must apply for the situations nonhuman animals find themselves in on farms.

Side note: The question whether killing a given being is wrong, and if so, "why" and "how wrong exactly", is complex and outside the scope of this article. Instead of on killing, the focus will be on suffering, and by suffering I mean something like wanting to get out of one's current conscious state, or wanting to change some aspect about it. The empirical issue of which beings are capable of suffering is a different matter that I will (only briefly) discuss below. So in this context, giving a being moral consideration means that we don't want it to suffer, leaving open the question whether killing it painlessly is bad/neutral/good or prohibited/permissible/obligatory. 

The main conclusion so far is that if we care about all the suffering of members of the human species, and if we reject question-begging reasoning that could also be used to justify racism or other forms of discrimination, then we must also care fully about suffering happening in nonhuman animals. This would imply that x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads. (Though admittedly the latter wouldn't be anti-speciesist but rather anti-"substratist", or anti-"fleshist".)

The claim is that there is no way to block this conclusion without:

1. using reasoning that could analogically be used to justify racism or sexism
or
2. using reasoning that allows for hypothetical circumstances where it would be okay (or even called for) to torture babies in cases where utilitarian calculations prohibit it.

I've tried and have asked others to try -- without success. 

 

Caring about suffering

I have not given a reason why torturing babies or racism is bad or wrong. I'm hoping that the vast majority of people will share that intuition/value of mine, that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past. 

Some might be willing to bite the bullet at this point, trusting some strongly held ethical principle of theirs (e.g. A, B, C, D, or E above), to the conclusion of excluding humans who lack certain cognitive capacities from moral concern. One could point out that people's empathy and indirect considerations about human rights, societal stability and so on, will ensure that this "loophole" in such an ethical view almost certainly remains without consequences for beings with human DNA. It is a convenient Schelling point after all to care about all humans (or at least all humans outside their mother's womb). However, I don't see why absurd conclusions that will likely remain hypothetical would be significantly less bad than other absurd conclusions. Their mere possibility undermines the whole foundation one's decisional algorithm is grounded in. (Compare hypothetical problems for specific decision theories.) 

Furthermore, while D and E seem plausible candidates for reasons against killing a being with these properties (E is in fact Peter Singer's view on the matter), none of the criteria from A to E seem relevant to suffering, to whether a being can be harmed or benefitted. The case for these being bottom-up morally relevant criteria for the relevance of suffering (or happiness) is very weak, to say the least. 

Maybe that's the speciesist's central confusion: the idea that the rationality/sapience of a being is somehow relevant to whether its suffering matters morally. Clearly, for us ourselves, this does not seem to be the case. If I were told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not as if I would be less afraid of the torture or care less about averting it!

Those who do consider biting the bullet should ask themselves whether they would have defended that view in all contexts, or whether they might be driven towards such a conclusion by a self-serving bias. There seems to be a strange and sudden increase in the number of people willing to claim that there is nothing intrinsically wrong with torturing babies as soon as the subject is animal rights, or more specifically, the steak they intend to have for dinner.

It is an entirely different matter if people genuinely think that animals or human infants or late-stage demented people are not sentient. To be clear about what is meant by sentience: 

A sentient being is one for whom "it feels like something to be that being". 

I find it highly implausible that only self-aware or "sapient" beings are sentient, but if true, this would constitute a compelling reason against caring for at least most nonhuman animals, for the same reason that it would be pointless to care about pebbles for the pebbles' sake. If all nonhumans truly weren't sentient, then obviously singling out humans for the sphere of moral concern would not be speciesist.

What irritates me, however, is that anyone advocating such a view should, it seems to me, still have to factor in a significant probability of being wrong, given that both philosophy of mind and the neuroscience that goes with it are hard and, as far as I'm aware, not quite settled yet. The issue matters because of the huge numbers of nonhuman animals at stake and because of the terrible conditions these beings live in. 

I rarely see this uncertainty acknowledged. If we imagine the torture-scenario outlined above, how confident would we really be that the torture "won't matter" if our own advanced cognitive capacities are temporarily suspended? 

 

Why species membership really is an absurd criterion

At the beginning of the article, I wrote that I'd get back to this for those not convinced. Some readers may still feel that there is something special about being a member of the human species. Some may be tempted to think about the concept of "species" as if it were a fundamental concept, a Platonic form.

The following likely isn't news to most of the LW audience, but it is worth spelling out anyway: There exists a continuum of "species" in thing-space as well as in the actual evolutionary timescale. The species boundaries seem obvious just because the intermediates kept evolving or went extinct. And even if that were not the case, we could imagine it. The theoretical possibility is enough to make the philosophical case, even though psychologically, actualities are more convincing.

We can imagine a continuous line-up of ancestors, always daughter and mother, from modern humans back to the common ancestor of humans and, say, cows, and then forward in time again to modern cows. How would we then divide this line up into distinct species? Morally significant lines would have to be drawn between mother and daughter, but that seems absurd! There are several different definitions of "species" used in biology. A common criterion -- for sexually reproducing organisms anyway -- is whether groups of beings (of different sex) can have fertile offspring together. If so, they belong to the same species. 

That is a rather odd way of determining whether one cares about the suffering of some hominid creature in the line-up of ancestors -- why should the ability to produce fertile offspring be relevant to whether some instance of suffering matters to us?

Moreover, is that really the terminal value of people who claim they only care about humans, or could it be that they would, upon reflection, revoke such statements?

And what about transhumanism? I remember that a couple of years ago, I thought I had found a decisive argument against human enhancement. I thought it would likely lead to speciation, and somehow the thought of that directly implied that posthumans would treat the remaining humans badly, and so the whole thing became immoral in my mind. Obviously this is absurd; there is nothing wrong with speciation per se, and if posthumans are anti-speciesist, then the remaining humans would have nothing to fear! But given the speciesism in today's society, it is all too understandable that people would be concerned about this. If we imagine the huge extent to which a posthuman, not to mention a strong AI, would be superior to current humans, isn't that a bit like comparing chickens to us?

A last possible objection I can think of: Suppose one held the belief that group averages are what matters -- that all members of the human species deserve equal protection because the species' average for some criterion considered relevant is high enough, even though that criterion, applied without the group-average rule, would deny moral consideration to some sentient humans.

This defense doesn't work either. Aside from seeming suspiciously arbitrary, such a view would imply absurd conclusions. A thought experiment for illustration: A pig with a macro-mutation is born; she develops child-like intelligence and the ability to speak. Do we refuse to allow her to live unharmed -- or even let her go to school -- simply because she belongs to a group (defined presumably by snout shape, or DNA, or whatever the criteria for "pigness" are) with an average that is too low?

Or imagine you are the head of an architecture bureau and looking to hire a new aspiring architect. Is tossing out an application written by a brilliant woman going to increase the expected success of your firm, assuming that women are, on average, less skilled at spatial imagination than men? Surely not!

Moreover, taking group averages as our ethical criterion requires us to first define the relevant groups. Why even take species-groups instead of groups defined by skin color, weight or height? Why single out one property and not others? 

 

Summary

Our speciesism is an anthropocentric bias without any reasonable foundation. It would be completely arbitrary to give special consideration to a being simply because of its species membership. Doing so would lead to a number of implications that most people clearly reject. A strong case can be made that suffering is bad in virtue of being suffering, regardless of where it happens. If the suffering or deaths of nonhuman animals deserve no ethical consideration, then human beings with the same relevant properties (of which all plausible ones seem to come down to having similar levels of awareness) deserve no intrinsic ethical consideration either, barring speciesism. 

Assuming that we would feel uncomfortable giving justifications or criteria for our scope of ethical concern that can analogously be used to defend racism or sexism, those not willing to bite the bullet about torturing babies are forced by considerations of consistency to care about animal suffering just as much as they care about human suffering. 

Such a view leaves room for probabilistic discounting in cases where we are empirically uncertain whether beings are capable of suffering, but we should be on the lookout for biases in our assessments. 

Edit: As Carl Shulman has pointed out, discounting may also apply to "intensity of sentience", because it seems at least plausible that shrimps (for instance), if they are sentient, can experience less suffering than e.g. a whale.
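To make the two discounts explicit, here is a minimal sketch (in Python, with invented numbers purely to show the structure -- these are not empirical estimates) of how probability of sentience and intensity of sentience could jointly weight a given amount of suffering:

```python
# Illustrative sketch only: how a probability of being sentient and a
# relative intensity of experience could jointly discount the weight
# given to a being's suffering. All numbers are made up for the example.

def expected_moral_weight(p_sentient, relative_intensity):
    """Expected weight per unit of (behaviorally identified) suffering."""
    return p_sentient * relative_intensity

# Hypothetical inputs, NOT empirical estimates.
beings = {
    "human":   (1.0, 1.0),
    "chicken": (0.8, 0.3),
    "shrimp":  (0.2, 0.05),
}

for name, (p_sentient, intensity) in beings.items():
    print(name, expected_moral_weight(p_sentient, intensity))
```

On these invented numbers, a unit of shrimp suffering would count for about one percent of a unit of human suffering; the point is only the structure of the calculation, not the particular values.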

476 comments

Comments sorted by top scores.

comment by CarlShulman · 2013-07-28T22:14:23.589Z · LW(p) · GW(p)

I agree that species membership as such is irrelevant, although it is in practice an extremely powerful summary piece of information about a creature's capabilities, psychology, relationship with moral agents, ability to contribute to society, responsiveness in productivity to expected future conditions, etc.

Animal happiness is good, and animal pain is bad. However, the word anti-speciesism, and some of your discussion, suggest treating experience as binary and ignoring quantitative differences, e.g. here:

Such a view leaves room for probabilistic discounting in cases where we are empirically uncertain whether beings are capable of suffering, but we should be on the lookout for biases in our assessments.

This leaves out the idea of the quantity of experience. In human split-brain patients the hemispheres can experience and act quite independently without common knowledge or communication. Unless you think that the quantity of happiness or suffering doubles when the corpus callosum is cut, then happiness and pain can occur in substructures of brains, not just whole brains. And if intensive communication and coordination were enough to diminish moral value why does this not apply to social groups like firms, herds, flocks, hives and the like?

Animals vary enormously in the number of neurons and substructures, including ones engaged in reinforcement learning responsive to pleasure and pain. For example, a fly's brain contains 100,000 neurons, whereas a human's contains about a million times as many. Here are brain masses for some animals:

  • Adult elephants at around 5000 g
  • Adult humans 1300-1400 g
  • Chimpanzees are about 420 g, about a 3:1 ratio with humans, with the ratio for cortex neurons around 3:1 to 4:1
  • Cows are 425-458 g, about a 3:1 ratio; if their cortex neuron counts resemble horses' that would be closer to 8:1
  • Pigs are at 180 g, a ratio of 7.5:1
  • Domestic cats stand at 25-30 g, ~50:1 with the cortex ratio somewhat bigger
  • Pekin Duck at 6.3 g, 214:1 ratio
  • Owl brains are 2.2 g, around 600:1, and European quail at 0.9 g, about 1500:1
  • Goldfish have 0.097 g, just under a 14,000:1 ratio

Particularly for birds, fish, and insects one sees extremely large ratios. If, as is quite plausible in light of the decentralized operations of brains (stunningly demonstrated in split-brain patients, but also a routine feature of information processing in nervous systems), smaller subsystems can experience pleasure and pain, then animals with large nervous systems may be orders of magnitude more important than one would otherwise think. Importantly, this is not a consideration lowering the expected experience of animals with small nervous systems, but one increasing the expected experience of animals with large nervous systems, so it does not need to be held with very high confidence to affect behavior much: "what if small neural systems suffer and delight?" is analogous to "what if snails suffer and delight?".

Would you say that making such adjustments is speciesist? For example, Wikipedia gives the world chicken population as 24 billion, mostly kept in horrible conditions, and the world cow population as 1.3 billion. If one ignores nervous system scale, the welfare of the chickens dominates in importance, but if one thinks that quantity of experience scales, then the aggregate welfare of the cows looms larger. Is it speciesist to prioritize cows over chickens or fish on this basis?
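To make that comparison concrete, here is a rough back-of-the-envelope version of the chicken/cow calculation (the chicken brain mass is an assumed figure for illustration; it isn't listed above):

```python
# Rough illustration: whether aggregate chicken or cow welfare dominates
# depends on whether experience is scaled by nervous-system size.
# Populations are from the comment above; brain masses are approximate,
# and the chicken figure is an assumption for illustration.

chickens = 24e9          # world chicken population
cows = 1.3e9             # world cow population

cow_brain_g = 440        # roughly the 425-458 g range above
chicken_brain_g = 4      # assumed; not given above

# Unweighted head count: chickens outnumber cows by ~18x.
print(chickens / cows)

# Weighted by brain mass as a crude proxy for quantity of experience:
# cows come out ahead by roughly 6x on these assumed figures.
print((cows * cow_brain_g) / (chickens * chicken_brain_g))
```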

Replies from: Lukas_Gloor, Xodarap, Armok_GoB, army1987, jkaufman, DanArmak
comment by Lukas_Gloor · 2013-07-28T22:29:26.663Z · LW(p) · GW(p)

I fully agree with this point you make; I should have mentioned it. I think "probabilistic discounting" should refer to both "probability of being sentient" and "intensity of experiences given sentience". I'm not convinced that (relative) brain size makes a difference in this regard, but I certainly wouldn't rule it out, so this indeed factors in probabilistically and I don't consider this to be speciesist.

comment by Xodarap · 2013-07-31T00:13:34.615Z · LW(p) · GW(p)

Note that by this measure, ants are six times more important than humans.

But to address your question: "speciesism" is not a label that's slapped on people who disagree with you. It's merely a shorthand way of saying "many people have a cognitive bias that humans are more 'special' than they actually are, and this bias prevents them from updating their beliefs in light of new evidence."

Brain-to-body quotient is one type of evidence we should consider, but it's not a great one. The encephalization quotient improves on it slightly by considering the non-linearity of body size, but there are many other metrics which are probably more relevant.

Replies from: CarlShulman, Lumifer
comment by CarlShulman · 2013-07-31T01:34:54.563Z · LW(p) · GW(p)

Note that by this measure, ants are six times more important than humans.

You linked to a page comparing brain-to-body-weight ratios, rather than any absolute features of the brain, and it refers not to ants in general but to unusually miniaturized ants in which the rest of the body is shrunken. That seems pretty irrelevant.

Brain-to-body quotient is one type of evidence we should consider, but it's not a great one.

I was using total brain mass and neuron count, not brain-to-body-mass.

but there are many other metrics which are probably more relevant.

I agree these are relevant evidence about quality of experience, and whether to attribute experience at all. But I would say that quality and quantity of experience are distinguishable (although the absence of experience implies quantity 0).

comment by Lumifer · 2013-07-31T00:34:17.168Z · LW(p) · GW(p)

It's merely a shorthand way of saying "many people have a cognitive bias that humans are more 'special' than they actually are

This statement implies that humans can be more or less special "actually", as if it were a matter of fact, of objective reality.

That is not true, however. Humans are special in the same way a roast is tasty or a host charming. It is entirely in the eye of the beholder, it's a subjective opinion and as such there is no "actually" about it.

Your point is equivalent to saying "many people have a cognitive bias that roses are more 'pretty' than they actually are".

Replies from: Xodarap, Vaniver, MugaSofer
comment by Xodarap · 2013-07-31T11:46:36.495Z · LW(p) · GW(p)

It is entirely in the eye of the beholder, it's a subjective opinion and as such there is no "actually" about it.

As mentioned in the original post, the same can be said of race: I may subjectively prefer white people.

You might bite the bullet here and say that yes, in fact, racism, sexism etc. is morally acceptable, but I think most people would agree that these __isms are wrong, and so speciesism must also be wrong.

Replies from: Lumifer, wedrifid
comment by Lumifer · 2013-07-31T15:21:51.900Z · LW(p) · GW(p)

the same can be said of race: I may subjectively prefer white people.

Yes. That's perfectly fine. In fact, if you examine the revealed preferences (e.g. who people prefer to have as their neighbours or who they prefer to marry) you will see that most people in reality do prefer others of their own race.

And, of course, the same can be said of sex, too. Unless you are an evenhanded bi, you're most certainly guilty of preferring some specific sex (or maybe gender, it varies).

You might bite the bullet here and say that yes, in fact, racism, sexism etc. is morally acceptable

"Morally acceptable" is a judgement, it is conditional on which morality you're using as your standard. Different moralities will produce different moral acceptability for the same actions.

Perhaps you wanted to say "socially acceptable"? In particular, "socially acceptable in contemporary US"? That, of course, is a very different thing.

I think most people would agree that these __isms are wrong, and so speciesism must also be wrong.

Sigh. This is a rationality forum, no? And you're using emotionally charged guilt-by-association arguments? (it's actually designed guilt-by-association since the word "speciesism" was explicitly coined to resemble "racism", etc.).

Warning: HERE BE MIND-KILLERS!

Replies from: davidpearce, Xodarap
comment by davidpearce · 2013-07-31T20:22:28.340Z · LW(p) · GW(p)

Lumifer, should the charge of "mind-killers" be levelled at anti-speciesists or meat-eaters? (If you were being ironic, apologies for being so literal-minded.)

Replies from: NotInventedHere, wedrifid, Lumifer
comment by NotInventedHere · 2013-07-31T20:31:22.606Z · LW(p) · GW(p)

I'm fairly sure it's for the examples referencing the politically charged issues of racism and sexism.

comment by wedrifid · 2013-08-02T03:02:00.894Z · LW(p) · GW(p)

Lumifer, should the charge of "mind-killers" be levelled at anti-speciesists or meat-eaters?

It can be levelled at most people who employ either of those terms.

comment by Lumifer · 2013-07-31T20:32:10.085Z · LW(p) · GW(p)

Neither. It just looks like it would be a useful sign in front of animal-rights discussions.

I see people having strong emotional priors and marshaling arguments in favour of predefined conclusions. Not that different from politics, really, except maybe there's less tribal identity involved.

comment by Xodarap · 2013-08-01T01:46:10.048Z · LW(p) · GW(p)

I apologize for presenting the argument in a way that's difficult to understand. Here are the facts:

  1. If you believe that subjective opinions which are not based on evidence are morally acceptable, then you must believe that sexism, racism, etc. are acceptable
  2. We* don't believe that sexism, racism, etc. are acceptable
  3. Therefore, we cannot accept arguments based on subjective opinions

Is there a better way to phrase this?

(* "We" here means the broader LW community. I realize that you disagree, but I didn't know that at the time of writing.)

Replies from: SaidAchmiz, solipsist, Vaniver, Lumifer, wedrifid
comment by Said Achmiz (SaidAchmiz) · 2013-08-01T02:50:51.859Z · LW(p) · GW(p)

Y'got some... logical problems going on, there.

Firstly, your (1), while true, is misleading; it should read "If you believe that subjective opinions which are not based on evidence are morally acceptable, then you must believe that [long, LONG, probably literally infinite list of possible views, of which sexism and racism may be members but which contains innumerably more other stuff] are morally acceptable". Sure, accepting beliefs without evidence may lead us to sexism and/or racism, but that's hardly our biggest problem at that point.

Secondly, you presuppose that sexism and racism are necessarily not based on evidence. Of course, you may say that sexism and racism are by definition not based on evidence, because if there's evidence, then it's not sexist/racist, but that would be one of those "37 Ways That Bad Stuff Can Happen" or what have you; most people, after all, do not use your definition of "sexist" or "racist"; the common definition takes no notice of whether there's evidence or not.

Thirdly, for every modus ponens there is a modus tollens — and, as in this case, vice versa: we could decide that "subjective" opinions not based on evidence are morally acceptable (after all, we're not talking about empirical matters, right? These are moral positions). This, by your (1) and modus ponens, would lead us to accept sexism and racism. Intended? Or no?

Finally — and this is the big one — it strikes me as fundamentally backwards to start from broad moral positions, and reason from them to a decision about whether we need evidence for our moral positions.

Replies from: Jiro, Xodarap
comment by Jiro · 2013-08-01T03:08:58.743Z · LW(p) · GW(p)

There's a bigger logical flaw: "belief that subjective opinions not based on evidence are acceptable" is an ambiguous English phrase. It can mean belief that:

1) if X is a subjective opinion, then X is acceptable.

2) there exists at least one X such that X is a subjective opinion and is acceptable

Needless to say, the argument depends on it being #1, while most people who would say such a thing would mean #2.

I believe that hairdryers are for sale at Wal-Mart. That doesn't mean that every hairdryer in existence is for sale at Wal-Mart.
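Spelled out in quantifier terms (with "Subjective" and "Acceptable" as stand-in predicates), the two readings are:

$$\text{(1)}\quad \forall x\,\big(\mathrm{Subjective}(x) \rightarrow \mathrm{Acceptable}(x)\big)$$

$$\text{(2)}\quad \exists x\,\big(\mathrm{Subjective}(x) \wedge \mathrm{Acceptable}(x)\big)$$

Reading (2) does not entail reading (1), so the acceptability of any particular subjective opinion (racism, say) only follows under reading (1).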

Replies from: SaidAchmiz, Xodarap
comment by Said Achmiz (SaidAchmiz) · 2013-08-01T03:19:58.236Z · LW(p) · GW(p)

Yes, good point — the "some" vs. "all" distinction is being ignored.

comment by Xodarap · 2013-08-01T11:57:19.593Z · LW(p) · GW(p)

Good point, thank you. I have tried again here.

comment by Xodarap · 2013-08-01T11:56:42.400Z · LW(p) · GW(p)

Thank you Said for your helpful comments. How is this:

  1. Suppose we are considering whether being A is more morally valuable than being B. If we don't require evidence when making that decision, then lots of ridiculous conclusions are possible, including racism and sexism.
  2. We don't want these ridiculous conclusions.
  3. Therefore, when judging the moral worth of beings, the differentiation must be based on evidence.

Regarding your "Finally" point - I was responding to Lumifer's statement:

Humans are special in the same way a roast is tasty or a host charming. It is entirely in the eye of the beholder, it's a subjective opinion and as such there is no "actually" about it.

I agree that most people wouldn't take this position, so my argument is usually more confusing than helpful. But in this case it seemed relevant.

Replies from: Jiro, Lumifer
comment by Jiro · 2013-08-01T16:30:38.381Z · LW(p) · GW(p)

This has the same flaw as before, just phrased a little differently. "Suppose I am ordering a pizza. If we don't require it to be square, then all sorts of ridiculous possibilities are possible, such as a pizza a half inch wide and 20 feet long. We don't want these ridiculous possibilities, so we better make sure to always order square pizzas."

"If we don't require evidence, then ridiculous conclusions are possible" can be interpreted in English to mean

1) In any case where we don't require evidence, ridiculous conclusions are possible.

2) In at least one case where we don't require evidence, ridiculous conclusions are possible.

Most people who think that the statement is true would be agreeing with it in sense #2, just like with the pizzas. And your argument depends on sense #1.

In other words, you're assuming that if evidence isn't used to rule out racism, then nothing else can rule out racism either.

Replies from: Xodarap
comment by Xodarap · 2013-08-03T13:20:19.522Z · LW(p) · GW(p)

Fair enough. What if we replace (1) with

  1. If we allow subjective opinions, then ridiculous conclusions are possible.

Keep in mind that I was responding to Lumifer's comment:

Humans are special in the same way a roast is tasty or a host charming. It is entirely in the eye of the beholder, it's a subjective opinion and as such there is no "actually" about it.

This is not intended to be a grand, sweeping axiom of ethics. I was just pointing out that allowing these subjective opinions proves more than we probably want.

Replies from: Jiro
comment by Jiro · 2013-08-03T17:07:11.894Z · LW(p) · GW(p)

That still has the same flaw. If we allow any and all subjective opinions, then ridiculous conclusions are possible. But it doesn't follow that if we allow some subjective opinions, ridiculous conclusions are possible. And nobody's claiming the former.

comment by Lumifer · 2013-08-01T16:00:56.375Z · LW(p) · GW(p)

Suppose we are considering whether being A is more morally valuable than being B. If we don't require evidence when making that decision, then lots of ridiculous conclusions are possible, including racism and sexism.

The issue isn't whether you require evidence. The issue is solely which moral yardstick you are using.

The "evidence" is the application of that particular moral metric to beings A and B, but it seems to me you're should be more concerned with the metric itself.

To give a crude and trivial example, if the metric is "Long noses are better than short noses" then the evidence is length of noses of A and B and on the basis of this evidence we declare the long-nose being A to be more valuable (conditional on this metric, of course) than the short-nose being B. I don't think you'll be happy with this outcome :-)

Oh, and you are still starting with the predefined conclusion and then looking for ways to support it.

comment by solipsist · 2013-08-02T12:42:44.751Z · LW(p) · GW(p)

By the way, thank you for spelling out your position with a clear, valid argument that keeps the conversation moving forward. In the heat of argument we often forget to express our appreciation of well-posed comments.

comment by Vaniver · 2013-08-02T03:54:09.466Z · LW(p) · GW(p)

(* "We" here means the broader LW community. I realize that you disagree, but I didn't know that at the time of writing.)

This is not a core belief of the broader LW community. An actual core belief of the LW community:

That which can be destroyed by the truth should be.

Replies from: wedrifid
comment by wedrifid · 2013-08-02T05:20:09.320Z · LW(p) · GW(p)

An actual core belief of the LW community:

That which can be destroyed by the truth should be.

I'm not sure that is quite true. It is controversial and many are not comfortable with it without caveats.

comment by Lumifer · 2013-08-01T15:48:54.187Z · LW(p) · GW(p)

Here are the facts

You keep using that word. I do not think it means what you think it means.

If you believe that subjective opinions which are not based on evidence are morally acceptable, then you must believe that sexism, racism, etc. are acceptable

That's curious. My and your ideas of morality are radically different. There's not even that much of a common base.

Let me start by re-expressing in my words how I read your position (so that you can fix my misinterpretations). First, you're using "morally acceptable" without any qualifiers or conditionals. This means that you believe there is One True Morality, the Correct One, on the basis of which we can and should judge actions and opinions. Given your emphasis on "evidence", you also seem to believe that this One True Morality is objective, that is, can be derived from actual reality and proven by facts.

Second, you divide subjective opinions into two classes: "not based on evidence" and, presumably, "based on evidence". Note that this is not at all the same thing as "falsifiable" vs. "non-falsifiable". For example, let's say I try two kinds of wine and declare that I like the second wine better. Is such a subjective opinion "based on evidence"?

You also have major logic problems here (starting with the all/some issue), but it's a mess and I think other comments have addressed it.

To contrast, I'll give a brief outline of how I view morality. I think of morality as a more or less coherent set of values at the core of which is a subset of moral axioms. These moral axioms are certainly not arbitrary -- many factors influence them, the three biggest ones are probably biology, societal/cultural influence, and individual upbringing and history -- but they are not falsifiable. You cannot prove them right or wrong.

Evidence certainly matters, but it matters mostly at the interface of moral values and actions: evidence tells you whether the actual outcomes of your actions match your intent and your values. It is, of course, often the case that they do not. However evidence cannot tell you what you should want or what you should value.

We* don't believe that sexism, racism, etc. are acceptable

Heh. I neither believe you have the power to speak for the entire LW community, nor do I care what you find morally acceptable or unacceptable.

Therefore, we cannot accept arguments based on subjective opinions

As has been noted, your logic is flawed. However the bigger issue is your confusion between arguments and declarative statements (that e.g. reflect personal values). Arguments serve to persuade, to change someone's mind -- subjective opinions do not. If I say I hate tomatoes, that's not a reason for you to modify your attitude towards tomatoes; it's just an observation about myself. I am not sure what you mean by "accepting" it.

comment by wedrifid · 2013-08-02T04:47:49.108Z · LW(p) · GW(p)

Here are the facts:

If you believe that subjective opinions which are not based on evidence are morally acceptable, then you must believe that sexism, racism, etc. are acceptable

This does not follow. (It can be repaired by adding an "all" to the antecedent, but then the conclusion in '3' would not follow from 1 and 2.)

Is there a better way to phrase this?

Basically, no. Your argument is irredeemably flawed.

comment by wedrifid · 2013-08-02T04:40:22.750Z · LW(p) · GW(p)

You might bite the bullet here and say that yes, in fact, racism, sexism etc. is morally acceptable, but I think most people would agree that these __isms are wrong, and so speciesism must also be wrong.

This does not follow.

comment by Vaniver · 2013-08-02T03:56:57.881Z · LW(p) · GW(p)

Humans are special in the same way a roast is tasty or a host charming. It is entirely in the eye of the beholder, it's a subjective opinion and as such there is no "actually" about it.

The local explanation of this concept is the 2-place word, which I rather like.

comment by MugaSofer · 2013-07-31T14:43:00.764Z · LW(p) · GW(p)

This statement implies that humans can be more or less special "actually", as if it were a matter of fact, of objective reality.

Well yes, yes it does. Even if "specialness" is defined purely within human neurology, that doesn't mean you can't apply its criteria to parts of reality and be objectively right or wrong about the result - just like, say, numbers.

Now, you could argue that humans vary with regards to how "special" humanity is to them, I suppose ... but in practice we seem to have a common cause, generally. Alternately, you could complain that paperclippers disagree about our "specialness" (or rather mean something different by the term, since their specialness algorithm returns high values for paperclips and low ones for humans and rocks), and is therefore insufficiently objective, but ...

Replies from: Lumifer
comment by Lumifer · 2013-07-31T15:32:59.345Z · LW(p) · GW(p)

Well yes, yes it does.

I disagree. Here is the relevant difference: if you're using "special" unconditionally, you're only expressing a fuzzy opinion which is just that, an opinion. To get to the level of facts you need to make your "special" conditional on some specific standard or metric and thus convert it into a measurement.

It's still the same as saying that prettiness of roses is objective. Unconditionally, it's not. But if you want to, you can define 'prettiness' sufficiently precisely to make it a measurement and then you can objectively talk about prettiness of roses.

Replies from: MugaSofer
comment by MugaSofer · 2013-07-31T16:31:21.181Z · LW(p) · GW(p)

Indeed. The difference being that humans don't all have the same prettiness-metrics, which is why the comparison fails.

Replies from: Lumifer
comment by Lumifer · 2013-07-31T16:51:09.071Z · LW(p) · GW(p)

Humans all have the same specialness metrics?? I don't think so.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-04T14:22:02.113Z · LW(p) · GW(p)

Well, obviously some of them are biased in different directions ... but yeah, it looks to me like CEV coheres.

EDIT: Unless I've completely misunderstood you somehow. Far from impossible.

comment by Armok_GoB · 2013-07-29T21:55:19.715Z · LW(p) · GW(p)

Brain size or number of neurons might work within a general group such as "mammals"; however, birds, for example, seem to be significantly smarter in some sense than a mammal with an equivalently-sized brain, probably owing to some difference in underlying architecture.

Replies from: Douglas_Knight, CarlShulman
comment by Douglas_Knight · 2013-07-31T21:17:39.343Z · LW(p) · GW(p)

Do you have a specific bird and mammal in mind?

Brain mass grows with body mass. It's so noisy that people can't decide whether it is the 2/3 or 3/4 power of body mass.* It is said that a mouse is as smart as a cow. What the cow is doing with all that gray matter, I don't know. Smart animals, like apes, dolphins, and ravens have bigger brains than the trend line, but the deviation is small, so they have smaller brains than larger animals. From this point of view, saying that birds are smart for their brain size is just saying that they are small.

* probably the right answer is 3/4 and 2/3 is just promoted by people who found 3/4 inexplicable, but Geoffrey West says that denominators of 4 are OK.

Replies from: Armok_GoB
comment by Armok_GoB · 2013-08-01T18:37:34.400Z · LW(p) · GW(p)

Well, yeah. Although I guess mammals tend to have bigger brains relative to their bodies, so you'd still expect the opposite?

comment by CarlShulman · 2013-07-29T22:46:25.703Z · LW(p) · GW(p)

Some of the relevant differences to look at are energy consumption, synapses, relative emphasis on different brain regions, selective pressure on different functions, sensory vs cognitive processing, neuron and nerve size (which affects speed and energy use), speed/firing rates. I'm just introducing the basic point here. Also see my other point about the distinction between intelligence and experience.

comment by A1987dM (army1987) · 2013-07-29T19:58:10.051Z · LW(p) · GW(p)

For example wikipedia gives the world .

I think there's a link not showing due to broken formatting.

Replies from: CarlShulman
comment by CarlShulman · 2013-07-29T20:51:57.639Z · LW(p) · GW(p)

Fixed.

comment by jefftk (jkaufman) · 2013-07-29T12:09:06.359Z · LW(p) · GW(p)

How small a subsystem can experience pleasure or pain? If we developed configurations specifically for this purpose and sacrificed all the other things you normally want out of a brain, we could likely get far more sentience per gram of neurons than you get with any existing brain. If someone built a "happy neuron farm" of these, would that be a good thing? Would a "sad neuron farm" be bad?

EDIT: expanded this into a top level post.

Replies from: CarlShulman
comment by CarlShulman · 2013-07-29T12:59:47.854Z · LW(p) · GW(p)

I don't think that we should be confident that such things are all that matter (indeed, I think that's not true), or that the value is independent of features like complexity (a thermostat program vs an autonomous social robot).

If someone built a "happy neuron farm" of these, would that be a good thing? Would a "sad neuron farm" be bad?

I would answer "yes" and "yes," especially in expected value terms.

comment by DanArmak · 2013-07-28T22:27:37.453Z · LW(p) · GW(p)

Here are brain masses for some animals:

Isn't it better to consider brain-to-body mass ratios? A lion isn't 1.5 orders of magnitude smarter than a housecat. I wouldn't assume that quantity of experience is linear in the number of neurons.

Replies from: CarlShulman
comment by CarlShulman · 2013-07-28T22:46:11.681Z · LW(p) · GW(p)

Computer performance in chess (among many other things) scales logarithmically or worse with computer speeds/hardware. Humans with more time and larger collaborating groups also show diminishing returns.

But if we're talking about reinforcement learning and sensory experience in themselves, we're not interested in the (sublinear) usefulness of scaling for intelligence, but the number of subsystems undergoing the morally relevant processes. Neurons are still a rough proxy for that (details of the balance of nervous system tissue between functions, energy supply, firing rates, and other issues would matter substantially), but should be far closer to linear.

comment by jefftk (jkaufman) · 2013-07-28T20:30:16.284Z · LW(p) · GW(p)

Some might be willing to bite the bullet at this point, trusting some strongly held ethical principle of theirs (e.g. A, B, C, D, or E above), to the conclusion of excluding humans who lack certain cognitive capacities from moral concern. One could point out that people's empathy and indirect considerations about human rights, societal stability and so on, will ensure that this "loophole" in such an ethical view almost certainly remains without consequences for beings with human DNA. It is a convenient Schelling point after all to care about all humans (or at least all humans outside their mother's womb).

This is pretty much my view. You dismiss it as unacceptable and absurd, but I would be interested in more detail on why you think that.

a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it

This definitely hits the absurdity heuristic, but I think it is fine. The problem with the Babyeaters in Three Worlds Collide is not that they eat their young but that "the alien children, though their bodies were tiny, had full-sized brains. They could talk. They protested as they were eaten, in the flickering internal lights that the aliens used to communicate."

If I were told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not as if I would be less afraid of the torture or care less about averting it!

I would. Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

(Speciesism has always seemed like a straw-man to me. How could someone with a reductionist worldview think that species classification matters morally? The "why species membership really is an absurd criterion" section is completely reasonable, reasonable enough that I have trouble seeing non-religious arguments against it.)

Replies from: Lukas_Gloor, Xodarap, Estarlio, Jabberslythe, MugaSofer
comment by Lukas_Gloor · 2013-07-28T20:50:09.679Z · LW(p) · GW(p)

Your view seems consistent. All I can say is that I don't understand why intelligence is relevant for whether you care about suffering. (I'm assuming that you think human infants can suffer, or at least don't rule it out completely, otherwise we would only have an empirical disagreement.)

I would. Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

Me too. But we can control for memories by comparing the scenario I outlined with a scenario where you are first tortured (in your normal mental state) and then have the memory erased.

Speciesism has always seemed like a straw-man to me. How could someone with a reductionist worldview think that species classification matters morally?

You're right, it's not a big deal once you point it out. The interesting thing is that even a lot of secular people will at first (and sometimes even afterwards) bring arguments against the view that animals matter that don't stand the test of the argument from species overlap. It seems like they simply aren't thinking through all the implications of what they are saying, as if it isn't their true rejection. Having said that, there is always the option of biting the bullet, but many people who argue against caring about nonhumans don't actually want to do that.

Replies from: jkaufman, atucker
comment by jefftk (jkaufman) · 2013-07-28T21:12:04.176Z · LW(p) · GW(p)

I'm assuming that you think human infants can suffer

I definitely think human infants can suffer, but I think their suffering is different from that of adult humans in an important way. See my response to Xodarap.

comment by atucker · 2013-07-29T03:41:50.901Z · LW(p) · GW(p)

All I can say is that I don't understand why intelligence is relevant for whether you care about suffering.

Intelligence is relevant for the extent to which I expect alleviating suffering to have secondary positive effects. Since I expect most of the value of suffering alleviation to come through secondary effects on the far future, I care much more about human suffering than animal suffering.

As far as I can tell, animal suffering and human suffering are comparably important from a utility-function standpoint, but the difference in EV between alleviating human and animal suffering is huge -- the difference in potential impact on the future between a suffering human vs a non-suffering human is massive compared to that between a suffering animal and a non-suffering animal.

Basically, it seems like alleviating one human's suffering has more potential to help the far future than alleviating one animal's suffering. A human who might be incapacitated to say, deal with x-risk might become helpful, while an animal is still not going to be consequential on that front.

So my opinion winds up being something like "We should help the animals, but not now, or even soon, because other issues are more important and more pressing".

Replies from: threewestwinds
comment by threewestwinds · 2013-07-30T01:51:04.657Z · LW(p) · GW(p)

I agree with this point entirely - but at the same time, becoming vegetarian is such a cheap change in lifestyle (given an industrialized society) that you can have your cake and eat it too. Action - such as devoting time / money to animal rights groups - has to be balanced against other action - helping humans - but that doesn't apply very strongly to inaction - not eating meat.

You can come up with costs - social, personal, etc. - to being vegetarian, but remember to weigh those costs on the right scale. And most of those costs disappear if you merely reduce meat consumption, rather than eliminate it outright.

Replies from: Jiro
comment by Jiro · 2013-07-30T03:17:36.897Z · LW(p) · GW(p)

You can come up with costs - social, personal, etc. to being vegetarian - but remember to weigh those costs on the right scale.

By saying this, you're trying to gloss over the very reason why becoming vegetarian is not a cheap change. Human beings are wired so as not to be able to ignore having to make many minor decisions or face many minor changes, and the fact that such things cannot be ignored means that being vegetarian actually has a high cost which involves being mentally nickel-and-dimed over and over again. It's a cheap change in the sense that you can do it without paying lots of money or spending lots of time, but that isn't sufficient to make the choice cheap in all meaningful senses.

Or to put it another way, being a vegetarian "just to try it" is like running a shareware program that pops up a nag screen every five minutes and occasionally forces you to type a random phrase in order to continue to run. Sure, it's light on your pocketbook, doesn't take much time, and reading the nag screens and typing the phrases isn't difficult, but that's beside the point.

Replies from: threewestwinds
comment by threewestwinds · 2013-07-31T19:05:03.716Z · LW(p) · GW(p)

As has been mentioned elsewhere in this conversation, that's a fully general argument - it can be applied to every change one might possibly make in one's behavior.

Let's enumerate the costs, rather than just saying "there are costs."

  • Money wise, you save or break even.
  • It has no time cost in much of the US (most restaurants have vegetarian options).
  • The social cost depends on your situation - if you have people who cook for you, then you have to explain the change to them (in Washington state, this cost is tiny - people are understanding. In Texas, it is expensive).
  • The mental cost is difficult to discuss in a universal way. I found them to be rather small in my own case. Other people claim them to be quite large. But "I don't want to change my behavior because changing behavior is hard" is not terribly convincing.

Your discounting of non-human life has to be rather extreme for "I will have to remind myself to change my behavior" to outweigh an immediate, direct and calculable reduction in world suffering.

Replies from: SaidAchmiz, Jiro
comment by Said Achmiz (SaidAchmiz) · 2013-07-31T23:44:01.930Z · LW(p) · GW(p)

Money wise, you save or break even.

This is false. Unless you eat steak or other expensive meats on a regular basis, meat is quite cheap. For example, my meat consumption is mostly chicken, assorted processed meats (salamis, frankfurters, and other sorts of sausages, mainly, but also things like pelmeni), fish (not the expensive kind), and the occasional pork (canned) and beef (cheap cuts). None of these things are pricy; I am getting a lot of protein (and fat and other good/necessary stuff) for my money.

It has no time cost in much of the US (most restaurants have vegetarian options).

Do you eat at restaurants all the time? Learning how to cook the new things you're now eating instead of meat is a time cost.

Also, there are costs you don't mention: for instance, a sudden, radical change in diet may have unforeseen health consequences. If the transition causes me to feel hungry all the time, that would be disastrous; hunger has an extreme negative effect on my mental performance, and as a software engineer, that is not the slightest bit acceptable. Furthermore, for someone with food allergies, like me, trying new foods is not without risk.

comment by Jiro · 2013-07-31T22:00:06.768Z · LW(p) · GW(p)

it can be applied to every change one might possibly make in one's behavior.

And it would be correct to deny that a change that would possibly be made to one's behavior is "such a cheap change" that we don't need to weigh the cost of the change very much.

Your discounting of non-human life has to be rather extreme for "I will have to remind myself to change my behavior" to out weigh an immediate, direct and calculable reduction in world suffering.

That only applies to someone who already agrees with you about animal suffering to a sufficient degree that he should just become a vegetarian immediately anyway. Otherwise it's not all that calculable.

comment by Xodarap · 2013-07-28T20:48:03.297Z · LW(p) · GW(p)

I wasn't able to glean this from your other article either, so I apologize if you've said it before: do you think non-human animals don't suffer? Or do you believe they suffer, but you just don't care about their suffering?

(And in either case, why?)

Replies from: jkaufman
comment by jefftk (jkaufman) · 2013-07-28T20:52:51.447Z · LW(p) · GW(p)

I think suffering is qualitatively different when it's accompanied by some combination I don't fully understand of intelligence, self-awareness, preferences, etc. So yes, humans are not the only animals that can suffer, but they're the only animals whose suffering is morally relevant.

Replies from: davidpearce, Lukas_Gloor, Xodarap, Emile
comment by davidpearce · 2013-07-28T23:15:59.146Z · LW(p) · GW(p)

jkaufman, the dimmer-switch metaphor of consciousness is intuitively appealing. But consider some of the most intense experiences that humans can undergo, e.g. orgasm, raw agony, or blind panic. Such intense experiences are characterised by a breakdown of any capacity for abstract rational thought or reflective self-awareness. Neuroscanning evidence, too, suggests that much of our higher brain function effectively shuts down during the experience of panic or orgasm. Contrast this intensity of feeling with the subtle and rarefied phenomenology involved in e.g. language production, solving mathematical equations, introspecting one's thought-episodes, etc - all those cognitive capacities that make mature members of our species distinctively human. For sure, this evidence is suggestive, not conclusive. But the supportive evidence converges with e.g. microelectrode studies using awake human subjects. Such studies suggest the limbic brain structures that generate our most intense experiences are evolutionarily very ancient. Also, the same genes, same neurotransmitter pathways and same responses to noxious stimuli are found in our fellow vertebrates. In view of how humans treat nonhumans, I think we ought to be worried that humans could be catastrophically mistaken about nonhuman animal sentience.

Replies from: Kawoomba
comment by Kawoomba · 2013-07-28T23:21:01.497Z · LW(p) · GW(p)

"Accompanied" can also mean "reflected upon after the fact".

I agree with your last sentence though.

comment by Lukas_Gloor · 2013-07-28T21:12:15.605Z · LW(p) · GW(p)

How certain are you that there is such a qualitative difference, and that you want to care about it? If there is some empirical (or perhaps also normative) uncertainty, shouldn't you at least assign some amount of concern to sentient beings that lack self-awareness?

Replies from: thebestwecan
comment by thebestwecan · 2014-06-11T00:54:38.442Z · LW(p) · GW(p)

I second this. Really not sure what justifies such confidence.

comment by Xodarap · 2013-07-28T21:36:36.455Z · LW(p) · GW(p)

It strikes me that the only "disagreement" you have with the OP is that your reasoning isn't completely spelled out.

If you said, for example, "I don't believe pigs' suffering matters as much because they don't show long-term behavior modifications as a result of painful stimuli" that wouldn't be a speciesist remark. (It might be factually wrong, though.)

comment by Emile · 2013-07-30T21:10:22.492Z · LW(p) · GW(p)

So yes, humans are not the only animals that can suffer, but they're the only animals whose suffering.

There's something missing at the end, like "... is morally relevant", right?

Replies from: jkaufman
comment by jefftk (jkaufman) · 2013-07-31T02:14:51.203Z · LW(p) · GW(p)

Fixed; thanks!

comment by Estarlio · 2013-07-29T00:14:03.273Z · LW(p) · GW(p)

How do you avoid it being kosher to kill you when you're asleep - and thus unable to perform at your usual level of consciousness - if you don't endorse some version of the potential principle?

If you were to sleep and never wake, then it wouldn't necessarily seem wrong, even from my perspective, to kill you. It seems like it's your potential for waking up that makes it wrong.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2013-07-29T02:24:08.491Z · LW(p) · GW(p)

Killing me when I'm asleep is wrong for the same reason as killing me instantly and painlessly when I'm awake is wrong. Both ways I don't get to continue living this life that I enjoy.

(I'm not as anti-death as some people here.)

Replies from: Estarlio
comment by Estarlio · 2013-07-29T11:47:54.233Z · LW(p) · GW(p)

So, presumably, if you were destined for a life of horrifying squicky pain some time in the next couple of weeks, you'd approve of me just killing you. I mean ideally you'd probably like to be killed as close to the onset of the HSP as possible, but still, the future seems pretty important when determining whether you want to persist - it's even in the text you linked:

A death is bad because of the effect it has on those that remain and because it removes the possibility for future joy on the part of the deceased.

So, bearing in mind that you don't always seem to be performing at your normal level of thought - e.g. when you're asleep - how do you bind that principle so that it applies to you and not infants?

Replies from: jkaufman
comment by jefftk (jkaufman) · 2013-07-29T12:10:37.253Z · LW(p) · GW(p)

I don't think you should kill infants either, again for the "effect it has on those that remain and because it removes the possibility for future joy on the part of the deceased" logic.

Replies from: Estarlio
comment by Estarlio · 2013-07-29T12:29:35.393Z · LW(p) · GW(p)

How do you reconcile that with:

a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it

This definitely hits the absurdity heuristic, but I think it is fine. The problem with the Babyeaters in Three Worlds Collide is not that they eat their young but that "the alien children, though their bodies were tiny, had full-sized brains. They could talk. They protested as they were eaten, in the flickering internal lights that the aliens used to communicate."

Replies from: jkaufman
comment by jefftk (jkaufman) · 2013-07-29T12:42:41.315Z · LW(p) · GW(p)

The "as long as the people are ok with it" deals with the "effect it has on those that remain". The "removes the possibility for future joy on the part of the deceased" remains, but depending on what benefits the society was getting out of consuming their young it might still come out ahead. The future experiences of the babies are one consideration, but not the only one.

Replies from: Estarlio
comment by Estarlio · 2013-07-29T13:44:50.330Z · LW(p) · GW(p)

Granted, but do you really think that they're going to be so incredibly tasty that the value people gain from eating babies over not eating babies outweighs the loss of all the future experiences of the babies?

To link that back to the marginal cases argument, which I believe - correct me if I'm wrong - you were responding to: Do you think that meat diets are so much tastier than vegetarian diets that the utility gained for human society outweighs the suffering and death of the animals? (Which may not be the only consideration, but which I think - I may be wrong - you'd admit at this point isn't nothing.) If so, have you made an honest attempt to test this assumption for yourself by, for instance, getting a bunch of highly rated veg recipes and trying to be vegetarian for a month or so?

Replies from: jkaufman, rocurley
comment by jefftk (jkaufman) · 2013-07-29T14:08:52.184Z · LW(p) · GW(p)

that the value people gain from eating babies over not eating babies outweighs the loss of all the future experiences of the babies?

The value a society might get from it isn't limited to taste. They could have some sort of complex and fulfilling system set up around it. But I think you're right, that any world I can think of where people are eating (some of) their babies would be improved by them switching to stop doing that.

that the utility gained for human society outweighs the suffering and death of the animals?

The "loss of all the future experiences of the babies" bit doesn't apply here. Animals stay creatures without moral worth through their whole lives, and so the "suffering and death of the animals" here has no moral value.

Replies from: Estarlio
comment by Estarlio · 2013-07-29T15:27:43.125Z · LW(p) · GW(p)

The "loss of all the future experiences of the babies" bit doesn't apply here. Animals stay creatures without moral worth through their whole lives, and so the "suffering and death of the animals" here has no moral value.

Pigs can meaningfully play computer games. Dolphins can communicate with people. Wolves have complex social structures and hunting patterns. I take all of these to be evidence of intelligence beyond the battery-farmed infant level. They're not as smart as humans but it's not like they've got 0 potential for developing intelligence. Since birth seems to deprive you of a clear point in this regard - what are your criteria for being smart enough to be morally considerable, and why?

comment by rocurley · 2013-08-02T15:56:49.362Z · LW(p) · GW(p)

If you're considering opening a baby farm, not opening the baby farm doesn't mean the babies get to live fulfilling lives: it means they don't get to exist, so that point is moot.

Replies from: Estarlio
comment by Estarlio · 2013-08-02T16:31:02.264Z · LW(p) · GW(p)

If you view human potential as valuable, then you end up saying something like: people should maximise it via breeding, up to whatever the resource boundary is for meaningful human life. Unless that is implicitly bounded - which I think is a reasonable assumption to make for most people's likely world views.

comment by Jabberslythe · 2013-07-28T20:38:54.858Z · LW(p) · GW(p)

I would. Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

Is this because you expect the torture wouldn't be as bad if that happened or because you would care less about yourself in that state? Or a combination?

Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

What if you were killed immediately afterwards, so long term memories wouldn't come into play?

Replies from: jkaufman
comment by jefftk (jkaufman) · 2013-07-28T20:48:19.402Z · LW(p) · GW(p)

Is this because you expect the torture wouldn't be as bad if that happened or because you would care less about yourself in that state? Or a combination?

If I had the mental capacity of a chicken it would not be bad to torture me, both because I wouldn't matter morally and because I wouldn't be "me" anymore in any meaningful sense.

What if you were killed immediately afterwards

If you offered me the choice between:

A) 50% chance you are tortured and then released, 50% chance you are killed immediately

B) 50% chance you are tortured and then killed, 50% chance you are released immediately

I would strongly prefer B. Is that what you're asking?

Replies from: Jabberslythe
comment by Jabberslythe · 2013-07-28T21:04:24.436Z · LW(p) · GW(p)

If I had the mental capacity of a chicken it would not be bad to torture me, both because I wouldn't matter morally and because I wouldn't be "me" anymore in any meaningful sense.

If not morally, do the two situations not seem equivalent in terms of your non-moral preference for either? In other words, would you prefer one over the other in purely self interested terms?

I would strongly prefer B. Is that what you're asking?

I was just making the point that if your only reason for thinking that it would be worse for you to be tortured now was that you would suffer more overall through long-term memories, we could just stipulate that you would be killed afterwards in both situations, so long-term memories wouldn't be a factor.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2013-07-28T21:09:16.328Z · LW(p) · GW(p)

do the two situations not seem equivalent

I'm sorry, I'm confused. Which two situations?

we could just stipulate that you would be killed afterwards in both situations, so long-term memories wouldn't be a factor

I see. Makes sense. I was giving long-term memory formation as an example of a way you could remove part of my self and decrease how much I objected to being tortured, but it's not the only way.

Replies from: Jabberslythe
comment by Jabberslythe · 2013-07-28T21:21:58.487Z · LW(p) · GW(p)

I'm sorry, I'm confused. Which two situations?

A) Being tortured as you are now

B) Having your IQ and cognitive abilities lowered then being tortured.

EDIT:

I am asking because it is useful to consider pure self-interest: it seems like a failure of a moral theory if it suggests people act outside of their self-interest without some compensating goodness. If I want to eat an apple but my moral theory says I shouldn't, even though doing so wouldn't harm anyone else, that seems like a point against that moral theory.

I see. Makes sense. I was giving long-term memory formation as an example of a way you could remove part of my self and decrease how much I objected to being tortured, but it's not the only way.

Different cognitive abilities would matter in some ways for how much suffering is actually experienced, but not as much as most people think. There are also situations where it seems like lower cognitive ability could increase the amount an animal suffers: while a chicken is being tortured, it would not really be able to hope that the situation will change.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2013-07-29T02:26:32.170Z · LW(p) · GW(p)

A) Being tortured as you are now; B) Having your IQ and cognitive abilities lowered then being tortured.

Strong preference for (B), having my cognitive abilities lowered to the point that there's no longer anyone there to experience the torture.

comment by MugaSofer · 2013-07-29T16:08:40.886Z · LW(p) · GW(p)

Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

Those are not the same thing. They're not even remotely similar beyond both involving brain surgery.

Speciesism has always seemed like a straw-man to me.

Me too, but I never could persuade the people arguing for it of this fact :(

Replies from: jkaufman
comment by jefftk (jkaufman) · 2013-07-29T17:05:50.015Z · LW(p) · GW(p)

Those are not the same thing.

Agreed.

They're not even remotely similar beyond both involving brain surgery.

I was attempting to give an example of another way in which I might find torture more palatable if I were modified first.

I never could persuade the people arguing for it ...

Right, which is why this argument isn't actually a straw-man and why ice9's post is useful.

Replies from: MugaSofer
comment by MugaSofer · 2013-07-29T22:57:59.533Z · LW(p) · GW(p)

Those are not the same thing.

Agreed.

They're not even remotely similar beyond both involving brain surgery.

I was attempting to give an example of another way in which I might find torture more palatable if I were modified first.

Ah, OK.

Right, which is why this argument isn't actually a straw-man and why ice9's post is useful.

Hah, yes. Sorry, I thought you were complaining it was actually a strawman :/ Whoops.

comment by Qiaochu_Yuan · 2013-07-29T00:01:06.258Z · LW(p) · GW(p)

I strongly object to the term "speciesism" for this position. I think it promotes a mindkilled attitude to this subject ("Oh, you don't want to be speciesist, do you? Are you also a sexist? You pig?").

Replies from: Lukas_Gloor, Zvi, Xodarap, MugaSofer
comment by Lukas_Gloor · 2013-07-29T00:07:33.463Z · LW(p) · GW(p)

You pig?

Speciesist language, not cool!

Haha! Anyway, I agree that it promotes mindkilled attitude (I'm often reading terrible arguments by animal rights people), but on the other hand, for those who agree with the arguments, it is a good way to raise awareness. And the parallels to racism or sexism are valid, I think.

Replies from: Zvi, Vaniver, SaidAchmiz
comment by Zvi · 2013-07-29T13:07:49.021Z · LW(p) · GW(p)

Haha only serious. My brain reacts with terror to that reply, with good reason: It has been trained to. You're implicitly threatening those who make counter-arguments with charges of every ism in the book. The number of things I've had to erase because one "can't" say them without at least ending any productive debate is large.

comment by Vaniver · 2013-07-29T01:20:19.895Z · LW(p) · GW(p)

Haha! Anyway, I agree that it promotes mindkilled attitude (I'm often reading terrible arguments by animal rights people), but on the other hand, for those who agree with the arguments, it is a good way to raise awareness.

I don't think that's a "but on the other hand;" I think that's a "it is a good way to raise awareness because it promotes mindkilled attitude."

comment by Said Achmiz (SaidAchmiz) · 2013-07-29T02:30:13.088Z · LW(p) · GW(p)

Actually, I think it's precisely the parallels to racism and sexism that are invalid. Perhaps ableism? That's closer, at any rate, if still not really the same thing.

comment by Zvi · 2013-07-29T12:59:42.904Z · LW(p) · GW(p)

It's not only the term. The post explicitly uses that exact argument: Since sexism and racism are wrong, and any theoretical argument that disagrees with me can be used to argue for sexism or racism, if you disagree with me you are a sexist, which is QED both because of course you aren't sexist/racist and because regardless, even if you are, you certainly can't say such a thing on a public forum!

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-29T14:03:59.727Z · LW(p) · GW(p)

No no no. I'm not saying "Since sexism and racism are wrong," - I'm saying that those who don't want their arguments to be of a sort that could analogously justify racism or sexism (even if they are themselves neither racist nor sexist) would also need to reject speciesism.

Replies from: Zvi
comment by Zvi · 2013-07-29T14:32:58.833Z · LW(p) · GW(p)

Mindkilling-related issues aside, I am going to do my best to un-mindkill at least one aspect of this question, which is why the frame change.

Is this similar to arguing that if the bloody knife was the subject of an illegal search, which we can't allow because allowing that would lead to other bad things, and therefore is not admissible in trial, then you must not only find the defendant not guilty but actually believe that the defendant did not commit the crime and should be welcome back to polite society?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-29T14:56:47.921Z · LW(p) · GW(p)

No, what makes the difference is that you'd be mixing up the normative level with the empirical one, as I explained here (parent of the linked post also relevant).

Replies from: Zvi
comment by Zvi · 2013-07-29T15:22:37.126Z · LW(p) · GW(p)

In that post, you seem to be making the opposite case: That you should not reject X (animal testing) simply because your argument could be used to support repugnant proposal Y (unwilling human testing); you say that the indirect consequences of Y would be very bad (as they obviously would) but then you don't make the argument that one must then reject X, instead that you should support X but reject Y for unrelated reasons, and you are not required to disregard argument Q that supports both X and Y and thereby reject X (assuming X was in fact utility increasing).

Or, that the fact that a given argument can be used to support a repugnant conclusion (sexism or racism) should not be a justification for not using the argument. In addition, the argument that moral value scales with brain complexity, which you now accept as an edit, is obviously usable to support sexism and racism, in exactly the same way that you are using it as a counterargument:

For any given characteristic, different people will have different amounts of that characteristic, and for any two groups (male / female, black / white, young / old, whatever) there will be a statistical difference in that measurement (because this isn't physics and equality has probability epsilon, however small the difference) so if you tie any continuous measurement to your moral value of things, or any measurement that could ever not fully apply to anything human, you're racist and sexist.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-29T15:35:49.816Z · LW(p) · GW(p)

In that post, you seem to be making the opposite case: That you should not reject X (animal testing) simply because your argument could be used to support repugnant proposal Y (unwilling human testing); you say that the indirect consequences of Y would be very bad (as they obviously would) but then you don't make the argument that one must then reject X, instead that you should support X but reject Y for unrelated reasons, and you are not required to disregard argument Q that supports both X and Y and thereby reject X (assuming X was in fact utility increasing).

Exactly. This is because the overall goal is increasing utility, and not a societal norm of non-discrimination. (This is of course assuming that we are consequentialists.) My arguments against discrimination/speciesism apply at the normative level, when we are trying to come up with a definition of utility.

For any given characteristic, different people will have different amounts of that characteristic, and for any two groups (male / female, black / white, young / old, whatever) there will be a statistical difference in that measurement (because this isn't physics and equality has probability epsilon, however small the difference) so if you tie any continuous measurement to your moral value of things, or any measurement that could ever not fully apply to anything human, you're racist and sexist.

I wouldn't classify this as sexism/racism. If there are sound reasons for considering the properties in question relevant, then treating beings of different species differently because of properties that correlate with species membership, and not because of the species difference itself, is in my view not a form of discrimination.

As I wrote:

It refers to a discriminatory attitude against a being where less ethical consideration i.e. caring less about a being's welfare or interests is given solely because of the "wrong" species membership. The "solely" here is crucial, and it's misunderstood often enough to warrant the redundant emphasis.

comment by Xodarap · 2013-07-30T12:23:18.483Z · LW(p) · GW(p)

It's not sexist to say that women are more likely to get breast cancer. This is a differentiation based on sex, but it's empirically founded, so not sexist.

Similarly, we could say that ants' behavior doesn't appear to be affected by narcotics, so we should discount the possibility of their suffering. This is a judgement based on species, but is empirically founded, so not speciesist.

Things only become ___ist if you say "I have no evidence to support my view, but consider X to be less worthy solely because they aren't in my race/class/sex/species."

I genuinely don't think anyone on LW thinks speciesism is OK.

Replies from: SaidAchmiz, Lumifer, AndHisHorse
comment by Said Achmiz (SaidAchmiz) · 2013-07-30T13:14:05.741Z · LW(p) · GW(p)

You evade the issue, I think. Is it sexist (or _ist) if you say "I consider X to be less worthy because they aren't in my race/class/sex/species, and I do have evidence to support my view"?

Sure, saying women are more likely to get breast cancer isn't sexist; but this is a safe example. What if we had hard evidence that women are less intelligent? Would it be sexist to say that, then? (Any objection that contains the words "on average" must contend with the fact that any particular woman may have a breast cancer risk that falls anywhere on the distribution, which may well be below the male average.)

No one is saying "I think pigs are less worthy than humans, and this view is based on no empirical data whatever; heck, I've never even seen a pig. Is that something you eat?"

We have tons of empirical data about differences between the species. The argument is about exactly which of the differences matter, and that is unlikely to be settled by passing the buck to empiricism.

Replies from: MugaSofer, army1987, Xodarap
comment by MugaSofer · 2013-07-31T15:01:57.934Z · LW(p) · GW(p)

"I think pigs are less worthy than humans, and this view is based on no empirical data whatever; heck, I've never even seen a pig. Is that something you eat?"

Upvoted just for this.

comment by A1987dM (army1987) · 2013-07-31T10:25:36.164Z · LW(p) · GW(p)

Sure, saying women are more likely to get breast cancer isn't sexist; but this is a safe example. What if we had hard evidence that women are less intelligent? Would it be sexist to say that, then? (Any objection that contains the words "on average" must contend with the fact that any particular woman may have a breast cancer risk that falls anywhere on the distribution, which may well be below the male average.)

I wouldn't say it is, but other people would use the word “sexist” with a broader sense than mine (assuming that each person defines “sexism” and “racism” in analogous ways).

comment by Xodarap · 2013-07-30T23:22:57.538Z · LW(p) · GW(p)

Is it sexist (or _ist) if you say "I consider X to be less worthy because they aren't in my race/class/sex/species, and I do have evidence to support my view"?

No. Because your statement "X is less worthy because they aren't of my gender" in that case is synonymous with "X is less worthy because they lack attribute Y", and so gender has left the picture. Hence it can't be sexist.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-30T23:38:56.887Z · LW(p) · GW(p)

Ok, but if you construe it that way, then "X is less worthy just because of their gender" is a complete strawman. No one says that. What people instead say is "people of type T are inferior in way W, and since X is a T, s/he is inferior in way W".

Examples: "women are less rational than men, which is why they are inferior, not 'just' because they're women"; "black people are less intelligent than white people, which is why they are inferior, not 'just' ..."; etc.

By your construal, are these things not sexist/racist? But then neither is this speciesist: "nonhumans are not self-aware, unlike humans, which is why they are inferior, not 'just' because they're nonhumans".

Replies from: Xodarap
comment by Xodarap · 2013-07-30T23:48:34.154Z · LW(p) · GW(p)

I think we are getting into a discussion about definitions, which I'm sure you would agree is not very productive.

But I would absolutely agree that your statement "nonhumans are not self-aware, unlike humans, which is why they are inferior, not 'just' because they're nonhumans" is not speciesist. (It is empirically unlikely though.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-30T23:56:53.778Z · LW(p) · GW(p)

Agreed entirely, let's not argue about definitions.

Do we disagree on questions of fact? On rereading this thread, I suspect not. Your thoughts?

Replies from: Xodarap, MugaSofer
comment by Xodarap · 2013-08-01T01:57:29.405Z · LW(p) · GW(p)

Do we disagree on questions of fact? On rereading this thread, I suspect not

I think so? You seem to have indicated in a few comments that you don't believe nonhuman animals are "self-aware" or "conscious" which strikes me as an empirical statement?

If this is true (and I give at least 30% credence that I've just been misunderstanding you), I'd be interested to hear why you think this. We may not end up drawing the moral line at the same place, but I think consciousness is a slippery enough subject that I at least would learn something from the conversation.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-08-01T02:19:51.843Z · LW(p) · GW(p)

Ok. Yes, I think that nonhuman animals are not self-aware. (Dolphins might be an exception. This is a particularly interesting recent study.)

Dolphins aside, we have no reason to believe that animals are capable of thinking about themselves; of considering their own conscious awareness; of having any self-concept, much less any concept of themselves as persistent conscious entities with a past and a future; of consciously reasoning about other minds, or having any concept thereof; or of engaging in abstract reasoning or thought of any kind.

I've commented before that one critical difference between "speciesism" and racism or sexism or other such prejudices is that a cow can never argue for its own equal treatment; this, I have said, is not a trivial or irrelevant fact. And it's not just a matter of not having the vocal cords to speak, or of not knowing the language, or any other such trivial obstacles to communication; a cow can't even come close to having the concepts required to understand human behavior, human concepts, and human language.

Now, you might not think any of this is morally relevant. Fine. But I would meet with great skepticism — and, sans compelling evidence, probable outright dismissal — any claim that a cow, or a pig, or, even more laughably, a chicken, is self-aware in anything like the sense I outlined above.

(By the way, I am reluctant to commit to any position on "consciousness", merely because the word is used in such a diverse range of ways.)

Replies from: davidpearce
comment by davidpearce · 2013-08-01T08:43:55.662Z · LW(p) · GW(p)

Birds lack a neocortex. But members of at least one species, the European magpie, have convincingly passed the "mirror test" [cf. "Mirror-Induced Behavior in the Magpie (Pica pica): Evidence of Self-Recognition" http://www.plosbiology.org/article/fetchObject.action?representation=PDF&uri=info:doi/10.1371/journal.pbio.0060202]. Most ethologists recognise passing the mirror test as evidence of a self-concept. As well as higher primates (chimpanzees, orang utans, bonobos, gorillas), members of other species who have passed the mirror test include elephants, orcas and bottlenose dolphins. Humans generally fail the mirror test below the age of eighteen months.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-08-01T15:04:52.210Z · LW(p) · GW(p)

You are right, the mirror test is evidence of self-concept. I do not take it to be nearly sufficient evidence, but it is evidence.

Humans generally fail the mirror test below the age of eighteen months.

This supports my view that very young humans are not self-aware (and therefore not morally important) either.

Replies from: davidpearce, Lumifer
comment by davidpearce · 2013-08-01T16:57:17.727Z · LW(p) · GW(p)

Could you possibly say a bit more about why the mirror test is inadequate as a test of possession of a self-concept? Either way, making self-awareness a precondition of moral status has troubling implications. For example, consider what happens to verbally competent adults when feelings of intense fear turn into uncontrollable panic. In states of "blind" panic, reflective self-awareness and the capacity for any kind of meta-cognition is lost. Panic disorder is extraordinarily unpleasant. Are we to make the claim that such panic-ridden states aren't themselves important - only the memories of such states that a traumatised subject reports when s/he regains a measure of composure and some semblance of reflective self-awareness is restored? A pig, for example, or a prelinguistic human toddler, doesn't have the meta-cognitive capacity to self-reflect on such states. But I don't think we are ethically entitled to induce them - any more than we are ethically entitled to waterboard a normal adult human. I would hope posthuman superintelligence can engineer such states out of existence - in human and nonhuman animals alike.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-08-01T17:41:54.111Z · LW(p) · GW(p)

Could you possibly say a bit more about why the mirror test is inadequate as a test of possession of a self-concept?

Surely it is a reach to say that the mirror test, alone, with all of its methodological difficulties, can all by itself raise our probability estimate of a creature's possessing self-awareness to near-certainty? I agree that it's evidence, but calling it a test is pushing it, to say the least. To see just one reason why I might say this, consider that we can, right now, probably program a robot to pass such a test; such a robot would not be self-aware.

As for the rest of your post, I'd like to take this opportunity to object to a common mistake/ploy in such discussions:

"This general ethical principle/heuristic leads to absurdity if applied with the literal-mindedness of a particularly dumb algorithm, therefore reductio ad absurdum."

Your argument here seems to be something like: "Adult humans are sometimes not self-aware, but we still care about them, even during those times. Is self-awareness therefore irrelevant??" No, of course it's not. It's a complex issue. But a chicken is never self-aware, so the point is moot.

Also:

In states of "blind" panic, reflective self-awareness and the capacity for any kind of meta-cognition is lost.

Please provide a citation for this, and I will respond, as my knowledge of this topic (cognitive capacity during states of extreme panic) is not up to giving a considered answer.

Panic disorder is extraordinarily unpleasant.

Having experienced a panic attack on one or two occasions, I am inclined to agree. However, I did not lose my self-concept at those times.

Finally:

But I don't think we are ethically entitled to induce [panic states in pigs/toddlers] - any more than we are ethically entitled to waterboard a normal adult human.

"Ethically entitled" is not a very useful phrase to use in isolation; utilitarianism[1] can only tell us which of two or more world-states to prefer. I've said that I prefer that dogs not be tortured, all else being equal, so if by that you mean that we ought to prefer not to induce panic states in pigs, then sure, I agree. The question is what happens when all else is not equal — which it pretty much never is.

[1] You are speaking from a utilitarian position, yes? If not, then that changes things; "ethically entitled" means something quite different to a deontologist, naturally.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-04T16:24:57.639Z · LW(p) · GW(p)

Your argument here seems to be something like: "Adult humans are sometimes not self-aware, but we still care about them, even during those times. Is self-awareness therefore irrelevant??" No, of course it's not. It's a complex issue. But a chicken is never self-aware, so the point is moot.

Um, "Why don't we stop caring about people who temporarily lose this supposed be-all and end-all of moral value" seems like a valid question, albeit one you hopefully are introspective enough to have an answer for.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-08-04T16:41:35.224Z · LW(p) · GW(p)

Is the question "why don't we temporarily stop caring about people who temporarily lose this etc."?

If so, then maybe we should, if they really lose it. However, please tell me what actions would ensue from, or be made permissible by, a temporary cessation of caring, provided that I still care about that person after they return from this temporary loss of importance.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-06T14:14:54.776Z · LW(p) · GW(p)

That depends on the details of your personal moral system, doesn't it? As I said already, you may well be consistent on this point, but you have not explained how.

comment by Lumifer · 2013-08-01T16:14:29.743Z · LW(p) · GW(p)

This supports my view that very young humans are not self-aware (and therefore not morally important) either.

Try telling a mother that her baby is not morally important.

(I would recommend some training in running and ducking before doing that...)

Replies from: MugaSofer, SaidAchmiz
comment by MugaSofer · 2013-08-04T16:21:26.713Z · LW(p) · GW(p)

I find the idea that babies aren't morally important highly unlikely, but did you have to pick the most biased possible example?

comment by Said Achmiz (SaidAchmiz) · 2013-08-01T16:36:45.589Z · LW(p) · GW(p)

Is this a rebuttal, or merely a snarky quip?

If the latter, then carry on. If the former, please elaborate.

Replies from: Lumifer
comment by Lumifer · 2013-08-01T16:43:26.723Z · LW(p) · GW(p)

Both. I like multiple levels of meaning.

In particular, think about it in the context of whether morality is objective or subjective, what makes subjective opinions morally acceptable, and what is the role of evidence in all this.

Specifically, do you think there's any possible evidence that could lead to you and a mother attaching the same moral importance to her baby?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-08-01T16:59:53.393Z · LW(p) · GW(p)

Is there any evidence that could lead to the mother assigning her baby the same value as I do? Couldn't tell you. (I've never been a mother.)

Vice versa? Probably not.

After all, it's possible that two agents are in possession of the same facts, the same true beliefs, and nonetheless have different preferences. So evidence doesn't do very much for us, here.

In any case, your objection proves too much: after all, try telling a dog owner that his dog is not morally important. For extra laughs, try telling the owner of a custom-built, lovingly-maintained hot rod that his car is not morally important. People (myself included) get attached to all manner of things.

We have to distinguish between valuing something for its own sake (i.e. persons), and valuing things that those persons value (artwork, music, babies, cars, dogs, elegant math theorems, etc.).

Replies from: Lumifer
comment by Lumifer · 2013-08-01T17:31:11.736Z · LW(p) · GW(p)

After all, it's possible that two agents are in possession of the same facts, the same true beliefs, and nonetheless have different preferences. So evidence doesn't do very much for us, here.

I quite agree, but evidently that's a point of contention on this thread.

We have to distinguish between valuing something for its own sake (i.e. persons), and valuing things that those persons value (artwork, music, babies, cars, dogs, elegant math theorems, etc.).

That is true, but I think my quip still stands. I suspect that the mother in my example would strongly insist that the moral value of the baby is high for its own sake and not just because she happens to love the baby (along with her newly remodeled kitchen). Would you call her mistaken?

Replies from: SaidAchmiz, army1987
comment by Said Achmiz (SaidAchmiz) · 2013-08-01T17:46:43.306Z · LW(p) · GW(p)

I suspect that the mother in my example would strongly insist that the moral value of the baby is high for its own sake and not just because she happens to love the baby (along with her newly remodeled kitchen). Would you call her mistaken?

Only if she agrees with me that self-awareness is a key criterion for moral relevance.

Of course, the real answer is that mothers are almost never capable of reasoning rationally about their children, especially in matters of physical harm to the child, and especially when the child is quite young. So the fact that a mother would, in fact, insist on this or that isn't terribly interesting. (She might also insist that her baby is objectively the cutest baby in the maternity ward, but so what?)

comment by A1987dM (army1987) · 2013-08-02T12:14:11.762Z · LW(p) · GW(p)

I suspect that the mother in my example would strongly insist that the moral value of the baby is high for its own sake and not just because she happens to love the baby (along with her newly remodeled kitchen).

Same would apply to other things in SaidAchmiz's list, too.

Replies from: Lumifer
comment by Lumifer · 2013-08-02T15:35:07.632Z · LW(p) · GW(p)

I don't think that is true. For a dog, maybe, for a hot rod, definitely not.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-08-02T17:25:36.245Z · LW(p) · GW(p)

What about for the Mona Lisa?

Replies from: Lumifer, Eugine_Nier
comment by Lumifer · 2013-08-02T17:38:01.614Z · LW(p) · GW(p)

Things are not persons and their price or symbolism does not affect that.

Replies from: SaidAchmiz, Nornagest
comment by Said Achmiz (SaidAchmiz) · 2013-08-02T18:28:27.288Z · LW(p) · GW(p)

My point was: many people would say that the existence of the Mona Lisa is independently good, that it has value for its own sake, regardless of any individual person's appreciation of it.

They would be talking nonsense, of course. But they would say it.

Just like the mother with the baby.

Edit: Also what Nornagest said.

comment by Nornagest · 2013-08-02T17:45:22.137Z · LW(p) · GW(p)

I'm not sure most people treat personhood as the end of the story. It's not uncommon to talk about artistic virtuosity or historical significance as a source of intrinsic value: watch the framing the next time a famous painting gets stolen or a national museum gets bombed or looted in wartime.

Granted, it seems clear to me that these things are only important if there are persons to appreciate them, but the question was about popular intuitions, not LW-normative ethics.

comment by Eugine_Nier · 2013-08-04T07:30:04.533Z · LW(p) · GW(p)

The question of whether the aesthetic value of beautiful objects can be terminal is an interesting but unrelated question.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-08-04T15:55:03.389Z · LW(p) · GW(p)

Unrelated to what...?

The discussion has gone like so:

SaidAchmiz: Babies are not morally important.
Lumifer: A mother would disagree!
SaidAchmiz: Yeah, but that doesn't tell us much, because someone might also disagree with the same claim made about the Mona Lisa. (Implication: and there, they would clearly be wrong, so the fact that a person makes such a claim is not particularly meaningful.)

Replies from: MugaSofer
comment by MugaSofer · 2013-08-04T16:02:52.973Z · LW(p) · GW(p)

SaidAchmiz: Babies are not morally important.

Lumifer: A mother would disagree!

SaidAchmiz: Yeah, but that doesn't tell us much

A ... random person off the street would disagree? People who are cool with eating babies be rare, mate. Even rarer than people who consider the Mona Lisa morally important (by the same order of magnitude as human lives, anyway.)

Um, are you by any chance a psychopath*? This seems like a basic part of the human operating system, subjectively.

*Not a serious question, unless you are, in which case this is valuable information to bear in mind.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-08-04T16:35:04.606Z · LW(p) · GW(p)

Be careful how broadly you cast the "basic part of the human operating system" net. Even without the Typical Mind Fallacy, there are some pretty big and pretty surprising cultural differences out there. (Not that I am necessarily claiming such differences to be the cause of any disagreement in this particular case.)

As for the random person off the street... a random person off the street is likely to disagree with many utilitarian (or ethical in general) claims that your average LessWronger might make. How much weight should we give to this disagreement?

Replies from: MugaSofer
comment by MugaSofer · 2013-08-06T14:18:59.078Z · LW(p) · GW(p)

Be careful how broadly you cast the "basic part of the human operating system" net.

I try to be. But that is certainly the subjective experience of my valuing the lives of children.

As for the random person off the street... a random person off the street is likely to disagree with many utilitarian (or ethical in general) claims that your average LessWronger might make. How much weight should we give to this disagreement?

That depends on our grounds for believing we have identified their mistake, of course.

comment by MugaSofer · 2013-07-31T15:03:17.944Z · LW(p) · GW(p)

Do we disagree on questions of fact? On rereading this thread, I suspect not.

Well, do you disagree WRT conclusions? Are you, in fact, a vegetarian?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-31T15:20:06.120Z · LW(p) · GW(p)

Nope, definitely not a vegetarian. I think that's a broader topic though.

Replies from: MugaSofer
comment by MugaSofer · 2013-07-31T16:33:23.367Z · LW(p) · GW(p)

To be absolutely clear: you agree that nonhumans are probably self-aware, feel pain, and so on and so forth, and are indeed worthy of moral consideration ... but for reasons not under discussion here, you are not a vegetarian? Fair enough, I guess.

EDIT: Apparently not.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-31T17:17:08.746Z · LW(p) · GW(p)

Huh? What? Have you been reading my posts?? Are you perhaps confusing me with someone else...? (Though I haven't seen anyone else here take the position you describe either...)

Yes, I think nonhumans almost certainly feel pain; no, I don't think they're self-aware; no, I don't think they're worthy of moral consideration.

Edit: I don't mean to be harsh on you. Illusion of transparency, I suppose?

Replies from: MugaSofer
comment by MugaSofer · 2013-08-04T14:20:48.274Z · LW(p) · GW(p)

Huh? What? Have you been reading my posts?? Are you perhaps confusing me with someone else...? (Though I haven't seen anyone else here take the position you describe either...)

No, not really. I just read the post where you said you two agreed on facts and was confused - this is why.

comment by Lumifer · 2013-07-30T18:23:47.320Z · LW(p) · GW(p)

I genuinely don't think anyone on LW thinks speciesism is OK.

Ah, the slaying of a beautiful hypothesis by one little ugly fact... :-D

I do feel speciesism is perfectly fine.

Replies from: Emile
comment by Emile · 2013-07-30T21:22:20.954Z · LW(p) · GW(p)

Same here, I think speciesism is a fine heuristic here and now (it may not be so in the future).

Replies from: Xodarap
comment by Xodarap · 2013-07-30T23:36:20.898Z · LW(p) · GW(p)

If it's a heuristic, then it's not speciesism.

If it's a "heuristic" that overrides lots of evidence, then it's speciesism. Which is just another way of saying that you aren't performing a Bayesian update correctly.

comment by AndHisHorse · 2013-07-30T13:18:30.186Z · LW(p) · GW(p)

The issue, though, is not that beliefs are founded on no evidence. Rather, it is that they are founded on insufficient evidence. It would, in my estimation, require some strange, inhuman bigot to say such a thing; rather, they will hold up their prejudices based on evidence which sounds entirely reasonable to them. There is nearly always a justification for treating the other tribe poorly; healthy human psychology doesn't do well with baseless discrimination, so it invents (more accurately, seeks out with a hefty dose of confirmation bias) reasons that its discrimination is well-founded.

In this case, the fact that ants do not appear to be affected by narcotics is evidence that they are different from humans, but it seems that it is insufficient to discount their suffering. I am very curious, however, as to why a lack of behavioral reaction to narcotics indicates that ant suffering is morally neutral. I feel that there is an implicit step I missed there.

Replies from: Xodarap
comment by Xodarap · 2013-07-30T23:30:57.620Z · LW(p) · GW(p)

I am very curious, however, as to why a lack of behavioral reaction to narcotics indicates that ant suffering is morally neutral.

The question of pain in insects is incredibly complicated, so please don't take my glib example as anything more than that.

But if ants don't have something analogous to opioids, then that would indicate that pain is never "bad" for them, which would be a (non-conclusive) indication that they don't suffer.

comment by MugaSofer · 2013-07-29T15:46:50.840Z · LW(p) · GW(p)

Maybe I was already mindkilled (vegetarian speaking), but it seems like a precisely appropriate term to use, given the content of this post.

What term would you prefer?

[Bonus points: if racism and speciesism were well-known errors of the past, would sexist!you object to the term "sexism" on the same grounds?]

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-29T18:52:51.096Z · LW(p) · GW(p)

Humanism, maybe. Yes.

Replies from: MugaSofer
comment by MugaSofer · 2013-07-29T22:15:47.606Z · LW(p) · GW(p)

That's taken, though ... but then it's been taken before and repurposed; it's such a catchy word with such lovely connotations.

comment by katydee · 2013-07-28T22:02:46.278Z · LW(p) · GW(p)

I would prefer to see posts like this in the Discussion section.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-28T22:04:43.254Z · LW(p) · GW(p)

May I ask why?

Replies from: katydee
comment by katydee · 2013-07-28T22:35:24.809Z · LW(p) · GW(p)

I think Main should be for posts that directly pertain to rationality. This post doesn't seem to do that.

That said, my standards for what belongs in main seem somewhat different from those of other users. For instance I think "The Robots, AI, and Unemployment Anti-FAQ" belongs in Discussion as well, and that post is not only in Main but promoted to boot.

Replies from: Lukas_Gloor, SaidAchmiz
comment by Lukas_Gloor · 2013-07-29T15:24:10.616Z · LW(p) · GW(p)

Since grandparent received so many upvotes, I'm going to explain my reasoning for posting in Main:

Rules of thumb:

Your post discusses core Less Wrong topics.

The material in your post seems especially important or useful.

[...]

(At least one of) LW's primary goal(s) is to get people thinking about far future scenarios to improve the world. LW is about rationality, but it is also about ethics. Whether anti-speciesism is especially important or useful is something that people have different opinions on, but the question itself is clearly important because it may lead to different/adjusted prioritizing in practice.

Replies from: katydee
comment by katydee · 2013-07-29T17:51:32.571Z · LW(p) · GW(p)

I disagree with the FAQ in that respect (among others-- see for instance my thoughts on the use of the term "tapping out"). My preference is that people only post to Main if their post discusses core Less Wrong topics, and maybe not even then.

comment by Said Achmiz (SaidAchmiz) · 2013-07-28T22:52:07.720Z · LW(p) · GW(p)

Upvoted for the "directly pertain to rationality" rule of thumb; I agree with that. That said, I thought that the Anti-FAQ was appropriate for Main.

Replies from: Larks
comment by Larks · 2013-07-29T12:49:16.955Z · LW(p) · GW(p)

The anti-FAQ was of much higher quality.

comment by Shmi (shminux) · 2013-07-28T20:28:20.874Z · LW(p) · GW(p)

A generic problem with this type of reasoning is some form of the repugnant conclusion. If you don't put a Schelling fence somewhere, you end up giving more moral weight to a large enough number of cockroaches, bacteria or viruses than to humans.

In actuality, different groups of people implicitly have different Schelling points and then argue whose Schelling point is morally right. A standard Schelling point, say, 100 years ago, was all humans or some subset of humans. The situation has gotten more complicated recently, with some including only humans, humans and cute baby seals, humans and dolphins, humans and pets, or just pets without humans, etc.

So a consequentialist question would be something like

Where does it make sense to put a boundary between caring and not caring, under what circumstances and for how long?

Note this is no longer a Schelling point, since no implicit agreement of any kind is assumed. Instead, one tests possible choices against some terminal goals, leaving morality aside.

Replies from: None, Xodarap, SaidAchmiz
comment by [deleted] · 2013-07-28T20:45:16.339Z · LW(p) · GW(p)

I feel like you're saying this:

"There are a great many sentient organisms, so we should discriminate against some of them"

Is this what you're saying?

EDIT: Sorry, I don't mean that bacteria or viruses are sentient. Still, my original question stands.

Replies from: shminux
comment by Shmi (shminux) · 2013-07-28T21:31:20.449Z · LW(p) · GW(p)

All I am saying is that one has to draw an arbitrary care/don't care boundary somewhere, and "human/non-human" is a rather common and easily determined Schelling point in most cases. It fails in some, like the intelligent pig example from the OP, but then every boundary fails on some example.

Replies from: None
comment by [deleted] · 2013-07-28T22:23:11.575Z · LW(p) · GW(p)

Where does sentience fail as a boundary?

Replies from: RomeoStevens
comment by RomeoStevens · 2013-07-29T09:33:40.499Z · LW(p) · GW(p)

If sentience isn't a boolean condition.

comment by Xodarap · 2013-07-28T20:42:26.752Z · LW(p) · GW(p)

you end up giving more moral weight to a large enough number of cockroaches, bacteria or viruses than to humans.

Why do you say that? Bacteria, viruses etc. seem to lack not just one, but all of the capacities A-H the OP mentioned.

comment by Said Achmiz (SaidAchmiz) · 2013-07-28T21:07:58.109Z · LW(p) · GW(p)

A generic problem with this type of reasoning is some form of the repugnant conclusion. If you don't put a Schelling fence somewhere, you end up giving more moral weight to a large enough number of cockroaches, bacteria or viruses than to humans.

Indeed. I've alluded to this before as "how many chickens would I kill/torture to save my grandmother?" The answer, of course, is N, where N may be any number.

This means that, if we start with basic (total) utilitarianism, we have to throw out at least one of the following:

  1. Additive aggregation of value.
  2. Valuing my grandmother a finite amount (as opposed to an infinite amount).
  3. Valuing a chicken a nonzero amount.

Throwing out #2 leads to incorrect results (it is not the case that I value my grandmother more than anything else — sorry, grandma). Throwing out #1 is possible, and I have serious skepticism about that one anyway... but it also leads to problems (don't I think that killing or torturing two people is worse than killing or torturing one person? I sure do!).

Throwing out #3 seems unproblematic.
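
To spell out the arithmetic behind this trilemma: under additive aggregation, any strictly positive per-chicken value multiplied by a large enough N eventually exceeds any finite value assigned to the grandmother, so keeping all three premises forces the trade at some N. Here is a minimal sketch of that arithmetic, using purely illustrative numbers of my own rather than figures from this thread:

```python
# Minimal sketch of the trilemma's arithmetic; all numbers are illustrative.

GRANDMOTHER = 1.0  # premise 2: grandmother's value is finite


def total(n_chickens, chicken_value):
    # premise 1: additive aggregation of value
    return n_chickens * chicken_value


# Premise 3: a chicken is worth some nonzero amount, however small.
chicken_value = 1e-9   # illustrative assumption
n = 10**10             # a big enough (but finite) number of chickens
assert total(n, chicken_value) > GRANDMOTHER

# With chickens valued at exactly zero, no N is ever enough, which is why
# rejecting premise 3 dissolves the conflict.
assert total(10**100, 0.0) < GRANDMOTHER
```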

Replies from: Vaniver, shminux, Xodarap, Armok_GoB
comment by Vaniver · 2013-07-28T21:58:42.451Z · LW(p) · GW(p)

The answer, of course, is N, where N may be any number. ... Throwing out #3 seems unproblematic.

Relatedly, you could choose to throw out your ability to assess N. When you say N could be any number, that looks to me like scope neglect. I don't have a good sense of what a billion chickens is like, or what a billionth chance of dying looks like, and so I don't expect my intuitions to give good answers in that region. If you ask the question as "how many chickens would I kill/torture to extend my grandmother's life by one second?", then if you do actually value chickens at zero then the answer will again be N, but that seems much less intuitive.

So it looks like an answer to the 'save' question that avoids the incorrect results is something like "I don't know how many, but I'm pretty sure it's more than a million."

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-28T22:07:28.902Z · LW(p) · GW(p)

If you ask the question as "how many chickens would I kill/torture to extend my grandmother's life by one second?", then if you do actually value chickens at zero then the answer will again be N, but that seems much less intuitive.

The answer is, indeed, still the same N.

Relatedly, you could choose to throw out your ability to assess N. When you say N could be any number, that looks to me like scope neglect.

I don't find scope neglect to be a serious objection here. It's certainly relevant in cases of inconsistencies, like the classic "how much would you pay to save a thousand / a million birds from oil slicks" scenario, but where is the inconsistency here? Scope neglect is a scaling error — what quantity is it you think I am scaling incorrectly?

The "scope neglect" objection also misconstrues what I am saying. When I say "I would kill/torture N chickens to save my grandmother", I am here telling you what I would, in fact, do. Offer me this choice right now, and I will make it. This is the input to the discussion. I have a preference for my grandmother's life over any N chickens, and this is a preference that I support on consideration — it is reflectively consistent.

For "scope neglect" to be a meaningful objection, you have to show that there's some contradiction, like if I would torture up to a million chickens to give my grandmother an extra day of life, but also up to a million to give her an extra year... or something to that effect. But there's no contradiction, no inconsistency.

Replies from: Vaniver, MTGandP
comment by Vaniver · 2013-07-29T00:21:01.509Z · LW(p) · GW(p)

Scope neglect is a scaling error — what quantity is it you think I am scaling incorrectly?

When I imagine sacrificing one chicken, it looks like a voodoo ritual or a few pounds of meat, worth maybe tens of dollars. When I imagine sacrificing a thousand chickens, it looks like feeding a person for several years, and maybe tens of thousand dollars. When I imagine sacrificing a million chickens, it looks like feeding a thousand people for several years, and maybe tens of millions of dollars. When I imagine sacrificing a billion chickens, it looks like feeding millions of people for several years, and a sizeable chunk of the US poultry industry. When I imagine sacrificing a trillion chickens, it looks like feeding the population of the US for a decade, and several times the global poultry industry. (I know this is in terms of their prey value, but since I view chickens as prey that's how I imagine them, not in terms of individual subjective experience.)

And that's only 1e12! There are lots of bigger numbers. What I meant by scope neglect was it looked to me like you took the comparison between one chicken and one human and rounded your impression of their relative values to 0, rather than trying to find a level where you're indifferent between them. When I imagine weighing one person against the global poultry industry, it's not obvious to me that one person is the right choice, and it feels to me that if it's not obvious, you can just increase the number of chickens.

One counterargument to this is "but chickens and humans are on different levels of moral value, and it's wrong to trade off a higher level for a lower level." I don't think that's a good approach to morality, and I got the impression that was not your approach since you were reluctant to throw out #2 (which many people who do endorse multi-level moralities are willing to do).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-29T00:47:21.104Z · LW(p) · GW(p)

I... don't see how your examples/imagery answer my question.

When I imagine weighing one person against the global poultry industry, it's not obvious to me that one person is the right choice, and it feels to me that if it's not obvious, you can just increase the number of chickens.

It is completely obvious to me. (I assume by "global poultry industry" you mean "that number of chickens", since if we literally eradicated global chicken production, lots of bad effects (on humans) would result.)

One counterargument to this is "but chickens and humans are on different levels of moral value, and it's wrong to trade off a higher level for a lower level." I don't think that's a good approach to morality, and I got the impression that was not your approach since you were reluctant to throw out #2 (which many people who do endorse multi-level moralities are willing to do).

Don't be so sure! Multi-level morality, by the way, does not necessarily mean that my grandmother occupies the top level all by herself. However, that's a separate discussion; I started this subthread from an assumption of basic utilitarianism.

Anyway, I think — with apologies — that you are still misunderstanding me. Take this:

What I meant by scope neglect was it looked to me like you took the comparison between one chicken and one human and rounded your impression of their relative values to 0, rather than trying to find a level where you're indifferent between them.

There is no level where I'd be indifferent between them. That's my point. Why would I try to find such a level? What moral intuition do you think I might have that would motivate me to try this?

Replies from: Vaniver
comment by Vaniver · 2013-07-29T01:49:13.411Z · LW(p) · GW(p)

Anyway, I think — with apologies — that you are still misunderstanding me.

Yes and no. I wasn't aware that you were using a multi-level morality, but agree with you that it doesn't obviously break and doesn't require infinite utilities in any particular level.

That said, my experience has been that every multi-level morality I've looked at hard enough has turned out to map to the real line, but because of measurement difficulties it looked like there were clusters of incomparable utilities. It is very hard to tell the difference between a chicken being worth 0 people and 1e-12 people and 1e-24 people, and so when someone says that it's 0 I don't take their confidence as informative. If they're an expert in decision science and eliciting this sort of information, then I do take it seriously, but I'm still suspicious that This Time It's Different.

Another big concern here is revealed preferences vs. stated preferences. Many people, when you ask them about it, will claim that they would not accept money in exchange for a risk to their life, but then in practice do so continually, e.g. at the level where they accept $10 in exchange for a millionth chance of dying. One interpretation is that they're behaving irrationally, but I think the more plausible interpretation is that they're acting rationally but talking irrationally. (Talking irrationally can be a rational act, as I talk about here.)
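
(For concreteness, the implicit exchange rate in that example, as a rough back-of-the-envelope figure rather than a claim about anyone's actual numbers:)

```latex
% Accepting $10 in exchange for a one-in-a-million chance of dying implies,
% at the margin, a valuation of roughly
\frac{\$10}{10^{-6}} = \$10{,}000{,}000 \quad \text{per statistical life.}
```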

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-29T02:18:47.973Z · LW(p) · GW(p)

Well, as far as revealed vs. stated preferences go, I don't think we have any way of subjecting my chicken vs. grandmother preference to a real-world test, so I suppose You'll Just Have To Take My Word For It. As for the rest...

It is very hard to tell the difference between a chicken being worth 0 people and 1e-12 people and 1e-24 people, and so when someone says that it's 0 I don't take their confidence as informative.

What would it mean for me to be mistaken about this? Are you suggesting that, despite my belief that I'd trade any number of chickens to save my grandmother, there's some situation we might encounter, some really large number of chickens, faced with which I would say: "Well, shit. I guess I'll take the chickens after all. Sorry, grandma"?

I find it very strange that you are taking my comments to be statements about which particular real number value I would assign to a single chicken. I certainly do not intend them that way. I intend them to be statements about what I would do in various situations; which choice, out of various sets of options, I would make.

Whether or not we can then transform those preferences into real-number valuations of single chickens, or sets of many chickens, is a question we certainly could ask, but the answer to that question is a conclusion that we would be drawing from the givens. That conclusion might be something like "my preferences do not coherently translate into assigning a real-number value to a chicken"! But even more importantly, we do not have to draw any conclusion, assign any values to anything, and it would still, nonetheless, be a fact about my preferences that I would trade any number of chickens for my grandmother. So it does not make any sense whatsoever to declare that I am mistaken about my valuation of a chicken, when I am not insisting on any such valuation to begin with.

Replies from: Vaniver
comment by Vaniver · 2013-07-29T03:31:42.011Z · LW(p) · GW(p)

What would it mean for me to be mistaken about this?

Basically, what you suggested, but generally it manifests in the other direction: instead of some really large number of chickens, it manifests as some really small chance of saving grandma.

I should also make clear that I'm not trying to convince you that you value chickens, but that it makes more sense to have real-valued utilities for decision-making than multi-level utility. This is mostly useful when thinking about death / lifespan extension and other sacred values, where refusing to explicitly calculate means that you're not certain the marginal value of additional expenditure will be equal across all possible means for expenditure. For this particular case, it's unlikely that you will ever come across a situation where the value system "grandma first, then chickens" will disagree with "grandma is worth a really big number of chickens," and separating the two will be unlikely to have any direct meaningful impact.
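
(The condition being gestured at here, stated in the standard textbook form; the symbols x_i and V are generic stand-ins, not a claim about any particular values:)

```latex
% If x_1, ..., x_n are dollar expenditures on the different possible means and
% V(x_1, ..., x_n) is total value, then an optimal allocation of a fixed budget satisfies
\frac{\partial V}{\partial x_1} = \frac{\partial V}{\partial x_2} = \dots = \frac{\partial V}{\partial x_n},
% i.e. the marginal value of an extra dollar is equal across all means of expenditure;
% refusing to calculate explicitly means never checking whether this even roughly holds.
```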

But I think the use of these considerations is to develop skills that are useful in other areas. If you are engaged in tradeoffs between secular and sacred values, then honing your skills here can give you more of what you hold sacred elsewhere. I also think it's important to cultivate a mentality where a 1e-12 chance of saving grandma feels different from a 1e-6 chance of saving grandma, rather than your mind just interpreting them both as "a chance of saving grandma."

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-29T03:54:09.731Z · LW(p) · GW(p)

Basically, what you suggested, but generally it manifests in the other direction: instead of some really large number of chickens, it manifests as some really small chance of saving grandma.

Any chance of saving my grandmother is worth any number of chickens.

I should also make clear that I'm not trying to convince you that you value chickens, but that it makes more sense to have real-valued utilities for decision-making than multi-level utility.

Well, ok. I am not committed to a multi-level system; I was only formulating a bit of skepticism. That being said, if we are using real-valued utilities, then we're back to either assigning chickens 0 value or abandoning additive aggregation. (Or perhaps even just giving up on utilitarianism as The One Unified Moral System. There are reasons to suspect we might have to do this anyway.)

For this particular case, it's unlikely that you will ever come across a situation where the value system "grandma first, then chickens" will disagree with "grandma is worth a really big number of chickens," and separating the two will be unlikely to have any direct meaningful impact.

Perhaps. But you yourself say:

But I think the use of these considerations is to develop skills that are useful in other areas. If you are engaged in tradeoffs between secular and sacred values, then honing your skills here can give you more of what you hold sacred elsewhere.

So I don't think I ought to just say "eh, let's call grandma's worth a googolplex of chickens and call it a day".

Replies from: CoffeeStain
comment by CoffeeStain · 2013-07-29T09:46:27.381Z · LW(p) · GW(p)

So I don't think I ought to just say "eh, let's call grandma's worth a googolplex of chickens and call it a day".

Why not? Being wrong about what ideally-solved-metaethics-SaidAchmiz would do isn't by itself disutility. Disutility is X dead grandmas, where X = N / googleplex.

If we are using real-valued utilities, then we're back to either assigning chickens 0 value or abandoning additive aggregation.

Why? I take it that for the set of all possible universe-states under my control, my ideal self could strictly order those states by preference, and then any real-value assignment of value to those states is just adding unneeded degrees of freedom. It's just that real values also happen to be (conveniently) strictly ordered and, when value is actually additive, produce proper orderings for as-yet-unconsidered universe-states.

As this comment points out, the additivity of the value of two events which have dependencies has no claim on their additivity when completely independent. Having two pillows isn't having one pillow twice.

Any chance of saving my grandmother is worth any number of chickens.

So I actually don't think you have to give this up to remain rational. Rationality is creating heuristics for the ideal version of yourself, a self which of course isn't ideal in any fundamental sense, but rather is ideal in whatever way you choose to define it. Let's call this your preferred self. You should create heuristics that cause you to emulate your preferred self, such that your preferred self would choose you out of any of your available options for doing metaethics, when applying you to the actual moral situations you'll have in your lifetime (or a weighted-by-probability integral over expected moral situations).

What I'm saying is that I wouldn't be surprised if that choice has you taking the Value(Chicken) = 0 heuristic. But I do think that the theory doesn't check out, that your preferred self only has theories that check out, and that the simplest explanation for how he forms strict orderings of universe states involves real-number assignment.

This all to say, it's not often we need to weigh the moral value of googleplex chickens over grandma, but if it ever came to that we should prefer to do it right.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-29T14:12:29.356Z · LW(p) · GW(p)

So I don't think I ought to just say "eh, let's call grandma's worth a googolplex of chickens and call it a day".

Why not? Being wrong about what ideally-solved-metaethics-SaidAchmiz would do isn't by itself disutility. Disutility is X dead grandmas, where X = N / googleplex.

Because, as you say:

This all to say, it's not often we need to weigh the moral value of googleplex chickens over grandma, but if it ever came to that we should prefer to do it right.

Indeed, and the right answer here is choosing my grandmother. (btw, it's "googolplex", not "googleplex")

If we are using real-valued utilities, then we're back to either assigning chickens 0 value or abandoning additive aggregation.

Why? I take it that for the set of all possible universe-states under my control, my ideal self could strictly order those states by preference, and then any real-value assignment of value to those states is just adding unneeded degrees of freedom.

Indeed; but...

It's just that real values also happen to be (conveniently) strictly ordered and, when value is actually additive, produce proper orderings for as-yet-unconsidered universe-states.

They do not, because if I value grandma at N and a chicken at M, where N > 0, M > 0, and N > M, then there exists some positive integer k for which kM > N. This means that for sufficiently many chickens, I would choose the chickens over my grandmother. That is the incorrect answer.

Something has to change. Setting M = 0 is easiest and most consistent with my moral intuitions, and leads to correct results in all choices involving humans. (Of course we might have other motivations for choosing a different path, such as abandoning real-valued utilities or abandoning additive aggregation.)
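
(A minimal sketch of the three options in play, with purely illustrative numbers: additive real-valued utilities, the M = 0 move, and one possible two-tier alternative. None of these is offered as the correct system.)

```python
import math

# Purely illustrative numbers; not a proposal for the "right" values.
N = 1.0      # hypothetical value of grandma
M = 1e-12    # hypothetical nonzero value of one chicken

# (1) Additive real-valued utilities: the Archimedean property bites.
k = math.ceil(N / M) + 1
assert k * M > N                       # some finite number of chickens outweighs grandma

# (2) Setting M = 0: no number of chickens ever adds up to grandma.
assert not (10**100 * 0.0 > N)

# (3) One two-tier alternative: compare (human value, chicken value) pairs
# lexicographically, so the chicken coordinate only ever breaks ties.
grandma_saved = (N, 0.0)
chickens_saved = (0.0, 10**100)
assert grandma_saved > chickens_saved  # Python compares tuples lexicographically
```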

What I'm saying is that I wouldn't be surprised if that choice has you taking the Value(Chicken) = 0 heuristic. But I do think that the theory doesn't check out, that your preferred self only has theories that check out, and that the simplest explanation for how he forms strict orderings of universe states involves real-number assignment.

Now here, I am not actually sure what you're saying. Could you clarify? What theory?

Replies from: CoffeeStain
comment by CoffeeStain · 2013-07-29T21:57:16.403Z · LW(p) · GW(p)

They do not, because if I value grandma at N and a chicken at M, where N > 0, M > 0, and N > M, then there exists some positive integer k for which kM > N. This means that for sufficiently many chickens, I would choose the chickens over my grandmother. That is the incorrect answer.

I do appreciate the willingness to shut up and do the impossible here. Your certainty that there is no number of chickens equal to the worth of your grandmother makes you believe you need to give up one of three plausible-seeming axioms, and you're not willing to think there isn't a consistent reconciliation.

My point about your preferred ethical self is that for him to be a formal agent that you wish to emulate, he is required to have a consistent reconciliation. The suggestion is that most people who claim M = 0, insofar as it relates to N, create inconsistencies elsewhere when trying to relate it to O, P, and Q. Inconsistencies which they as flawed agents are permitted to have, but which ideal agents aren't. The theory I refer to is the one that takes M = 0.

These are the inconsistencies that the multi-level morality people are trying to reconcile when they still wish to claim that they prefer a dying worm to a dying chicken. Suffice it to say that I don't think an ideal rational agent can reconcile them, but my other point was that our actual selves aren't required to (though we should acknowledge this).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-29T23:08:02.706Z · LW(p) · GW(p)

I see. I confess that I don't find your "preferred ethical self" concept to be very compelling (and am highly skeptical about your claim that this is "what rationality is"), but I'm willing to hear arguments. I suspect that those would be longer than should be posted deep in a tangential comment thread.

You shouldn't take me to have any kind of "theory that takes M = 0"; that is, IMO, a misleading way to talk about this. Setting M = 0 is merely the (apparently, at-first-glance) best resolution of a particular problem that arises when one starts with a certain set of moral intuitions and attempts to resolve them with a certain moral system (total utilitarianism). Does this resolution cause further issues? Maybe; it depends on other moral intuitions that we might have. Can we resolve them? Maybe; perhaps with a multi-tier valuation system, perhaps with something else.

My primary point, way back at the beginning of this comment thread, is that something has to give. I personally think that giving up nonzero valuation of chickens is the least problematic on its own, as it resolves the issue at hand, most closely accords with my other moral intuitions, and does not seem, at least at first glance, to create any new major issues.

Then again, I happen to think that we have other reasons to seriously consider giving up additive aggregation, especially over the real numbers. By the time we're done resolving all of our difficulties, we might end up with something that barely resembles the simple, straightforward total utilitarianism with real-number valuation that we started with, and that final system might not need to assign the real number 0 to the value of a chicken. Or it still might. I don't know.

(For what it's worth, I am indifferent between the worm and the chicken, but I would greatly prefer a Mac SE/30 to either of them.)

Replies from: CoffeeStain
comment by CoffeeStain · 2013-07-30T07:55:23.914Z · LW(p) · GW(p)

I suspect that those would be longer than should be posted deep in a tangential comment thread.

Yeah, probably. To be honest I'm still rather new to the rodeo here, so I'm not amazing at formalizing and communicating intuitions, which might just be a roundabout way of saying that you shouldn't listen to me :)

I'm sure it's been hammered to death elsewhere, but my best prediction for what side I would fall on, if I had all the arguments laid out, would be the hard-line CS theoretical approach, as I often do. It's probably not obvious why I think every proposed difficulty for additive aggregation would turn out to have problems. I would probably, annoyingly often, fall back on the claim that the particular case doesn't satisfy the criteria but that additive value still holds.

I don't think it'd be a lengthy list of criteria though. All you need is causal independence: the kind of independence that makes counterfactual (or probabilistic) worlds independent enough to be separable. You disvalue a situation where grandma dies with certainty equally with a situation where all of your 4 grandmas (they got all real busy after the legalization of gay marriage in their country) are subjected to a 25% likelihood of death. You do this because you value the possible worlds according to their likelihood, and you sum the values. My intuition is that refusing to also sum the values in analogous non-probabilistic circumstances would cause inconsistencies down the line, but I'm not sure.
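
(The arithmetic behind that equivalence, writing v for the value of one grandmother:)

```latex
% Expected disvalue of "one grandmother dies with certainty" versus
% "each of four grandmothers faces a 25% chance of death":
1 \times 1.00 \times v \;=\; 4 \times 0.25 \times v \;=\; v,
% i.e. the probability-weighted sum over possible worlds is the same in both cases.
```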

comment by MTGandP · 2013-07-29T03:25:00.392Z · LW(p) · GW(p)

Suppose you're walking down the street when you see a chicken trapped under a large rock. You can save it or not. If you save it, it costs you nothing except for your time. Would you save it?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-29T03:46:52.054Z · LW(p) · GW(p)

Maybe.

Realistically, it would depend on my mood, and any number of other factors.

Why?

Replies from: MTGandP
comment by MTGandP · 2013-07-29T04:34:47.382Z · LW(p) · GW(p)

If you would save the chicken, then you think its life is worth 10 seconds of your life, which means you value its life as about 1/200,000,000th of your life as a lower bound.
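
(The arithmetic, under the assumption of a remaining lifespan of roughly 65 years; the exact figure obviously varies:)

```latex
% Roughly 65 years of remaining life, in seconds:
65 \times 365.25 \times 24 \times 3600 \approx 2.05 \times 10^{9}\ \text{seconds},
% so ten seconds spent freeing the chicken corresponds to about
\frac{10}{2.05 \times 10^{9}} \approx \frac{1}{200{,}000{,}000}\ \text{of a life.}
```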

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-29T05:01:42.429Z · LW(p) · GW(p)

In your view, how much do I think the chicken's life is worth if I would either save it or not save it, depending on factors I can't reliably predict or control? If I would save it one day, but not save it the next? If I would save a chicken now, and eat a chicken later?

I don't take such tendencies to be "revealed preferences" in any strong sense if they are not stable under reflective equilibrium. And I don't have any belief that I should save the chicken.

Edit: Removed some stuff about tendencies, because it was actually tangential to the point.

comment by Shmi (shminux) · 2013-07-28T21:14:30.646Z · LW(p) · GW(p)

Throwing out #3 seems unproblematic.

It is problematic once you start fine-graining, exactly like in the dust specks/torture debate, where killing a chicken ~ dust speck and killing your grandma ~ torture. There is almost certainly an unbroken chain of comparables between the two extremes.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-28T21:23:25.974Z · LW(p) · GW(p)

For what it's worth, I also choose specks in specks/torture, and find the "chain of comparables" argument unconvincing. (I'd be happy to discuss this, but this is probably not the thread for it.)

That, however, is not all that relevant in practice: the human/nonhuman divide is wide (unless we decide to start uplifting nonhuman species, which I don't think we should); the smartest nonhuman animals (probably dolphins) might qualify for moral consideration, but we don't factory-farm dolphins (and I don't think we should), and chickens and cows certainly don't qualify; the question of which humans do or don't qualify is tricky, but that's why I think we shouldn't actually kill/torture them with total impunity (cf. bright lines, Schelling fences, etc.).

In short, we do not actually have to do any fine-graining. In the case where we are deciding whether to torture, kill, and eat chickens — that is, the actual, real-world case — my reasoning does not encounter any problems.

Replies from: shminux
comment by Shmi (shminux) · 2013-07-28T21:42:01.389Z · LW(p) · GW(p)

Do you assign any negative weight to a suffering chicken? For example, is it OK to simply rip off a leg from a live one and make a dinner, while the injured bird is writhing on the ground slowly bleeding to death?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-28T22:18:25.786Z · LW(p) · GW(p)

Sure. However, you raise what is in principle a very solid objection, and so I would like to address it.

Let's say that I would, all else being equal, prefer that a dog not be tortured. Perhaps I am even willing to take certain actions to prevent a dog from being tortured. Perhaps I also think that two dogs being tortured is worse than one dog being tortured, etc.

However, I am willing to let that dog, or a million dogs, or any number of dogs, be tortured to save my grandmother from the same fate.

What are we to make of this?

In that case, some component of our utilitarianism might have to be re-examined. Perhaps dogs have a nonzero value, and a lot of dogs have more value than only a few dogs, but no quantity of dogs adds up to one grandmother; but on the other hand, some things are worth more than one grandmother (two grandmothers? all of humanity?).

Real numbers do not behave this way. Perhaps they are not a sufficient number system for utilitarian calculations.

(Of course, it's possible to suppose that we could, if we chose, construct various hypotheticals (perhaps involving some complex series of bets) which would tease out some inconsistency in that set of valuations. That may be the case here, but nothing obvious jumps out at me.)

Replies from: habanero, DanArmak, shminux, CarlShulman
comment by habanero · 2013-07-29T17:05:09.709Z · LW(p) · GW(p)

However, I am willing to let that dog, or a million dogs, or any number of dogs, be tortured to save my grandmother from the same fate.

This sounds a bit like the dust speck vs. torture argument, where some claim that no number of dust specks could ever outweigh torture. I think that there we have to deal with scope insensitivity. On utilitarian aggregation, I recommend section V of the following paper. It shows why the alternatives are absurd. http://spot.colorado.edu/~norcross/2Dogmasdeontology.pdf

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-29T17:23:39.863Z · LW(p) · GW(p)

By the way, the dogs vs. grandma case differs in an important way from specks vs. torture:

The specks are happening to humans.

It is not actually inconsistent to choose TORTURE in specks/torture while choosing GRANDMA in dogs/grandma. All you have to do is value humans (and humans' utility) while not valuing dogs (or placing dogs on a "lower moral tier" than your grandmother/humans in general).

In other words, "do many specks add up to torture" and "do many dogs add up to grandma" are not the same question.

Replies from: habanero
comment by habanero · 2013-08-01T15:44:41.407Z · LW(p) · GW(p)

That seems a little bit ad hoc to me. Either you care about dogs (and then even the tiniest non-zero amount of caring should be enough for the argument) or you don't. People often come up with lexical constructs when they feel uncomfortable with the anticipation of having to change their behaviour. As a consequentialist, I figured out that I care a bit about dog welfare, and being aware of my scope insensitivity, I can see why some people dislike biting the bullet which results from simple additive reasoning. An option would be, though, to say that one's brain (and therefore one's moral framework) is only capable of a certain amount of caring for dogs, and that this variable is independent of the number of dogs. For me that wouldn't work out, though, for I care about the content of sentient experience in an additive way. But for the sake of argument: if a hyperintelligent alien (e.g. an AI) came to Earth, how would you propose the alien figure out which mechanisms in the universe should be of moral concern? What would you think of the agent’s morality if it discounted your welfare lexically?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-08-01T16:48:48.405Z · LW(p) · GW(p)

That seems a little bit ad hoc to me. Either you care about dogs (and then even the tiniest non-zero amount of caring should be enough for the argument) or you don't.

Eliezer handled this sort of objection in Newcomb's Problem and Regret of Rationality:

You can't tell me, first, that above all I must conform to a particular ritual of cognition, and then that, if I conform to that ritual, I must change my morality to avoid being Dutch-booked. Toss out the losing ritual; don't change the definition of winning.

What you are doing here is insisting that I conform to your ritual of cognition (i.e. total utilitarianism with real-number valuation and additive aggregation). I see no reason to accede to such a demand.

The following are facts about what I do and don't care about:

1) All else being equal, I prefer that a dog not be tortured.
2) All else being equal, I prefer that my grandmother not be tortured.
3) I prefer any number of dogs being tortured to my grandmother being tortured.
4 through ∞) Some other stuff about my preferences, skipped for brevity.

#s 2 and 3 are very strong preferences. #1 is less so.

Now I want to find a moral calculus that captures those facts. You, on the other hand, are telling me that, first, I must accept your moral calculus, and then, that if I do so, I must toss out one of the aforementioned preferences.

I decline to do either of those things. (As Eliezer says in the above link: The utility function is not up for grabs.)

But for the sake of argument: if a hyperintelligent alien (e.g. an AI) came to Earth, how would you propose the alien figure out which mechanisms in the universe should be of moral concern?

I don't know. This is the kind of thing that demonstrates why we need FAI theory and CEV.

What would you think of the agent’s morality if it discounted your welfare lexically?

I would think that its morality is different from mine. Also, I would be sad, because presumably such a morality on the AI's part would result in bad things for me. Your point?

Replies from: habanero
comment by habanero · 2013-08-01T18:35:43.425Z · LW(p) · GW(p)

Ok, let's do some basic friendly AI theory: Would a friendly AI discount the welfare of "weaker" beings such as you and me (compared to this hyper-agent) lexically? Could that possibly be an FAI? If not, then I think we should also rethink our moral behaviour towards weaker beings in our game here, for our decisions can correspondingly result in bad things for them.

My bad about the ritual. Thanks. Out of interest about your preferences: Imagine the grandmother and the dog next to each other. A perfect scientist starts to exchange pairs of atoms (let's assume here that both individuals contain the same number of atoms) so that the grandmother more and more transforms into the dog (of course there will be several weird intermediary stages). Because the scientist knows his experiment very well, none of the objects will die; in the end it'll look like the two objects changed places. At which point does the grandmother stop counting lexically more than the dog? Sometimes continuity arguments can be defeated by saying: "No, I don't draw an arbitrary line; I adjust gradually, so that in the beginning I care a lot about the grandmother and in the end just very little about the remaining dog." But I think that this argument doesn't work here, for we are dealing with a lexical prioritization. How would you act in such a scenario?

Replies from: Jiro, army1987, SaidAchmiz
comment by Jiro · 2013-08-01T22:34:01.104Z · LW(p) · GW(p)

You can ask the same question with the grandmother turning into a tree instead of into a dog.

comment by A1987dM (army1987) · 2013-08-02T12:25:30.271Z · LW(p) · GW(p)

A perfect scientist starts to exchange pairs of atoms (let's assume here that both individuals contain the same number of atoms) so that the grandmother more and more transforms into the dog (of course there will be several weird intermediary stages).

Identity isn't in specific atoms. The effect of swapping a carbon atom in the grandma with a carbon atom in the dog is none at all.

comment by Said Achmiz (SaidAchmiz) · 2013-08-01T23:07:58.296Z · LW(p) · GW(p)

Jiro's response shows one good reason why I don't find that thought experiment very interesting. Another obvious reason is its extreme implausibility and, I strongly suspect, actual incoherence (given what we know about physics and biology). I think I can safely say "I have no idea what I would prefer", much like Eliezer finds no reason to answer how he would explain his arm being turned into a blue tentacle, and not have that be counted against me.

On to FAI theory:

Would a friendly AI discount the welfare of "weaker" beings such as you and me (compared to this hyper-agent) lexically? Could that possibly be an FAI?

By definition, it would not, because if it did, then it would be an Unfriendly AI.

If not, then I think we should also rethink our moral behaviour towards weaker beings in our game here, for our decisions can correspondingly result in bad things for them.

How do you get from facts about the behavior of an FAI to claims about how we should act? I spy one of those pesky "is-ought" transitions that bedeviled Hume!

Corollary: why should we care that our behavior results in bad things for animals? Isn't that the question in the first place, and doesn't your statement beg said question?

comment by Said Achmiz (SaidAchmiz) · 2013-07-29T17:20:34.894Z · LW(p) · GW(p)

As I've said elsewhere in this thread, I also choose SPECKS in specks/torture. As for the paper, I will read it when I have time, and try to get back to you with my thoughts.

Edit: And see this thread for a discussion of whether scope neglect applies to my views.

comment by DanArmak · 2013-07-28T22:40:08.498Z · LW(p) · GW(p)

Real numbers do not behave this way. Perhaps they are not a sufficient number system for utilitarian calculations.

Hence this recent post on surreal utilities.

comment by Shmi (shminux) · 2013-07-28T22:59:09.570Z · LW(p) · GW(p)

Real numbers do not behave this way. Perhaps they are not a sufficient number system for utilitarian calculations.

My suspicion is that what has to give is the assumption of unlimited transitivity in VNM, but I never bothered to flesh out the details.

Replies from: pragmatist
comment by pragmatist · 2013-07-30T14:55:41.836Z · LW(p) · GW(p)

Actually, I believe it's the continuity axiom that rules out lexicographic preferences.
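
(A sketch of why, using a generic two-tier example; nothing here is specific to chickens and grandmothers.)

```latex
% VNM continuity: if A \succ B \succ C, then there is some p in (0,1) with
%   pA + (1-p)C \sim B.
% Lexicographic counterexample on two tiers (the first coordinate dominates):
%   A = (1,0), B = (0,1), C = (0,0), so A \succ B \succ C,
% but for every p in (0,1):
%   pA + (1-p)C = (p,0) \succ (0,1) = B,
% so no mixture of A and C is ever indifferent to B, and continuity fails.
```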

Replies from: shminux
comment by Shmi (shminux) · 2013-07-30T17:56:33.300Z · LW(p) · GW(p)

I examined this one, too, but the continuity axiom intuitively makes sense for comparables, except possibly in cases of extreme risk aversion. I am leaning more toward abandoning the transitivity chain when the options are too far apart. Something like: the comparison A > B carries an uncertainty that increases with the chain length between A and B, or with some other quantifiable measure of distance.

comment by CarlShulman · 2013-07-28T22:36:44.106Z · LW(p) · GW(p)

[Removed.]

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-28T22:50:21.667Z · LW(p) · GW(p)

Hours you spend helping dogs are hours you could have spent helping humans, e.g. having more money is associated with longer life.

This point is of course true, hence my "all else being equal" clause. I do not actually spend any time helping dogs, for pretty much exactly the reasons you list: there are matters of human benefit to attend to, and dogs are strictly less important.

Your last paragraph is mostly moot, since the behavior you allude to is not at all my actual behavior, but I would like to hear a bit more about the behavior model you refer to. (A link would suffice.)

I'm not entirely sure what the relevance of the speed limit example is.

comment by Xodarap · 2013-07-28T21:39:26.998Z · LW(p) · GW(p)

The problem with throwing out #3 is you also have to throw out:

(4) How we value a being's moral worth is a function of their abilities (or other faculties that in some way relate to pain, e.g. the abilities A-G listed above)

Which is a rather nice proposition.

Edit: As Said points out, this should be:

(4) How we value a being's pain is a function of their ability to feel pain (or other faculties that in some way relate to pain, e.g. the abilities A-G listed above)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-28T21:54:21.362Z · LW(p) · GW(p)

You don't, actually. For example, the following is a function:

Let a be a variable representing the abilities of a being. Let E(a) be the ethical value of a being with abilities a. The domain is the nonnegative reals, the range is the reals. Let H be some level of abilities that we have chosen to identify as "human-level abilities". We define E(a) thus:

a < H: E(a) = 0.
a ≥ H: E(a) = f(a), where f(x) is some other function of our choice.
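
(The same definition transcribed directly into code; H and f are placeholders we are free to choose, not concrete proposals.)

```python
# Direct transcription of the piecewise definition above.
# H and f are whatever we choose; the values here are placeholders only.
H = 1.0                       # hypothetical "human-level abilities" threshold

def f(a):
    return a                  # stand-in for "some other function of our choice"

def E(a):
    """Ethical value of a being with ability level a (a >= 0)."""
    return 0.0 if a < H else f(a)

assert E(0.3) == 0.0          # below the threshold: no ethical weight
assert E(2.0) == f(2.0)       # at or above the threshold: weighted by f
```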

Replies from: Xodarap
comment by Xodarap · 2013-07-28T22:10:00.685Z · LW(p) · GW(p)

Fair enough. I've updated my statement:

(4) How we value a being's pain is a function of their ability to feel pain (or other faculties that in some way relate to pain).

Otherwise we could let H be "maleness" and justify sexism, etc.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-28T22:25:41.629Z · LW(p) · GW(p)

Uh, would you mind editing your statement back, or adding a note about what it said before? Otherwise I am left looking like a crazy person, spouting non sequiturs ;) Edit: Thanks!

Anyway, your updated statement is no longer vulnerable to my objection, but neither is it particularly "nice" anymore (that is, I don't endorse it, and I don't think most people here who take the "speciesist" position do either).

(By the way, letting H be "maleness" doesn't make a whole lot of sense. It would be very awkward, to say the least, to represent "maleness" as some nonnegative real number; and it assumes that the abilities captured by a are somehow parallel to the gender spectrum; and it would make it so that we value male chickens but not human women; and calling "maleness" a "level of abilities" is pretty weird.)

Replies from: Xodarap
comment by Xodarap · 2013-07-28T23:37:29.325Z · LW(p) · GW(p)

Haha, sure, updated.

But why don't you think it's "nice" to require abilities to be relevant? If you feel pain more strongly than others do, then I care more about when you're in pain than when others are in pain.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-28T23:48:36.932Z · LW(p) · GW(p)

I probably[1] do as well...

... provided that you meet my criteria for caring about your pain in the first place — which criteria do not, themselves, have anything directly to do with pain. (See this post).

[1] Well, at first glance. Actually, I'm not so sure; I don't seem to have any clear intuitions about this in the human case — but I definitely do in the sub-human case, and that's what matters.

Replies from: Xodarap
comment by Xodarap · 2013-07-30T12:04:45.334Z · LW(p) · GW(p)

Well, if you follow that post far enough you'll see that the author thinks animals feel something that's morally equivalent to pain, s/he just doesn't like calling it "pain".

But assuming you genuinely don't think animals feel something morally equivalent to pain, why? That post gives some high level ideas, but doesn't list any supporting evidence.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-30T14:59:22.993Z · LW(p) · GW(p)

But assuming you genuinely don't think animals feel something morally equivalent to pain, why?

I had a longer response typed out, about what properties make me assign moral worth to entities, before I realized that you were asking me to clarify a position that I never took.

I didn't say anything about animals not feeling pain (what does "morally equivalent to pain" mean?). I said I don't care about animal pain.

... the more I write this response, the more I want to ask you to just reread my comment. I suspect this means that I am misunderstanding you, or in any case that we're talking past each other.

Replies from: Xodarap
comment by Xodarap · 2013-07-30T23:44:44.538Z · LW(p) · GW(p)

I apologize for the confusion. Let me attempt to summarize your position:

  1. It is possible for subjectively bad things to happen to animals
  2. Despite this fact, it is not possible for objectively bad things to happen to animals

Is that correct? If so, could you explain what "subjective" and "objective" mean here - usually, "objective" just means something like "the sum of the subjective", in which case #1 trivially contradicts #2, which was the source of my confusion.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-30T23:58:08.881Z · LW(p) · GW(p)

I don't know what "subjective" and "objective" mean here, because I am not the one using that wording.

What do you mean by "subjectively bad things"?

comment by Armok_GoB · 2013-07-29T22:23:40.535Z · LW(p) · GW(p)

My intuition here is solid to an hilariously unjustified degree on "10^20".

comment by Vaniver · 2013-07-28T21:49:11.258Z · LW(p) · GW(p)

None of the above criteria except (in some empirical cases) H imply that human infants or late stage demented people should be given more ethical consideration than cows, pigs or chickens.

This strikes me as a very impatient assessment. The human infant will turn into a human, and the piglet will turn into a pig, and so down the road A through E will suggest treating them differently.

Similarly, the demented can be given the reverse treatment (though it works differently); they once deserved moral standing, and thus are extended moral standing because the extender can expect that when their time comes, they will be treated by society in about the same way as society treated its elders when they were young. (This mostly falls under B, except the reciprocation is not direct.)

(Looking at the comments, Manfred makes a similar argument more vividly over here.)

Replies from: Lukas_Gloor, davidpearce, DxE, Xodarap
comment by Lukas_Gloor · 2013-07-28T21:56:28.315Z · LW(p) · GW(p)

If we use cognitive enhancements on animals, we can turn them into highly intelligent, self-aware beings as well. And the argument from potentiality would also prohibit abortion or experimentation on embryos. I was thinking about including the argument from potentiality, but then I didn't because the post is already long and because I didn't want to make it look like I was just "knocking down a very weak argument or two". I should have used a qualifier though in the sentence you quoted, to leave room for things I hadn't considered.

Replies from: Vaniver
comment by Vaniver · 2013-07-29T01:59:39.946Z · LW(p) · GW(p)

If we use cognitive enhancements on animals, we can turn them into highly intelligent, self-aware beings as well.

And then arguments A through E will not argue for treating the enhanced animals differently from humans.

And the argument from potentiality would also prohibit abortion or experimentation on embryos.

It would make the difference between abortion and infanticide small. It does seem to me that the arguments for allowing abortion but not allowing infanticide are weak and the most convincing one hinges on legal convenience.

I was thinking about including the argument from potentiality, but then I didn't because the post is already long and because I didn't want to make it look like I was just "knocking down a very weak argument or two".

I think this is a hazard for any "Arguments against X" post; the reason X is controversial is generally because there are many arguments on both sides, and an argument that seems strong to one person seems weak to another.

Replies from: threewestwinds
comment by threewestwinds · 2013-07-30T01:29:37.653Z · LW(p) · GW(p)

What level of "potential" is required here? A human baby has a certain amount of potential to reach whatever threshold you're comparing it against - if it's fed, kept warm, not killed, etc. A pig also has a certain level of potential - if we tweak its genetics.

If we develop AI, then any given pile of sand has just as much potential to reach "human level" as an infant. I would be amused if improved engineering knowledge gave beaches moral weight (though not completely opposed to the idea).

Your proposed category - "can develop to contain morally relevant quality X" - tends to fail on the same edge cases as whatever morally relevant quality it's replacing.

Replies from: Vaniver
comment by Vaniver · 2013-07-30T01:57:14.125Z · LW(p) · GW(p)

What level of "potential" is required here? A human baby has a certain amount of potential to reach whatever threshold you're comparing it against - if it's fed, kept warm, not killed, etc. A pig also has a certain level of potential - if we tweak its genetics.

I have given a gradualist answer to every question related to this topic, and unsurprisingly I will not veer from that here. The value of the potential is inversely proportional to the difficulty involved in realizing that potential, just as the value of oil in the ground depends on what lies between you and it.

comment by davidpearce · 2013-07-29T10:18:42.903Z · LW(p) · GW(p)

Vaniver, do human infants and toddlers deserve moral consideration primarily on account of their potential to become rational adult humans? Or are they valuable in themselves? Young human children with genetic disorders are given love, care and respect - even if the nature of their illness means they will never live to see their third birthday. We don't hold their lack of "potential" against them. Likewise, pigs are never going to acquire generative syntax or do calculus. But their lack of cognitive sophistication doesn't make them any less sentient.

Replies from: Vaniver, MixedNuts, MugaSofer
comment by Vaniver · 2013-07-29T10:51:12.807Z · LW(p) · GW(p)

Vaniver, do human infants and toddlers deserve moral consideration primarily on account of their potential to become rational adult humans? Or are they valuable in themselves?

My intuitions say the former. I would not be averse to a quick end for young human children who are not going to live to see their third birthday.

But their lack of cognitive sophistication doesn't make them any less sentient.

Agreed, mostly. (I think it might be meaningful to refer to syntax or math as 'senses' in the context of subjective experience, and I suspect that abstract reasoning and subjective sensation of all emotions, including pain, are negatively correlated. The first weakly points towards valuing their experience less, but the second strongly points towards valuing their experience more.)

Replies from: davidpearce
comment by davidpearce · 2013-07-29T11:56:58.081Z · LW(p) · GW(p)

Vaniver, you say that you wouldn't be averse to a quick end for young human children who are not going to live to see their third birthday. What about intellectually handicapped children with potentially normal lifespans whose cognitive capacities will never surpass those of a typical human toddler or mature pig?

Replies from: Vaniver
comment by Vaniver · 2013-07-29T20:37:28.812Z · LW(p) · GW(p)

What about intellectually handicapped children with potentially normal lifespans whose cognitive capacities will never surpass those of a typical human toddler or mature pig?

I'm not sure what this would look like, actually. The first thing that comes to mind is Down's Syndrome, but the impression I get is that that's a much smaller reduction in cognitive capacity than the one you're describing. The last time I considered that issue, I favored abortion in the presence of a positive amniocentesis test for Down's, and I suspect that the more extreme the reduction, the easier it would be to come to that conclusion.

I hope you don't mind that this answers a different question than the one you asked: I think there are significant (practical, if not also moral) differences between gamete selection, embryo selection, abortion, infanticide, and execution of adults (sorted from easiest to justify to most difficult to justify). I don't think execution of cognitively impaired adults would be justifiable in the presence of modern American economic constraints on grounds other than danger posed to others.

comment by MixedNuts · 2013-08-05T06:44:47.358Z · LW(p) · GW(p)

Historically, we have dismissed very obviously sapient people as lacking moral worth (people with various mental illnesses and disabilities, and even the freaking Deaf). Since babies are going to have whatever-makes-them-people at some point, it may be more likely that they already have it and we don't notice, rather than they haven't yet. That's why I'm a lot iffier about killing babies and mentally disabled humans than pigs.

comment by MugaSofer · 2013-07-29T15:55:59.627Z · LW(p) · GW(p)

Speaking as a vegetarian for ethical reasons ... yes. That's not to say they don't deserve some moral consideration based on raw brainpower/sentience and even a degree of sentimentality, of course, but still.

comment by DxE · 2013-07-29T21:09:14.562Z · LW(p) · GW(p)

My sperm has the potential to become human. When I realized almost all of them were dying because of my continued existence, I decided that I would have to kill myself. It was the only rational thing to do.

Replies from: Vaniver
comment by Vaniver · 2013-07-29T23:58:58.857Z · LW(p) · GW(p)

My sperm has the potential to become human.

It seems to me there is a significant difference between requiring an oocyte to become a person and requiring sustenance to become a person. I think about half of zygotes survive the pregnancy process, but almost all sperm don't turn into people.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-30T00:11:28.355Z · LW(p) · GW(p)

Would this difference disappear if we developed the technology to turn millions of sperm cells into babies?

Replies from: dspeyer, Vaniver
comment by dspeyer · 2013-08-05T02:29:31.274Z · LW(p) · GW(p)

Doesn't our current cloning technology allow us to turn any ordinary cell into a baby, albeit one with aging-related diseases?

comment by Vaniver · 2013-07-30T01:48:13.276Z · LW(p) · GW(p)

Would this difference disappear if we developed the technology to turn millions of sperm cells into babies?

Probably, but in such a world, I don't think human life would be scarce, and I think that the value of human life would plummet accordingly. They would still represent a significant time and capital investment, and so be more valuable than the em case, but I think that people would be seen as much more replaceable.

It is possible that human reproduction is horrible by many moral standards which seem reasonable. I think it's more convenient to jettison those moral standards than reshape reproduction, but one could imagine a world where people were castrated / had oophorectomies to prevent gamete production, with reproduction done digitally from sequenced genomes. It does not seem obviously worse than our world, except that it seems like a lot of work for minimal benefit.

comment by Xodarap · 2013-07-28T22:02:00.909Z · LW(p) · GW(p)

Is it possible to create some rule like this? Yeah, sure.

The problem is that you have to explain why that rule is valid.

If two babies are being tortured and one will die tomorrow but the other will grow into an adult, your rule would claim that we should only stop one torture, and it's not clear why, since their phenomenal pain is identical.

Replies from: Vaniver, DanArmak, MugaSofer, army1987
comment by Vaniver · 2013-07-29T00:06:00.492Z · LW(p) · GW(p)

The problem is that you have to explain why that rule is valid.

It comes from valuing future world trajectories, rather than just valuing the present. I see a small difference between killing a fetus before delivery and an infant after delivery, and the difference I see is roughly proportional to the amount of time between the two (and the probability that the fetus will survive to become the infant).

These sorts of gradual rules seem to me far more defensible than sharp gradations, because the sharpness in the rule rarely corresponds to a sharpness in reality.

Replies from: MugaSofer, OnTheOtherHandle
comment by MugaSofer · 2013-07-29T15:57:38.060Z · LW(p) · GW(p)

What about a similar gradual rule for varying sentience levels of animal?

Replies from: Vaniver
comment by Vaniver · 2013-07-29T20:40:01.047Z · LW(p) · GW(p)

What about a similar gradual rule for varying sentience levels of animal?

A quantitative measure of sentience seems much more reasonable than a binary measure. I'm not a biologist, though, and so don't have a good sense of how sharp the gradations of sentience in animals are; I would naively expect basically every level of sentience from 'doesn't have a central nervous system' to 'beyond humans' to be possible, but don't know if there are bands that aren't occupied for various practical reasons.

Replies from: Xodarap
comment by Xodarap · 2013-07-30T12:08:52.588Z · LW(p) · GW(p)

I don't think anyone is advocating a binary system. No one is supporting voting rights for pigs, for example.

comment by OnTheOtherHandle · 2013-07-31T20:27:44.138Z · LW(p) · GW(p)

While sliding scales may more accurately represent reality, sharp gradations are the only way we can come up with a consistent policy. Abortion especially is a case where we need a bright line. The fact that we have two different words (abortion and infanticide) for what amounts to a difference of a couple of hours is very significant. We don't want to let absolutely everyone use their own discretion in difficult situations.

Most policy arguments are about where to draw the bright line, not about whether we should adopt a sliding scale instead, and I think that's actually a good idea. Admitting that most moral questions fall into a gray area is more likely to give your opponent ammunition to twist your moral views than it is to make your own judgment more accurate.

comment by DanArmak · 2013-07-28T22:30:44.425Z · LW(p) · GW(p)

Some people value the future-potential of things and even give them moral value in cases when the present-time precursor or cause clearly has no moral status of its own. This corresponds to many people's moral intuitions, and so they don't need to explain why this is valid.

Replies from: Xodarap
comment by Xodarap · 2013-07-28T23:37:15.451Z · LW(p) · GW(p)

This corresponds to many people's moral intuitions, and so they don't need to explain why this is valid.

If you believe the sole justification for a moral proposition is that you think it's intuitively correct, then no one is ever wrong, and these types of articles are rather pointless, no?

Replies from: DanArmak
comment by DanArmak · 2013-07-29T10:42:45.867Z · LW(p) · GW(p)

I'm a moral anti-realist. I don't think there's a "true objective" ethics out there written into the fabric of the Universe for us to discover.

That doesn't mean there is no such thing as morals, or that debating them is pointless. Morals are part of what we are, and we perceive them as moral intuitions. Because we (humans) are very similar to one another, our moral intuitions are also fairly similar, and so it makes sense to discuss morals, because we can influence one another, change our minds, better understand each other, and come to agreement or trade values.

Nobody is ever "right" or "wrong" about morals. You can only be right or wrong about questions of fact, and the only factual, empirical thing about morals is what moral intuitions some particular person has at a point in time.

comment by MugaSofer · 2013-07-29T15:58:30.643Z · LW(p) · GW(p)

If we can only stop one, sure. If we could stop both, why not do so?

comment by A1987dM (army1987) · 2013-07-29T14:18:14.916Z · LW(p) · GW(p)

If Alice bets $10,000 against $1 on heads and Bob bets $10,000 against $1 on tails, they're both idiots, even though only one of them will lose.

comment by Lumifer · 2013-07-29T20:14:27.570Z · LW(p) · GW(p)

We can imagine a continuous line-up of ancestors, always daughter and mother, from modern humans back to the common ancestor of humans and, say, cows, and then forward in time again to modern cows. How would we then divide this line up into distinct species? Morally significant lines would have to be drawn between mother and daughter, but that seems absurd!

That's a common fallacy. Let me illustrate:

The notions of hot and cold water are nonsensical. The water temperature is continuous from 0C to 100C. How would you divide this into distinct areas? You would have to draw a line between neighboring values different by tiny fractions of a degree, but that seems absurd!

Replies from: Lukas_Gloor, drnickbone, Carinthium, dspeyer
comment by Lukas_Gloor · 2013-07-30T02:59:39.681Z · LW(p) · GW(p)

I'm not the one arguing for dividing this up into distinct areas; my whole point was to just look at the relevant criteria and nothing else. If the relevant criterion is temperature, you get a gradual scale for your example. If it is sentience, you have to look at each individual animal separately and ignore species boundaries for that.

Replies from: Lumifer
comment by Lumifer · 2013-07-30T04:20:35.437Z · LW(p) · GW(p)

I'm not the one arguing for dividing this up into distinct areas

Right, you're the one arguing for complete continuity in the species space and lack of boundaries between species. Similar to the lack of boundary between cold and hot water.

you have to look at each individual animal separately and ignore species boundaries for that.

I'm confused. You seem to think it's useful to sit by an anthill and test each individual ant for sentience..?

Replies from: Creutzer
comment by Creutzer · 2013-07-31T06:01:24.259Z · LW(p) · GW(p)

I'm confused. You seem to think it's useful to sit by an anthill and test each individual ant for sentience..?

I think "animal" was used in the sense of "kind of animal" here.

comment by drnickbone · 2013-07-30T18:04:23.270Z · LW(p) · GW(p)

For a morally relevant example, it is quite absurd to suppose that humans aged 18 years and 0 days are mature enough to vote, whereas humans aged 17 years and 364 days are not mature enough. So voting ages are morally unacceptable?

Ditto: ages for drinking alcohol, sexual consent, marriage, joining the armed services etc.

Replies from: Carinthium
comment by Carinthium · 2013-08-04T06:44:06.321Z · LW(p) · GW(p)

Actually, there is a case to say that they are. Discrimination by category membership, instead of on a spectrum, means that candidates which have more merit are passed over in favor of ones with lesser merit; particularly in the case of species, this can be problematic. The right of a person to be judged on their merits, if asked about in the abstract, would be accepted.

The only counter-case I can think of is to say that society simply does not have the resources to discriminate (since discrimination it is) more precisely. However, even this does not entirely work out, as within limits society could easily improve its classification methods to better allow for unusual cases.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-04T07:33:37.175Z · LW(p) · GW(p)

The main advantage of simple discrimination rules is that they are less subject to Goodhart's law.

comment by Carinthium · 2013-08-04T06:39:06.250Z · LW(p) · GW(p)

If you're going to say that "hot" and "cold" are absolute things rather than continuous on a spectrum, yes. Similarly, it is absurd to say that species is an absolute thing rather than an arbitrary system of classification imposed on various organisms which fit into types broadly at best.

comment by dspeyer · 2013-08-05T02:14:39.025Z · LW(p) · GW(p)

The usual solution involving water temperature is to have levels of suitability.

I want to shower in hot water, not cold water. Absurd? Not really. Just simplified. In fact, the joy I will gain from a shower is a continuous function of water temperature with a peak somewhere near 45C. The first formulation just approximated this with a piecewise linear function for convenience.

Carrying the analogy back, we can propose that the moral weight of suffering is proportional to the sentience of the sufferer. Estimating degrees of sentience now becomes important. ISTR that research review boards have stricter standards for primates than rodents, and rodents than insects, so apparently this isn't a completely strange idea.
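
(A minimal sketch of that proposal; the sentience estimates below are placeholders, not research findings.)

```python
# Weight suffering by an estimated degree of sentience (placeholder numbers only).
sentience = {"insect": 0.01, "rodent": 0.3, "primate": 0.9, "human": 1.0}

def moral_weight(suffering, kind):
    """Moral weight of a given amount of suffering, scaled by estimated sentience."""
    return suffering * sentience[kind]

# The same raw suffering counts for more, the higher the sentience estimate:
assert moral_weight(1.0, "insect") < moral_weight(1.0, "rodent") < moral_weight(1.0, "primate")
```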

comment by [deleted] · 2013-07-28T20:36:13.184Z · LW(p) · GW(p)

"If all nonhumans truly weren't sentient, then obviously singling out humans for the sphere of moral concern would not be speciesist."

David Pearce sums up antispeciesism excellently saying:

"The antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect."

Replies from: CarlShulman, Larks, SaidAchmiz, Richard_Kennaway
comment by CarlShulman · 2013-07-29T10:33:29.977Z · LW(p) · GW(p)

sums up antispeciesism excellently saying: "The antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect."

If one takes "other things being equal" very seriously that could be quite vacuous, since there are so many differences in other areas, e.g. impact on society and flow-through effects, responsiveness of behavior to expected treatment, reciprocity, past agreements, social connectedness, preferences, objective list welfare, even species itself...

The substance of the claim has to be about exactly which things need to be held equal, and which can freely vary without affecting desert.

comment by Larks · 2013-07-31T12:25:22.551Z · LW(p) · GW(p)

"The antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect."

Any speciesist is happy to agree with that. She simply thinks that species is one of the things that has to be equal.

Replies from: davidpearce
comment by davidpearce · 2013-07-31T13:21:46.321Z · LW(p) · GW(p)

Larks, all humans, even anencephalic babies, are more sentient than all Anopheles mosquitoes. So when human interests conflict irreconcilably with the interests of Anopheles mosquitoes, there is no need to conduct a careful case-by-case study of their comparative sentience. Simply identifying species membership alone is enough. By contrast, most pigs are more sentient than some humans. Unlike the antispeciesist, the speciesist claims that the interests of the human take precedence over the interests of the pig simply in virtue of species membership. (cf. http://www.dailymail.co.uk/news/article-2226647/Nickolas-Coke-Boy-born-brain-dies-3-year-miracle-life.html - heart-warming, yes, but irrational altruism, by antispeciesist criteria at any rate.) I try and say a bit more (without citing the Daily Mail) here: http://ieet.org/index.php/IEET/more/pearce20130726

Replies from: Larks
comment by Larks · 2013-07-31T15:34:55.868Z · LW(p) · GW(p)

I don't see how this is relevant to my argument. I'm just pointing out that your definition doesn't track the concept you (probably) have in mind; I wasn't saying anything empirical* at all.

*other than about the topology of concept-space.

Replies from: davidpearce
comment by davidpearce · 2013-07-31T16:21:42.106Z · LW(p) · GW(p)

Larks, by analogy, could a racist acknowledge that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect, but race is one of the things that has to be equal? If you think the "other things being equal" caveat dilutes the definition of speciesism so it's worthless, perhaps drop it - I was just trying to spike some guns.

Replies from: Larks
comment by Larks · 2013-08-01T11:52:19.548Z · LW(p) · GW(p)

If we drop the caveat, anti-speciesism is obviously false. For example, moral, successful people deserve more respect than immoral unsuccessful people, even if both are of equal sentience.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-08-01T12:40:59.254Z · LW(p) · GW(p)

If we drop the caveat, anti-speciesism is obviously false. For example, moral, successful people deserve more respect than immoral unsuccessful people, even if both are of equal sentience.

There are plenty of people who would disagree with that. But what do you mean by "respect", and on what grounds do you give it or withhold it?

comment by Said Achmiz (SaidAchmiz) · 2013-07-31T12:58:19.051Z · LW(p) · GW(p)

By the way... what the heck is "equivalent sentience", exactly?

comment by Richard_Kennaway · 2013-07-31T13:08:30.461Z · LW(p) · GW(p)

"The antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect."

Surely the antispeciesist claims that nothing else needs to be equal?

comment by Pablo (Pablo_Stafforini) · 2013-07-28T18:51:59.175Z · LW(p) · GW(p)

A fine piece. I hope it triggers a high-quality, non-mindkilled debate about these important issues. Discussion about the ethical status of non-human animals has generally been quite heated in the past, though happily this trend seems to have reversed recently (see posts by Peter Hurford and Jeff Kaufman).

comment by Qiaochu_Yuan · 2013-07-29T19:08:26.676Z · LW(p) · GW(p)

Also, standard argument against a short, reasonable-looking list of ethical criteria: no such list will capture complexity of value. They constitute fake utility functions.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-30T03:02:23.241Z · LW(p) · GW(p)

My utility function feels quite real to me, and I prefer simplicity and elegance over complexity. Besides, I think you can still have lots of terminal values and not discriminate against animals (in terms of suffering); the two aren't mutually exclusive.

comment by Manfred · 2013-07-28T21:34:35.201Z · LW(p) · GW(p)

Some may be tempted to think about the concept of "species" as if it were a fundamental concept, a Platonic form.

The biggest improvement to this post I would like to see is engagement with opposing arguments more realistic than "humans are a platonic form." Currently you just knock down a very weak argument or two and then rush to the conclusion.

EDIT: whoops, I missed the point, which is to only argue against speciesism. My bad. Edited out a misplaced "argument from future potential," which is what Jabberslythe replied to.

However, you really do only knock down weak arguments. What if we simply define categories more robustly than "platonic forms," as philosophers have done just fine since at least Wittgenstein and as is covered on this very blog? Then there's no point in talking about platonic forms.

For the argument from "one will be human and the next will be not", how do you deal with the unreliability of the sorites paradox as a philosophical test? Or what if we use the more general continuous model of speciesism, thus eliminating sharp lines? You don't just have to avoid deliberately strawmanning, you have to actively steelman :)

Replies from: Lukas_Gloor, Xodarap, Jabberslythe
comment by Lukas_Gloor · 2013-07-28T22:22:00.428Z · LW(p) · GW(p)

The section you quote from is quite obvious, and I could probably have cut it down to a minimum given that this is LW. You make a good point: one could, for instance, have a utility function that includes a gradual continuum downwards in evolutionary relatedness or relevant capabilities and so on. This would be consistent and not speciesist. But there would be infinitely many ways of defining how steeply moral relevance declines, or whether the decline is linear or not. I guess I could argue "if you're going for that amount of arbitrariness anyway, why even bother?" The function would not just depend on outward criteria like the capacity for suffering, but also on personal reasons for our judgments, which is very similar to what I have summarized under H.

Replies from: army1987
comment by Xodarap · 2013-07-28T21:59:17.992Z · LW(p) · GW(p)

I think the relevant point is the part about racism, sexism etc. If we allow moral value to depend on things other than the beings' relevant attributes, then sure, we can be speciesist. But we can also be racist, sexist, ...

comment by Jabberslythe · 2013-07-28T21:48:58.669Z · LW(p) · GW(p)

Those two babies differ in that they have different futures, so it would not be wrong to treat them differently such that suffering is minimized (and indeed you should). And it would not be speciesist to do so, because there is that difference.

comment by Armok_GoB · 2013-07-29T21:47:51.778Z · LW(p) · GW(p)

DISCLAIMER: the following does not necessarily reflect my own opinions or beliefs, but is done more in the spirit of steelmanning:

There seem to be a number of signs that the deciding factor might be the ability to form long-term memories, especially if we go into very near mode.

  • It seems that if we extrapolate volition for an individual that is made to suffer with or without memory blocking in various sequences, and allow it to choose tradeoffs, it'll repeatedly observe itself clicking a button labelled "suffer horrific torture with suppressed memory" followed by blacking out, and clicking a button labelled "suffer average torture with functioning memory" followed by being tortured. It'd thus learn to value experiences without memory much less.

  • If I remember correctly, some anaesthetics used for surgery basically paralyse you and disable memory formation, and this is not seen as an outrage or horrifying, even by those who have experienced or will experience it.

  • If we consider increasing the intelligence of various animals while directing them to become humanlike, then by empathic modelling it seems that those capable of forming long-term memories beforehand would identify with their former selves, get angry at people who had harmed them, empathize strongly with and prevent suffering in beings similar to what they were before, etc., while for those that couldn't, the opposite of these things would be true.

  • If I am given the choice to have one type of cognitive functionality disabled before being tortured, in almost all circumstances it seems the ability to form long-term memories would be the best choice.

Replies from: Allison_Smith
comment by Allison_Smith · 2013-07-31T04:41:53.356Z · LW(p) · GW(p)

some anaesthetics used for surgery basically paralyses you and disable memory formation

Without also functioning as pain control, or in addition to that role? In either case, I'd be interested to know which anaesthetics these are; it seems like there might be interesting literature on them. (For instance, I'm curious to know whether they are first-line choices, or just used when there is no viable alternative.)

Replies from: gwern, Armok_GoB
comment by Armok_GoB · 2013-07-31T21:07:59.454Z · LW(p) · GW(p)

I don't know, if you find out please tell me.

comment by DanArmak · 2013-07-28T22:22:30.413Z · LW(p) · GW(p)

While I was writing this comment, CarlShulman posted his, which makes essentially the same point. But since I'd already written a longer comment, I'm posting mine too. (Writing quickly is hard!)

In practice we must have a quantitative model of how much "moral value" to assign an animal (or human). I think your position that:

x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads.

Is wrong, and the reasons for that fall out of your own arguments.

As you point out, there is a continuum between any two living things (common descent). Nevertheless we all think that at least some animals have zero, or nearly zero, moral weight: insects, perhaps, but you can go all the way to amoebas. You must either 1) assign gradually diminishing moral value to beings ranged from humans to amoebas; or 2) choose an arbitrary, precise point (or several points) at which the value decreases sharply, with modern species boundaries being an obvious Schelling point but not the only option. Similar arguments have of course been made about the continuum between a sperm and an egg, and an eventual human being.

Option 1 lets you assign non-human animals moral value. But then, you must specify the criteria you use to calculate that value, from your list A-G or otherwise. These same criteria will then tell you that some humans have less moral value than others: children, people with advanced dementia or other severe mental deficiencies, etc. Some biological humans may have much less value than, say, a chicken (babies), or none at all (fetuses). Also, at least some post-humans, aliens, and AIs would have far more moral value than any human - even to the point of becoming utility monsters for total utilitarians.

Option 2 is completely arbitrary in terms of what animals you value, so (among its other problems) people won't be able to agree about it. And if you don't determine moral value by measuring some underlying property, you won't be able to determine the value of radical new varieties, such as post-humans or AIs.

You seem to support option 2 (value everyone equally) but you don't say where you draw the line - and that's the crucial question.

My own position is option 1, open to modification against failure modes like utility monsters that would conflict too strongly with my other moral intuitions.
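A small Python sketch of the two options (purely illustrative; the attributes, weights and scores are invented for the example, not taken from the post or from anywhere else):

```python
# Purely illustrative sketch of the two options; all numbers are invented.
from dataclasses import dataclass

@dataclass
class Being:
    species: str
    sentience: float       # hypothetical 0..1 score from criteria like A-G
    self_awareness: float  # likewise hypothetical

def weight_option1(b: Being) -> float:
    """Option 1: moral weight varies continuously with the being's own attributes."""
    return 0.7 * b.sentience + 0.3 * b.self_awareness

PROTECTED = {"human"}  # option 2's arbitrary line / Schelling point

def weight_option2(b: Being) -> float:
    """Option 2: full weight inside the line, none outside it."""
    return 1.0 if b.species in PROTECTED else 0.0

adult   = Being("human", 0.9, 0.9)
newborn = Being("human", 0.3, 0.05)
pig     = Being("pig",   0.6, 0.2)

for b in (adult, newborn, pig):
    print(b.species, round(weight_option1(b), 2), weight_option2(b))
# Option 1 ranks the pig above the newborn; option 2 reverses that ordering,
# which is exactly the disagreement being discussed here.
```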

The claim is that there is no way to block this conclusion without:

  1. using reasoning that could analogically be used to justify racism or sexism or
  2. using reasoning that allows for hypothetical circumstances where it would be okay (or even called for) to torture babies in cases where utilitarian calculations prohibit it.

My reasoning can't justify racism and sexism, because my moral criteria don't differ noticeably between sexes and races. This is an empirical fact. If it were true that e.g. some race was less sentient than other races, then that would be a valid reason to assign people of that race less moral value. But it's just not true.

I don't understand what you mean by (2); could you give an example? If a utilitarian calculation forbids you from doing something, then what could be your reason for doing it anyway? Your utility function can't be separate from your morals; on the contrary, it must incorporate your morals. (Inconsistent morals are a problem, but without a single VNM-compliant utility function, utilitarianism can't tell you anything at all.)

Some other notes:

H: What I care about / feel sympathy or loyalty towards

I would like to note that this is the actual basis of almost all human moral reasoning, and all the rest is post facto rationalization. When those rationalizations come into conflict with moral intuitions, they are labelled "repugnant conclusions". I think you dismiss this factor far too lightly.

those not willing to bite the bullet about torturing babies are forced by considerations of consistency to care about animal suffering just as much as they care about human suffering.

I am willing to bite the bullet about babies, quite easily in fact. I assign no more value to newborn human babies than I do to chickens. I only care about babies insofar as other humans care about babies.

I do care about animal suffering - in proportion to some of the measures A-G on your list, so less than human suffering, but (for many animals) more than human baby suffering.

I wouldn't mind treating babies like we treat some farm animals; that is not because I value those animals as highly as I do humans, but because I value both babies and those animals much less than I do adult humans. (Some farming methods are acceptable to me, and some are not.)

A sentient being is one for whom "it feels like something to be that being".

Please play rationalist's taboo here. What empirical test or physical fact tells you whether "it feels like something" to be a certain animal? And moreover, quantitatively so - "how much" it feels like something to be that animal?

I have not given a reason why torturing babies or racism is bad or wrong. I'm hoping that the vast majority of people will share that intuition/value of mine, that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past.

Baby-ism and racism have nothing in common (except that you're against both). I don't assign human-level moral status to babies, but I'm not a racist. This is precisely because humans of all races have roughly the same distribution on A-G (and other relevant parameters), whereas newborn babies would test below adults of most (all?) mammalian species.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-28T22:44:56.398Z · LW(p) · GW(p)

x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads.

By this I meant literally the same amount (and intensity!) of suffering. So I agree with the point you and Carl Shulman make: if it is the case that some animals can only experience so much suffering, then it makes sense to value them accordingly.

You must either 1) assign gradually diminishing moral value to beings ranged from humans to amoebas; or 2) choose an arbitrary, precise point (or several points) at which the value decreases sharply, with modern species boundaries being an obvious Schelling point but not the only option.

I'm arguing for 1), but I would only do it by species in order to save time on calculations. If I had infinite computing power, I would do the calculation for each individual separately, according to indicators of what constitutes capacity for suffering and its intensity. Incidentally, I would also assign at least a 20% chance that brain size doesn't matter; some people in fact hold this view.

I don't understand what you mean by (2), could you give an example? If a utilitarian calculation forbids you from doing something, then what could be your reason for doing it anyway?

By "utilitarianism" I meant hedonistic utilitarianism in general, not your personal utility function that (in this scenario) differentiates between sapience and mere sentience. I added this qualifier because "you'd have to be okay with torturing babies" is not a reductio, since utilitarians would have to bite this bullet anyway if they could thereby prevent an even greater amount of suffering in the future.

Please play rationalist's taboo here. What empirical test or physical fact tells you whether "it feels like something" to be a certain animal? And moreover, quantitatively so - "how much" it feels like something to be that animal?

I only have my first-person evidence to go with. This bothers me a lot but I'm assuming that some day we will have solved all the problems in philosophy of mind and can then map out what we mean precisely by "sentience", having it correspond to specific implemented algorithms or brain states.

Baby-ism and racism have nothing in common (except that you're against both). I don't assign human-level moral status to babies, but I'm not a racist. This is precisely because humans of all races have roughly the same distribution on A-G (and other relevant parameters), whereas newborn babies would test below adults of most (all?) mammalian species.

I agree; those are simply the two premises on which the conclusion that we should value all suffering equally is based. You end up with coherent positions by rejecting one or both of the two.

Replies from: DanArmak
comment by DanArmak · 2013-07-28T23:03:31.260Z · LW(p) · GW(p)

I only have my first-person evidence to go with. This bothers me a lot but I'm assuming that some day we will have solved all the problems in philosophy of mind and can then map out what we mean precisely by "sentience", having it correspond to specific implemented algorithms or brain states.

What evidence do you have for thinking that your first-person intuitions about sentience "cut reality at its joints"? Maybe if you analyze what goes through your head when you think "sentience", and then try to apply that to other animals (never mind AIs or aliens), you'll just end up measuring how different those animals are from humans in a completely arbitrary and morally-unimportant implementation feature.

If after solving all the problems of philosophy you found out something like this, would you accept it, or would you say that "sentience" was no longer the basis of your morals? In other words, why might you prefer this particular intuition to other intuitions that judge how similar something is to a human?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-28T23:21:56.798Z · LW(p) · GW(p)

If I understand it correctly, this is the position endorsed here. I don't think realizing that this view is right would change much for me; I would still try to generalize criteria for why I care about a particular experience and then care about all instances of the same thing. However, I realize that this would make it much more difficult to convince others to draw the same lines. If the question of whether a given being is sentient translates into whether I have reasons to care about that being, then one part of my argument would fall away. This issue doesn't seem to be endemic to the treatment of non-human animals, though; you'd have it with any kind of utility function that values well-being.

comment by Morendil · 2013-07-28T21:34:24.458Z · LW(p) · GW(p)

What properties do human beings possess that makes us think that it is wrong to torture them?

Does it have to be the case that "the properties that X possesses" is the only relevant input? It seems to me that the properties possessed by the would-be torturer or killer are also relevant.

For instance, if I came across a kid torturing a mouse (even a fly) I would be horrified, but I would respond differently to a cat torturing a mouse (or a fly).

Replies from: Lukas_Gloor, Xodarap
comment by Lukas_Gloor · 2013-07-28T21:49:26.955Z · LW(p) · GW(p)

What if it is done by a baby or a kid with mental impairments so she cannot follow moral/social norms? I see no reason to treat the situation differently in such a case. (Except that one might want to talk to the parents of the kid in order to have them consider a psychological check-up for their child.)

Replies from: DanArmak
comment by DanArmak · 2013-07-28T22:32:23.695Z · LW(p) · GW(p)

I see no reason to treat the situation differently in such a case.

Differently from a normal kid, or differently from a cat? (I share Morendil's moral intuitions regarding his example.)

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-28T22:55:19.541Z · LW(p) · GW(p)

From the cat. I would in fact press a magic button that turns all carnivores into vegans. The cat (or the kid) doesn't know what it is doing and cannot be meaningfully blamed, but I still consider this to be a harmful action and I would want to prevent it. Who commits the act makes no difference to me (or only for indirect reasons).

comment by Xodarap · 2013-07-28T22:17:28.387Z · LW(p) · GW(p)

It seems to me that the properties possessed by the would-be torturer or killer are also relevant.

Why?

It seems to me like the only (consequentialist) justification is that they will then go on to torture others who have the ability to feel pain, and so it's still only the victims' properties which are relevant.

Replies from: Morendil
comment by Morendil · 2013-07-29T20:47:11.639Z · LW(p) · GW(p)

The more I perceive the torturer to be "like me", the more seeing this undermines my confidence in my own moral intuitions - my sense of a shared identity.

The fly case is particularly puzzling, as I regard flies as not morally relevant.

Replies from: Nornagest
comment by Nornagest · 2013-07-29T21:06:16.539Z · LW(p) · GW(p)

I'd regard a kid pulling wings off a fly as worrying not because I particularly care about flies, but more because it indicates a propensity to do similar things to morally relevant agents. Not much chance of that becoming a problem for a cat.

comment by A1987dM (army1987) · 2013-07-29T14:07:40.388Z · LW(p) · GW(p)

If I was told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not like I will be less afraid of the torture or care less about averting it!

People get anaesthesia before undergoing surgery and get drunk before risking social embarrassment all the time.

Replies from: Lukas_Gloor, Carinthium
comment by Lukas_Gloor · 2013-07-29T14:12:36.937Z · LW(p) · GW(p)

Animals are not walking around anaesthetized, and I don't think the primary reason why alcohol helps with pain is that it makes you dumber (I might be wrong about this).

comment by Carinthium · 2013-08-04T06:46:05.071Z · LW(p) · GW(p)

Anaesthesia reduces pain, which is the primary reason people take it. Getting drunk reduces inhibitions (which is good if you're trying to do something despite embarrassment), plus you tend not to remember the events afterwards.

EDIT: Just trying to clarify ice9's point here, to be clear.

comment by Kawoomba · 2013-07-28T22:21:02.142Z · LW(p) · GW(p)

However, such factors can't apply for ethical reasoning at a theoretical/normative level, where all the relevant variables are looked at in isolation in order to come up with a consistent ethical framework that covers all possible cases.

Why should there be a "correct" solution for ethical reasoning? Is there a normative level regarding which color is the best? People function based on heuristics, which are calibrated on general cases, not on marginal cases. While I'm all for showing inconsistencies in one's statements, there is no inconsistency in saying "as a general rule, I value X, but in these cases, I value Y, which is different from X".

Why the impetus towards some one-size-fit-all solution? And more importantly, why disallow that marginal cases get special "if-clauses"?

Imagine forcing a programmer to treat all incoming data with the exact same rule. It would be a disaster. Adding an "as a general rule" clause solves the inconsistencies, and it's not cheating, and it's not something in need of fixing.

Replies from: DanArmak, MugaSofer
comment by DanArmak · 2013-07-28T22:44:58.610Z · LW(p) · GW(p)

If you want your choices to be consistent over time, you still need a meta-rule for choosing and modifying your rules. How do you know what exceptions to make?

Personally, I don't think my choices (as a human) can be consistent in this sense, and I'm pretty resigned to following my inconsistent moral intuitions. Others disagree with me on this.

Replies from: Kawoomba
comment by Kawoomba · 2013-07-28T22:51:58.168Z · LW(p) · GW(p)

Your choices won't be consistent over time anyway, because you won't be consistent over time. To your centenarian self, the current you is but a distant memory.

Replies from: DanArmak
comment by DanArmak · 2013-07-28T22:57:26.690Z · LW(p) · GW(p)

That my desires won't be consistent over very long periods of time is no reason to make my choices inconsistent over short periods of time when my desires don't change much.

comment by MugaSofer · 2013-07-29T15:51:14.797Z · LW(p) · GW(p)

Why should there be a "correct" solution for ethical reasoning? Is there a normative level regarding which color is the best?

Well, obviously this wouldn't hold for, say, paperclippers ... but while I suspect you may disagree, most people seem to think human ethics are not mutually contradictory and are, in fact, part of the psychological unity of humankind (most include caveats for psychopaths, political enemies, and those possessed by demons).

Imagine forcing a programmer to treat all incoming data with the exact same rule.

Such a (highly complex) rule is known as a "program".

Replies from: wedrifid, Jiro
comment by wedrifid · 2013-07-29T16:00:11.487Z · LW(p) · GW(p)

but while I suspect you may disagree, most people seem to think human ethics are not mutually contradictory and are, in fact, part of the psychological unity of humankind (most include caveats for psychopaths, political enemies, and those possessed by demons.)

As a bonus, the exception class of "enemies" and "immoral monsters" tends to be contrived to include anyone who has a sufficient degree of difference in ethical preferences. All True humans are ethically united...

Replies from: MugaSofer
comment by MugaSofer · 2013-07-31T14:51:22.927Z · LW(p) · GW(p)

I'm torn between grinning at how marvelously well-contrived it is on evolution's part and being frustrated that, y'know, I have to live here, and I keep stepping in the mindkill.

Of course, I'll note they're usually wrong. Except about some of the psychopaths, I suppose, though even they seem to contain bits of it if I understand correctly.

comment by Jiro · 2013-07-29T18:37:36.969Z · LW(p) · GW(p)

In context here, a "rule" is shorthand for a general rule, not for any sort of algorithm whatsoever. A rule that describes a specific case by name is not a general rule.

most people seem to think human ethics are not mutually contradictory

Thought experiment: Go up to a random person and find out how they avoid the Repugnant Conclusion. Repeat with some other famous ethical paradoxes. Even if some of those have solutions, you can bet the average person 1) won't have thought about them, and 2) won't be able to come up with a solution that holds up to examination.

Most people have not thought about enough marginal cases involving human ethics to be able to determine whether human ethics is mutually contradictory.

Replies from: MugaSofer
comment by MugaSofer · 2013-07-29T22:50:40.005Z · LW(p) · GW(p)

In context here, a "rule" is shorthand for a general rule, not for any sort of algorithm whatsoever. A rule that describes a specific case by name is not a general rule.

That was mostly a joke :)

(My point, if you could call it such, was that morality need only be consistent, not simple - although most special cases turn out to be caused by bias, rather than actual special cases, so it was a rather weak point. And, apparently, a rather weak joke.)

Thought experiment: Go up to a random person and find out how they avoid the Repugnant Conclusion. Repeat with some other famous ethical paradoxes. Even if some of those have solutions, you can bet the average person 1) won't have thought about them, and 2) won't be able to come up with a solution that holds up to examination.

And yet, funnily enough, most people agree on most things, and the marginal cases are not unique to every person. Ethics, as far as I can tell, is a part of the psychological unity of mankind.

That said, there is the much more worrying prospect that these common values could be internally incoherent, but we seem to have intuitions for resolving conflicts between lower-level intuitions and I think - hope - it all works out in the end.

(Kawoomba has stated that he considers it ethical for a parent to destroy the earth rather than risk their family, though, so perhaps I'm being overly generous in this regard. pulls face)

comment by Said Achmiz (SaidAchmiz) · 2013-07-28T20:46:13.641Z · LW(p) · GW(p)

I've read the first part of the post ("What is Speciesism?"), and have a question.

Does your argument have any answer to applying modus tollens to the argument from marginal cases?

In other words, if I say: "Actually, I think it's ok to kill/torture human newborns/infants; I don't consider them to be morally relevant[1]" (likewise severely mentally disabled adults, likewise (some? most?) other marginal cases) — do you still expect your argument to sway me in any way? Or no?

[1] Note that I can still be in favor of laws that prohibit infanticide, for game-theoretic reasons. (For instance, because birth makes a good bright line, as does the species divide. The details of effective policy and optimal criminal justice laws are an interesting conversation to have but not of any great relevance to the moral debate.)

Edit: Having now read the rest of your post, I see that you... sort-of address this point. But to be honest, I don't think you take the opposing position very seriously; I get the sense that you've constructed arguments that you think someone on the opposite side would make, if they held exactly your views in everything except, inexplicably, this one area, and these arguments you then knock down. In short, while I am very much in favor of having this discussion and think that this post is a good idea... I don't think your argument passes the ideological Turing test. I would have preferred for you to, at least, directly address the challenges in this post.

Replies from: Lukas_Gloor, army1987, Lukas_Gloor, Xodarap, MugaSofer
comment by Lukas_Gloor · 2013-07-28T21:42:17.745Z · LW(p) · GW(p)

I don't think your argument passes the ideological Turing test. I would have preferred for you to, at least, directly address the challenges in this post.

The post you link to makes five points.

1) and 2) don't concern the arguments I'm making because I left out empirical issues on purpose.

3) is also an empirical issue that can be applied to some humans as well.

4) is the most interesting one.

Something About Sapience Is What Makes Suffering Bad

I sort of addressed this here. I must say I'm not very familiar with this position so I might be bad at steelmanning it, but so far I simply don't see why intelligence has anything to do with the badness of suffering.

As for 5), this is certainly a valid thing to point out when people are estimating whether a given being is sentient or not. Regarding the normative part of this argument: if there were cute robots that I had empathy for but was sure weren't sentient, I genuinely wouldn't argue for giving them moral consideration.

comment by A1987dM (army1987) · 2013-07-29T13:59:01.976Z · LW(p) · GW(p)

bright line

Huh, a mainstream term for what LWers call a Schelling fence!

comment by Lukas_Gloor · 2013-07-28T20:59:19.191Z · LW(p) · GW(p)

No, this is indeed a common feature of coherentist reasoning: you can make it go both ways. I cannot logically show that you are making a mistake here. I may, however, appeal to shared intuitions or bring further arguments that could encourage you to reflect on your views.

And note that I was silent on the topic of killing; the point I made later in the article was only focused on caring about suffering. And there I think I can make a strong case that suffering is bad independently of where it happens.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-28T21:00:41.882Z · LW(p) · GW(p)

And here I think I can make a strong case that suffering is bad independently of where it happens.

I would very much like to see that case made!

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-28T21:17:12.845Z · LW(p) · GW(p)

It's in the article. If you're not impressed by it then I'm indeed out of arguments.

Furthermore, while D and E seem plausible candidates for reasons against killing a being with these properties (E is in fact Peter Singer's view on the matter), none of the criteria from A to E seem relevant to suffering, to whether a being can be harmed or benefitted. The case for these being bottom-up morally relevant criteria for the relevance of suffering (or happiness) is very weak, to say the least.

Maybe that's the speciesist's central confusion, that the rationality/sapience of a being is somehow relevant for whether its suffering matters morally. Clearly, for us ourselves, this does not seem to be the case. If I was told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not like I will be less afraid of the torture or care less about averting it!

There's also a hyperlink in the first paragraph referring to section 6 of the linked paper.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-28T21:26:55.987Z · LW(p) · GW(p)

Ok. Yeah, I don't find any of those to be strong arguments. Again, I would like to urge you to consider and address the points brought up in this post.

comment by Xodarap · 2013-07-28T20:54:04.035Z · LW(p) · GW(p)

I think it's ok to kill human newborns/infants

I think the relevant response would be torturing human infants, and other marginal cases.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-28T21:01:20.190Z · LW(p) · GW(p)

Yep, fair enough. I've changed my post to include this.

comment by MugaSofer · 2013-07-29T16:04:43.835Z · LW(p) · GW(p)

In other words, if I say: "Actually, I think it's ok to kill/torture human newborns/infants; I don't consider them to be morally relevant[1]" (likewise severely mentally disabled adults, likewise (some? most?) other marginal cases) — do you still expect your argument to sway me in any way? Or no?

No, that would be when we fetch the pitchforks.

[1] Note that I can still be in favor of laws that prohibit infanticide, for game-theoretic reasons. (For instance, because birth makes a good bright line, as does the species divide. The details of effective policy and optimal criminal justice laws are an interesting conversation to have but not of any great relevance to the moral debate.)

The only time I heard such an argument, it wasn't their true rejection, and they invented several other such false rejections during the course of our discussion. So that would be my response on hearing someone actually make the unaddressed argument you outlined.

Do such game-theoretic reasons actually hold together, by the way? It seems unlikely, unless you suddenly start caring about children somewhere during their first few years.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-29T16:40:29.406Z · LW(p) · GW(p)

The only time I heard such an argument, it wasn't their true rejection, and they invented several other such false rejections during the course of our discussion. So that would be my response on hearing someone actually make the unaddressed argument you outlined.

Do such game-theoretic reasons actually hold together, by the way? It seems unlikely, unless you suddenly start caring about children somewhere during their first few years.

No, this is definitely my true rejection. To expand a bit, take the infanticide case as an example: I think infanticide should be illegal, but I don't think it should be considered murder or anything close to it, nor punished nearly as severely.

Basically, there's no "real" line between sapience and non-sapience, and humans, in the course of their development, start out as cognitively inert matter and end up as sapient beings. But since we don't think evaluating in every single case is feasible, or reliable in the "border region" cases, or likely to lead to consistently (morally) good outcomes in practice (due to assorted cognitive and institutional limitations), we want to draw the line way back in the development process, where we're sure there's no sapience and killing the developing human is morally ok. Where specifically? Well, since this is a pragmatic and not a moral consideration, there is no unique morally ordained line placement, but there is a natural "bright line": birth. Birth is more or less in the desired region of time, so that's where we draw it.

Now, since we drew the line for pragmatic reasons, we are perfectly aware that the person who commits infanticide has not really done anything morally wrong. But on the other hand, we want to discourage people from redrawing the line on an individual basis, from "taking line placement into their own hands", so to speak, because then we're back to the "evaluating in every case is not a good idea" issue. But on the third hand, such discouragement should not take the form of putting the poor person in jail for murder! The problem is not that important; the well-being and happiness of an adult human for a large chunk of their life is worth more than the (nonzero, but small) chance that line degradation will lead to bad outcomes! Make it a lesser offense, and you've more or less got the best of both worlds. (Equivalent to assault, perhaps? I don't know, this is a practical question, and best settled with the help of experts in criminal justice and public policy.)

comment by timtyler · 2013-07-28T22:31:27.060Z · LW(p) · GW(p)

Why even take species-groups instead of groups defined by skin color, weight or height? Why single out one property and not others?

Typically, human xenophobia doesn't single out one attribute. The similar are treated preferentially; the different are exiled, shunned, excluded or slaughtered. Nature builds organisms like that: to favour kin and similar creatures, and to give out-group members a very wide berth. So: it's no surprise to find that humans are often racist and speciesist.

comment by Sulo · 2021-01-17T18:09:58.859Z · LW(p) · GW(p)

“Why even take species-groups instead of groups defined by skin color, weight or height? Why single out one property and not others? “

I am not sure if this is an accurate answer, but I feel like bringing it up: in some cases we should single out some properties over others based on their function with regard to our interests. Obvious example: separating men and women in combat sports.

Another important detail is that in an ideal world we could evaluate everything on a case-by-case basis instead of generalizing. So in general it wouldn't be fair to let men and women compete against each other and we should separate them, but if we evaluated every single potential fight, we might find some cases where it would be appropriate and fair to let some man fight against certain well-trained women.

Not a great example and not really a criticism, but I think it is just an extension of the ideas that you presented here (which I mostly agree with). It was just something that came to my mind as I was reading this (perhaps because of my martial arts obsession?) and wasn't really addressed.

comment by Larks · 2013-07-29T12:59:35.708Z · LW(p) · GW(p)

I have not given a reason why torturing babies or racism is bad or wrong. I'm hoping that the vast majority of people will share that intuition/value of mine, that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past.

In the past, the arguments against sexism and racism were things like "they're human too", "they can write poetry too", "God made all men equal" and "look how good they are at being governesses". None of these apply to animals; they're not human, they don't write poetry, God made them to serve us, and they're not very good governesses. Indeed, you seem to think all these are irrelevant criteria.

Speaking as a 21st century person in a liberal, western country, I believe sexism and racism are wrong basically because other people told me they were, who believed that because ... who believed that because they were convinced by argumentum ad governess. But now I've just discovered that argumentum ad governess is invalid. Should I not withdraw my belief that sexism and racism are wrong, which apparently I have in some sense been fooled into, and adopt the traditional, time-honoured view that they are not?

Replies from: PrometheanFaun
comment by PrometheanFaun · 2013-11-05T04:59:41.874Z · LW(p) · GW(p)

But now I've just discovered that argumentum ad governess is invalid

Where was the argument for that? Non-humans attaining rights by a different path does not erase all other paths.

comment by Nick_Beckstead · 2013-07-29T08:39:54.929Z · LW(p) · GW(p)

While H is an unlikely criterion for direct ethical consideration (it could justify genocide in specific circumstances!), it is an important indirect factor. Most humans have much more empathy for fellow humans than for nonhuman animals. While this is not a criterion for giving humans more ethical consideration per se, it is nevertheless a factor that strongly influences ethical decision-making in real-life.

This objection doesn't work if you rigidify over the beings you feel sympathy toward in the actual world, given your present mental capacities. And that is clearly the best version of this view, and the one that people probably really mean when they say this. On this version of the view, you don't say that if you didn't care about humans, humans wouldn't matter. You do have to say, "If it actually turns out that I don't care about humans, then humans don't matter." Of course, you might want to change the view if things (very unexpectedly!) don't turn out that way.

I don't think this version gives animals no weight, but I think it typically gives animals less weight than humans. (Disclaimer that should be unnecessary: I recognize that there are other objections to H. It is not necessary to respond to what I have said by raising a distinct objection to H.)

comment by bokov · 2013-08-12T23:10:45.981Z · LW(p) · GW(p)

I think this is runaway philosophizing where our desire to believe something coherent trumps what types of beliefs we have been selected for, and the types of beliefs that will continue to keep us alive.

Why should there be a normative ethics at all? What part of rationality requires normative ethics?

I, like you and everyone else, have a monkey-sphere. I only care about the monkeys in my tribe that are closest to me, and I might as well admit it because it's there. So, never mind cows and pigs; if push comes to shove I'll protect my friends and family in preference to strangers. However, it protects me and my monkey-sphere if we can all agree to keep expropriation and force to a bare minimum and within strictly prescribed guidelines.

So I recognize the rights of entities capable of retaliating and at the same time capable of being bound by an agreement not to. Them and their monkey spheres.

In short, the reason I'd rather have dinner with you than of you is some combination of me liking you and my pre-commitment to peaceful and civilized coexistence. It's not exactly something I feel like a nice person for admitting, but I don't see why that should be enough to make it a tough issue.

Replies from: Lukas_Gloor, None
comment by Lukas_Gloor · 2013-08-13T01:17:00.486Z · LW(p) · GW(p)

I think this is runaway philosophizing where our desire to believe something coherent trumps what types of beliefs we have been selected for, and the types of beliefs that will continue to keep us alive.

Why should I believe what humans have been selected for? Why would I want to keep "us" alive?

I think those two questions are at least as question-begging as the reasons for my view, if not more so.

What I know for sure is that I dislike my own suffering, not because I'm sapient and have it happening to me, but because it is suffering. And I want to do something in life that is about more than just me. Ultimately, this might not be a "more true" reason than "what I have been selected for", but it does appeal to me more than anything else.

Why should there be a normative ethics at all? What part of rationality requires normative ethics?

All rationality requires is a goal. You may not share the same goals I have. I have noticed, however, that some people haven't thought through all the implications of their stated goals. Especially on LW, people are very quick to declare something to be of terminal value to them, which unfortunately serves as a self-fulfilling prophecy.

I, like you and everyone else, have a monkey-sphere. I only care about the monkeys in my tribe that are closest to me, and I might as well admit it because it's there.

I discovered that intuitions are easy to change. People definitely have stronger emotional reactions to things happening to those that are close, but do they really, on an abstract level, care less about those that are distant? Do they want to care less about those that are distant, or would they take a pill that turned them into universal altruists?

However, it protects me and my monkey-sphere if we can all agree to keep expropriation and force to a bare minimum and within strictly prescribed guidelines.

And how do you do that?

So I recognize the rights of entities capable of retaliating and at the same time capable of being bound by an agreement not to. Them and their monkey spheres.

If a situation arises where you can benefit your self-interest by defecting, the rational thing to do is to defect. Don't tell yourself that you're being a decent person only because of pure self-interest; you'd be deceiving yourself. Yes, if everyone followed some moral code written for societal interaction among moral agents, then everyone would be doing well (but not perfectly well). However, given that you cannot expect others to follow through, your decision not to "break the rules" is an altruistic decision in (at least) all the cases where you are unlikely to get caught.

You may also ask yourself whether you would press a button that inflicts suffering on a child (or a cow) far away, gives you ten dollars, and makes you forget all that happened. Would you want to self-modify to be the person who easily pushes the button? If not, just how much altruism is it going to be, and why not go for the (non-arbitrary) whole cake?

Replies from: bokov, bokov
comment by bokov · 2013-08-13T20:39:39.476Z · LW(p) · GW(p)

You may also ask yourself whether you would press a button that inflicts suffering on a child (or a cow) far away, give you ten dollars, and makes you forget about all that happened. Would you want to self-modify to be the person who easily pushes the button? If not, just how much altruism is it going to be, and why not go for the (non-arbitrary) whole cake?

I don't know, and I feel it's important that I admit that. My code of conduct is incomplete. It's better that it be clearly incomplete than have the illusion of completeness created by me deciding what a hypothetical me in a hypothetical situation ought to want.

It does seem to me the payoff for pushing the button should be equal to how much it would take to bribe you not to make all your purchasing decisions contingent on a thorough investigation of the human/animal rights practices of every company you buy from and all their upstream suppliers. Those who don't currently do this (me included) are apparently already being compensated sufficiently, however much that is.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-08-14T02:57:21.833Z · LW(p) · GW(p)

I appreciate the honest reply!

It does seem to me the payoff for pushing the button should be equal to how much it would take to bribe you not to make all your purchasing decisions contingent on a thorough investigation of the human/animal rights practices of every company you buy from and all their upstream suppliers. Those who don't currently do this (me included) are apparently already being compensated sufficiently, however much that is.

Perhaps you are setting the demands too high. I think the button scenario is relevantly different in the amount of sacrifice/inconvenience it requires. Making all-things-considered ethical purchases is a lot more difficult than resisting the temptation of ten dollars (although the difference does become smaller the more often you press the button within a given timescale).

Maybe this is something you view as "cheating" or a rationalization of cognitive dissonance as you explain in the other comment, but I genuinely think that a highly altruistic life may still involve making lots of imperfect choices. The amount of money one donates, for instance, and where to, is probably more important in terms of suffering prevented than the effects of personal consumption.

Being an altruist makes you your own most important resource. Preventing loss of motivation or burnout is then a legitimate concern that warrants keeping a suitable amount of self-interested comfort. And it is also worth noting that people differ individually in how easily altruism comes to them. Some may simply enjoy doing it or may enjoy the signalling aspects, while others might have trouble motivating themselves or even be uncomfortable with talking to others about ethics. One's social circle is also a huge influence. These are all things to take into account; it would be unreasonable to compare yourself to a utility-maximizing robot.

Obviously this needn't be an all-or-nothing kind of thing. Pushing the button just once a week is already much better than never pushing it.

Replies from: bokov
comment by bokov · 2013-08-14T14:35:44.376Z · LW(p) · GW(p)

The amount of money one donates, for instance, and where to, is probably more important in terms of suffering prevented than the effects of personal consumption.

That's a testable assertion. How confident are you that you would follow the path of self-consistency if, upon being tested, the assertion turned out to be false? Someone who chooses pragmatism only needs to fight their own ignorance to be self-consistent, while someone who does not has to fight both their own ignorance and, all too often, their own pragmatism in order to be self-consistent.

Replies from: Lukas_Gloor, bokov
comment by Lukas_Gloor · 2013-08-14T15:24:31.019Z · LW(p) · GW(p)

Yes, it's testable, and the estimates so far strongly support my claim. (I'm constantly on the lookout for data of this kind to improve my effectiveness.) I wouldn't have trouble adjusting, because I'm already trying to reduce my unethical consumption through habit formation (which basically comes down to being vegan and avoiding expensive stuff). Even if it's not very effective compared to other things, as long as I don't have opportunity costs, it is still something positive. I'm just saying that even people who won't, for whatever reasons, make changes to the kind of stuff they buy could still reduce a lot of suffering by donating to the most effective cause.

comment by bokov · 2013-08-14T14:47:39.360Z · LW(p) · GW(p)

I wonder if pragmatists are less likely to reject information they don't want to hear, since their self-interest is their terminal goal; entertaining the possibility that Malthus could be right in some instances, for example, does not imply that they must unilaterally sacrifice themselves.

Perhaps the reason so many transhumanists are peak oil deniers and global warming deniers is that both of these are Malthusian scenarios that would put the immediate needs of those less fortunate in direct and obvious opposition to the costly, delayed-payoff projects we advocate.

comment by bokov · 2013-08-13T20:24:23.669Z · LW(p) · GW(p)

Ultimately, this might not be a "more true" reason than "what I have been selected for", but it does appeal to me more than anything else.

Experience and observation of others have taught me that when one tries to derive a normative code of behavior from the top down, they often end up with something that is in subtle ways incompatible with selfish drives. They will therefore be tempted to cheat on their high-minded morals, and react to this cognitive dissonance either by coming up with reasons why it's not really cheating or by working ever harder to suppress their temptations.

I've been down the egalitarian altruist route; it came crashing down (several times) until I finally learned to admit that I'm a bastard. Now, instead of agonizing over whether my right to FOO outweighs Bob's right to BAR, I have the simpler problem of optimizing my long-term FOO and trusting Bob to optimize his own BAR.

I still cheat, but I don't waste time on moral posturing. I try to treat it as a sign that perhaps I still don't fully understand my own utility function. Imagine how far off the mark I'd be if I was simultaneously trying to optimize Bob's!

comment by [deleted] · 2013-08-13T00:38:38.402Z · LW(p) · GW(p)

Nonhuman animals are integrated with human "monkey spheres" - e.g. people live with their pets, bond with them and give them names.

A second mistake is that you decry normative ethics, only to implicitly establish a norm in the next paragraph as if it were a fact:

I, like you and everyone else, have a monkey-sphere. I only care about the monkeys in my tribe that are closest to me, and I might as well admit it because it's there. So, nevermind cows and pigs...

Obviously, there are people whose preferences include the welfare of cows and pigs, hence this discussion and the well-funded existence of PETA etc. By prescribing a monkey-sphere that "everyone" has and that doesn't include nonhuman animals, you are effectively telling us what we should care about, not what we actually care about.

Even if you don't care about animal welfare, the fact that others do has an influence on your "monkey-sphere", even if it's weak.

Btw, aren't humans apes rather than monkeys?

Replies from: AndHisHorse, bokov
comment by AndHisHorse · 2013-08-13T00:45:01.508Z · LW(p) · GW(p)

The term "monkeysphere", which is a nickname for Dunbar's Number, originates from this Cracked.com article. The term relates not only to the studies done on monkeys (and apes), but also the idea of there existing a limit on the number of named, cutely dressed monkeys about which a hypothetical person could really care.

Replies from: bokov
comment by bokov · 2013-08-13T20:56:53.923Z · LW(p) · GW(p)

Yes, precisely. Thanks for finding the link.

Although I think of mine as a density function rather than a fixed number. Everyone has a little bit of my monkey-sphere associated with them. hug

comment by bokov · 2013-08-13T20:54:11.004Z · LW(p) · GW(p)

Nonhuman animals are integrated with human "monkey spheres" - e.g. people live with their pets, bond with them and give them names.

Oh yeah, absolutely. I trust my friend's judgment about how much members of her monkeysphere are worth to her, and utility to my friend is weighed against utility to others in my monkeysphere in proportion to how close they are to me.

My monkeysphere has long tails extending by default to all members of my species whose interests are not at odds with my own or those closer to me in the monkeysphere. Since I would be willing to use force against a human to defend myself or others at the core of my monkeysphere, it seems that I should be even more willing to use force against such a human and save the lives of several cattle in the process.

Obviously, there are people whose preferences include the welfare of cows and pigs, hence this discussion and the well-funded existence of PETA etc.

Cults are well-funded too. I don't dispute that people care about both cults and animal rights. What I dispute is whether supporting either of them offers enough benefits to the supporter that I would consider it a rational choice to make.

comment by Carinthium · 2013-08-04T06:53:16.103Z · LW(p) · GW(p)

For selfish reasons, if I had a say in policy I would want to influence the world greatly against this. Whether true or not, I could easily get a disease in the future or go senile (actually quite likely) to such an extent that my moral worth in this system is reduced greatly. Since I still want to be looked after when that happens, I would never support this.

This doesn't refute any of the arguments, but for those who have some percentage chance of losing a lot of brain capacity in the future without outright dying (i.e., probably most of us), it may be a reason to argue against this idea anyway.

comment by A1987dM (army1987) · 2013-07-29T14:03:43.091Z · LW(p) · GW(p)

If there were no intrinsic reasons for giving moral consideration to babies, then a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it.

If there were no intrinsic reasons for a feather to fall slower than a rock, then in a vacuum a feather would fall just as fast as a rock as long as there's no air. But you don't neglect the viscosity of air when designing a parachute.

comment by blacktrance · 2014-01-07T00:26:10.099Z · LW(p) · GW(p)

Here's an argument for something that might be called speciesism, though it isn't strictly speciesism because moral consideration could be extended to hypothetical non-human beings (though no currently known ones) and not quite to all humans - contractarianism. We have reason to restrict ourselves in our dealings with a being when it fulfills three criteria: it can harm us, it can choose not to harm us, and it can agree not to harm us in exchange for us not harming it. When these criteria are fulfilled, a being has rights and should not be harmed, but otherwise, we have no reason to restrict ourselves in our dealings with it.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2014-01-08T20:08:59.323Z · LW(p) · GW(p)

Indeed, consistently applied, this view would deny rights to both non-human animals and some human individuals, so it wouldn't be speciesist. There is however another problem with contractarianism: I think the way it is usually presented is blatantly not thought through and a non sequitur.

We have reason to restrict ourselves in our dealings with a being when it fulfills three criteria: it can harm us, it can choose not to harm us, and it can agree not to harm us in exchange for us not harming it.

What do you mean by "we have reason"? If you mean that it would be in our rational self-interest to grant rights to all such beings, then that does not follow. Just because a being could reciprocate doesn't mean it will, so granting rights to all such beings might well, in some empirical circumstances, go against your rational self-interest. So there seems to be a (crucial!) step missing here. And if all one is arguing for is to "do whatever is in your rational self-interest", why give it a misleading name like contractarianism?

There is always the option to say "I don't care about others". Apart from the ingenious argument about personal identity which implies that your own future selves should also count among "others", there is not much one can say to such a person. Such a person would refuse to act along with the outcome specified by the axiom of impartiality/altruism in the ethics game. You may play the ethics game intellectually and come to the conclusion that systematized altruism implies some variety of utilitarianism (and then define more terms and hash out details), but you can still choose to implement another utility function in your own actions. The two dimensions are separate, I think.

Replies from: blacktrance
comment by blacktrance · 2014-01-08T20:23:04.437Z · LW(p) · GW(p)

If you mean that it would be in our rational self-interest to grant rights to all such beings, then that does not follow. Just because a being could reciprocate doesn't mean it will, so granting rights to all such beings might well, in some empirical circumstances, go against your rational self-interest.

True, but it would be in their rational self-interest to retaliate if their rights aren't being respected, to create a credible threat so their rights would be respected.

if all one is arguing for is to "do whatever is in your rational self-interest", why give it a misleading name like contractarianism?

It's not a misleading name; it means that morality is based on contracts. It's more specific than "do whatever is in your rational self-interest", as it suggests something that someone who is following their self-interest should do. Also, not everyone who advocates following one's rational self-interest is a contractarian.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2014-01-08T21:14:41.180Z · LW(p) · GW(p)

You'd need something like timeless decision theory here, and I feel like it is somehow cheating to bring in TDT/UDT when it comes to moral reasoning at the normative level... But I see what you mean. I am however not sure whether the view you defend here would on its own terms imply that humans have "rights".

It's more specific than "do whatever is in your rational self-interest", as it suggests something that someone who is following their self-interest should do.

There are two plausible cases I can see here:

1) The suggestion collides with "do whatever is in your rational self-interest"; in which case it was misleading.

2) The suggestion deductively follows from "do whatever is in your rational self-interest"; in which case it is uninteresting (and misleading because it dresses up as some fancy claim).

You seem to mean:

3) The suggestion adds something of interest to "do whatever is in your rational self-interest"; here I don't see where this further claim would/could come from.

it means that morality is based on contracts.

What do you mean by "morality"? Unless you rigorously define such controversial and differently used terms at every step, you're likely to get caught up in equivocations.

Here are two plausible interpretations I can come up with for "morality" in the partial sentence I quoted:

1) people's desire to (sometimes) care about the interests of others / them following that desire

2) people's (system two) reasoning for why they end up doing nice/fair things to others

Both these claims are descriptive. It would be like justifying deontology by citing the findings from trolleyology, which would beg the question as to whether humans may have "moral biases", e.g. whether they are rationalising over inconsistencies in their positions, or defending positions they would not defend given more information and rationality.

In addition, even if the above sometimes applies, it would of course be overgeneralising to classify all of "morality" according to the above.

So likely you meant something else. There is a third plausible interpretation of your claim, namely something resembling what you wrote earlier:

as it suggests something that someone who is following their self-interest should do.

Perhaps you are claiming that people are somehow irrational if they don't do whatever is in their best self-interest. However, this seems to be a very dubious claim. It would require the hidden premise that it is irrational to have something other than self-interest as your goal. Here, by self-interest I of course don't mean the same thing as "utility function"! If you value the well-being of others just as much as your own well-being, you may act in ways that predictably make you worse off, and yet this would in some situations be rational conditional on an altruistic goal. I don't think we can talk about rational/irrational goals; something can only be rational/irrational according to a stated goal.

(Or well, we could talk about it, but then we'd be using "rational" in a different way than I'm using it now, and also in a different way than is common on LW, and in such a case, I suspect we'd end up arguing whether a tree falling in a forest really makes a sound.)

Replies from: blacktrance
comment by blacktrance · 2014-01-09T03:33:52.565Z · LW(p) · GW(p)

The suggestion adds something of interest to "do whatever is in your rational self-interest"; here I don't see where this further claim would/could come from.

This spells out part of what "acting in your rational self-interest" means. To use an admittedly imperfect analogy, the connection between egoism and contractarianism is a bit like the connection between utilitarianism and giving to charity (conditional on it being effective). The former implies the latter, but it takes some thinking to determine what it actually entails. Also, not all egoists are contractarians, and contractarianism adds the claim that if you've decided to follow your rational self-interest, this is how you should act.

What do you mean by "morality"?

What one should do. I realize that this may be an imprecise definition, but it gets at what utilitarians, Kantians, Divine Command Theorists, and ethical egoists have in common with each other that they don't have in common with moral non-realists, such as nihilists. Of course, all the ethical theories disagree about the content of morality, but they agree that there is such a thing - it's sort of like agreeing that the moon exists, even if they don't agree what it's made of. Morality is not synonymous with "caring about the interests of others", nor does it even necessarily imply that (in the ethical-theory-neutral view I'm taking in this paragraph). Morality is what you should do, even if you think you should do something else.

As for your second-to-last paragraph (the one not in parentheses) -

Being an ethical egoist, I do think that people are irrational if they don't act in their self-interest. I agree that we can't have irrational goals, but we aren't free to set whatever goals we want - due to the nature of subjective experience and self-interest, rational self-interest is the only rational goal. What rational self-interest entails varies from person to person, but it's still the only rational goal. I can go into it more, but I think it's outside the scope of this thread.

comment by Angela · 2014-01-06T22:47:51.504Z · LW(p) · GW(p)

If some means could be found to estimate phi for various species, a variable claimed by this paper to be a measure of "intensity of sentience", it would allow the relative value of the lives of different animals to be estimated and would help solve many moral dilemmas. Intensity of suffering as a result of a particular action would be expected to be proportional to the intensity of sentience. Mammals and birds (the groups which possess a neocortex, the part of the brain where consciousness is believed to occur) can be assumed to experience suffering when doing activities that decrease their evolutionary fitness (things like natural beauty also determine pleasure and pain and are as yet poorly understood, but they are likely to be less significant in other species anyway, extrapolating from the differences in aesthetics between humans with high vs. low IQ). For AI, however, it is much harder to determine what makes it happy or whether or not it enjoys dying; for that we will need to find a simple, generalisable definition of suffering that can apply to all possible AIs, rather than our current concept, which is more of an unrigorous Wittgensteinian family resemblance.

comment by Zvi · 2013-07-29T13:20:15.183Z · LW(p) · GW(p)

Many arguments here seem to take the mindkilling form of "If we had to derive our entire system of moral value based on explicitly stated arguments, and follow those arguments ad absurdum, bad thing results."

Since bad thing is bad, and you say it is in some situation justified, clearly you are wrong, with the (reasonably explicit) accusation that if you use this line of reasoning you are (sexist! racist! in favor of killing babies! in favor of genocide! or worse, not being properly rational!)

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-29T13:38:11.096Z · LW(p) · GW(p)

That's common practice in ethics.

You need something to work with, otherwise ethical reasoning couldn't get off the ground. But it doesn't necessarily imply that people are not being properly rational (irrationality would have to be defined relative to a goal, and ethics is about goals).

Replies from: Zvi
comment by Zvi · 2013-07-29T14:17:16.573Z · LW(p) · GW(p)

One, do you believe that those five links also take a similarly mindkilling form and that mindkilling is justified because it is standard practice in ethics? If this is true, does the fact that it is standard practice justify it, and if so what determines what is and isn't justified by an appeal to standard practice?

Refuting counter-argument X by saying that if X was your full set of ethical principles you would reach repugnant conclusion Y is at its strongest an argument that X is not a complete and fully satisfactory set of ethical principles. I fail to see how it can be a strong argument that X is invalid as a subset of ethical principles, which is how it appears to have been used above.

In addition, when we use an argument of the form "X leads to some conclusion Y, where Y can be considered a subset of Z, and all Z are bad", we imply that for all such Z you can (even in theory) create an internally consistent ethical system, and that any given principle set P which under some circumstance leads to an action in some such set Z is wrong. I would claim that if you include all your examples of such Z, it is fairly easy to construct situations such that the sets Z contain all possible actions and thus rule out all ethical systems P, which would imply no such ethical systems can exist; if you well-define all your terms, I would be happy to attempt to construct such a scenario.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-29T14:53:14.679Z · LW(p) · GW(p)

Many arguments here seem to take the mindkilling form of "If we had to derive our entire system of moral value based on explicitly stated arguments, and follow those arguments ad absurdum, bad thing results."

I don't think this form of argument is mindkilling. "Bad thing" needs to refer to something the person whose position you're criticizing considers unacceptable too. You'd be working with their own intuitions and assumptions. So I'm not advocating begging the question by postulating that some things are bad tout court (that would be mindkilling indeed).

One, do you believe that those five links also take a similarly mindkilling form and that mindkilling is justified because it is standard practice in ethics?

The first one is just a description of the most common ethical methodology. The other papers I'm linking to are excellent, with the exception of the third one, which I do consider to be rather weak. But they are all papers that use the procedure I quoted from you.

I fail to see how it can be a strong argument that X is invalid as a subset of ethical principles, which is how it appears to have been used above.

This doesn't necessarily follow, but if I discover that the set of principles I endorse leads to conclusions I definitely do not endorse, then I have reason to fundamentally question some of the original principles. I could also go for modifications that leave the overall construct intact, but that usually comes with problems as well.

I'm not sure whether I understand your last paragraph. It seems like you're talking about impossibility theorems. This has indeed been done, for instance for population ethics (the second paper I linked to above). There are two ways to react to this: 1) Giving up, or 2) reconsidering which conclusions go under Z. Personally I think the second option makes more sense.

comment by MrMind · 2013-07-29T10:44:00.448Z · LW(p) · GW(p)

The claim is that there is no way to block this conclusion without:

  1. using reasoning that could analogically be used to justify racism or sexism or
  2. using reasoning that allows for hypothetical circumstances where it would be okay (or even called for) to torture babies in cases where utilitarian calculations prohibit it.

But, on the other side, there's no way to reinforce the argument to prevent it from going to the other extreme: what negates the interpretation of an amoeba retracting from a probe as "pain"? Is it just the anatomical quality of the nerves involved, or is it the computation itself that matters? In either case, the argument is doomed.
The main problem, it seems to me, is that caring as a basis for a moral argument is really not apt to be captured by a real number.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-29T13:45:58.533Z · LW(p) · GW(p)

I edited the very end of my post to account for this. I think the question whether a given organism is sentient is an empirical question, i.e. one that we can unambiguously figure out with enough knowledge and computing power. Some people do disagree with that, and in that case things would become more complicated.

comment by pianoforte611 · 2013-07-29T04:08:54.862Z · LW(p) · GW(p)

Hmm, maybe I didn't read the argument carefully enough, but it seems that the argument from marginal cases proves too much. It proves that non-US citizens should be allowed to serve in the army, that some people without medical licenses should be allowed to practice as surgeons, and many more things.

Replies from: Lukas_Gloor, wedrifid
comment by Lukas_Gloor · 2013-07-29T04:58:51.200Z · LW(p) · GW(p)

This would be mixing up the normative level with the empirical level. The argument from marginal cases seeks to establish that we have reasons against treating beings of different species differently, all else being equal. Under consequentialism, the best path of action (including motives, laws, societal norms to promote and so on) would already be specified. It would be misleading to apply the same basic moral reasoning again on the empirical level where we have institutions like the US army or the establishment of surgeons. Institutions like the US army are (for most people anyway and outside of political philosophy) not terminal values. Whether it increases overall utility if we enforce "non-discrimination" radically in all domains is an empirical question determined by the higher order goal of achieving as much utility as possible.

And whenever this is not the case (which it may well be, since there is no reason to assume that the empirical level perfectly mirrors the normative one), then "all else" is not equal. Because it might not be beneficial overall for society / in terms of your terminal values, it could be a bad idea to allow an otherwise well-qualified person without a medical license to practice as a surgeon. There might be negative side-effects of such a practice.

A practical example of this would be animal testing. If enough people were consequentialists and unbiased, we could experiment on humans and thereby accelerate scientific progress. However, if you try to do this in the real world, there is the danger that it will go wrong because people lose track of altruistic goals and replace them with other things (although this argument applies almost as much to animal testing as well), and there is a big likelihood of starting a civil war or worse if someone actually started experimenting on humans (this one doesn't). So even though experimenting on animals is intrinsically on par with experimenting on humans with similar cognitive capacities, only the former even stands a chance of increasing overall utility rather than decreasing it. Here the indirect consequences are decisive.

(Edit: In this sense, my example about men and a right to abortion was misleading, because that would of course be a legal right, where empirical factors come into play. But I was using the example to show that being against some form of discrimination doesn't mean that all differences between beings ought to be ignored.)

Replies from: pianoforte611
comment by pianoforte611 · 2013-07-29T18:43:05.934Z · LW(p) · GW(p)

Thank you for the response, I think I get the argument now.

I don't have a good answer for why we allow animal testing but not human testing. If one is fine with animal experimentation then there doesn't seem to be any way to object to engineering human babies that would have human physiology but animal level cognition and conduct tests on them. While the idea does make me uncomfortable I think I would bite that bullet.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-04T06:30:06.267Z · LW(p) · GW(p)

If one is fine with animal experimentation then there doesn't seem to be any way to object to engineering human babies that would have human physiology but animal level cognition and conduct tests on them.

The problem is that it makes the Schelling points more awkward.

comment by wedrifid · 2013-07-29T07:22:10.111Z · LW(p) · GW(p)

but it seems that the argument from marginal cases proves too much. It proves that non-US citizens should be allowed to serve in the US army,

The argument from marginal cases may well prove too much, but this strikes me as a failed counter-example. Using non-citizens as part of a military force is a reasonably standard practice. Depending on the circumstances it can be the smart thing to do. (Conscripting citizens as cannon fodder tends to promote civil unrest.)

Replies from: pianoforte611
comment by pianoforte611 · 2013-07-29T11:53:53.125Z · LW(p) · GW(p)

Sure, I shouldn't have used the US military as an example - I retract it. Trying again: the argument from marginal cases proves that some 12 year olds should be allowed to vote.

Replies from: wedrifid, MugaSofer, itaibn0, SaidAchmiz
comment by wedrifid · 2013-07-29T15:54:08.902Z · LW(p) · GW(p)

Sure, I shouldn't have used the US military as an example - I retract it. Trying again: the argument from marginal cases proves that some 12 year olds should be allowed to vote.

This slippery slope really isn't sounding all that bad...

Replies from: pianoforte611
comment by MugaSofer · 2013-07-29T15:36:41.219Z · LW(p) · GW(p)

... what makes you think that's wrong? I remember being twelve; it seems to me that basing that sort of thing on numerical age is fairly daft, albeit relatively simple.

Replies from: Lukas_Gloor, army1987
comment by Lukas_Gloor · 2013-07-29T15:43:10.790Z · LW(p) · GW(p)

Indeed, I wouldn't object to this directly. One could however argue that it is bad for indirect reasons. It would acquire huge administrative efforts to test teens for their competence at voting, and the money and resources might be better spent on education or the US army (jk). In order to save administrative costs, using a Schelling point at the age of, say, 18, makes perfect sense, even though there certainly is no magical change taking place in people's brains the night of their 18th birthday.

Replies from: DanArmak, Eugine_Nier
comment by DanArmak · 2013-07-30T21:04:22.418Z · LW(p) · GW(p)

It would acquire huge administrative efforts to test teens for their competence at voting

(You meant require, not acquire)

It would also require huge administrative efforts to test 18-year-olds for competence. So we simply don't, and let them vote anyway. It's not clear to me that letting all 12-year-olds vote is so much terribly worse. They mostly differ from adults on age-relevant issues: they would probably vote to give school children more rights.

It may or may not be somewhat worse than the status quo, but (for comparison) we don't take away the vote from all convicted criminals, or all demented people, or all people with IQ below 60... Not giving teenagers civil rights is just a historical fact, like sexism and racism. It doesn't have a moral rationale, only rationalizations.

Replies from: army1987, Jiro
comment by A1987dM (army1987) · 2013-07-31T10:18:38.789Z · LW(p) · GW(p)

It would also require huge administrative efforts to test 18-year-olds for competence. So we simply don't, and let them vote anyway. It's not clear to me that letting all 12-year-olds vote is so much terribly worse.

A randomly chosen 18-year-old is more likely than a randomly chosen 12-year-old to be ready to vote -- though I agree that age isn't necessarily the best cheap proxy for that. (What about possession of a high-school diploma?)

we don't take away the vote from ... all people with IQ below 60

Many would argue we should.

Replies from: DanArmak
comment by DanArmak · 2013-07-31T13:03:40.981Z · LW(p) · GW(p)

A randomly chosen 18-year-old is more likely than a randomly chosen 12-year-old to be ready to vote

That's the same problem under a different name. What does "ready to vote" mean?

What about possession of a high-school diploma?

That excludes some people of all ages, but it still also excludes all people younger than 16-17 or so. You get a high school diploma more for X years of attendance than for any particular exam scores. There's no way for HJPEV to get one until he's old enough to have spent enough time in a high school.

we don't take away the vote from ... all people with IQ below 60

Many would argue we should.

We should be clear on what we're trying to optimize. If it's "voting for the right people", then it would be best to restrict voting rights to a very few people who know who would be right - myself and enough friends whom I trust to introduce the necessary diversity and make sure we don't overlook anything.

If, on the other hand, it's a moral ideal of letting everyone ruled by a government give their consent to that government, then we should give the vote to anyone capable of informed consent, which surely includes people much younger than 18.

Replies from: army1987, MugaSofer
comment by A1987dM (army1987) · 2013-07-31T13:33:46.956Z · LW(p) · GW(p)

If it's "voting for the right people", then it would be best to restrict voting rights to a very few people who know who would be right - myself and enough friends whom I trust to introduce the necessary diversity and make sure we don't overlook anything.

Yes, that would probably have better results, but mine is a better Schelling point, and hence more likely to be achieved in practice, short of a coup d'état. :-)

comment by MugaSofer · 2013-07-31T14:24:51.006Z · LW(p) · GW(p)

If it's "voting for the right people", then it would be best to restrict voting rights to a very few people who know who would be right - myself and enough friends whom I trust to introduce the necessary diversity and make sure we don't overlook anything.

I think it works out better if you ignore your own political affiliations, which makes sense because mindkilling.

Replies from: DanArmak
comment by DanArmak · 2013-07-31T14:53:28.287Z · LW(p) · GW(p)

Even ignoring affiliations, if I really believe I can make better voting choices than the average vote of minority X, then optimizing purely for voting outcomes means not giving the vote to minority X. And there are in fact minorities where almost all of the majority believes this, such as, indeed, children. (I do not believe this with respect to children, but I believe that most other adults do.)

Replies from: MugaSofer
comment by MugaSofer · 2013-07-31T15:28:44.275Z · LW(p) · GW(p)

Ah, but everyone thinks they know better ... or something ... I dunno :p

Replies from: DanArmak
comment by DanArmak · 2013-07-31T15:53:14.057Z · LW(p) · GW(p)

That's just like saying "never act on your beliefs because you might be wrong".

Replies from: MugaSofer, MugaSofer
comment by MugaSofer · 2013-08-04T19:30:50.812Z · LW(p) · GW(p)

To be fair, that's truer in politics than, say, physics.

comment by MugaSofer · 2013-07-31T16:29:43.956Z · LW(p) · GW(p)

Well, you want larger margins of error when setting up a near-singleton than while using it, because if you set it up correctly then it'll hopefully catch your errors when attempting to use it. Case in point: FAI.

EDIT: If someone is downvoting this whole discussion, could they comment with the issue? Because I really have no idea why so I can't adjust my behaviour.

comment by Jiro · 2013-07-31T00:09:37.981Z · LW(p) · GW(p)

12 year olds are also highly influenced by their parents. It's easy for a parent to threaten a kid to make him vote one way, or bribe him, or just force him to stay in the house on election day if he ever lets his political views slip out. (In theory, a kid could lie in the first two scenarios, since voting is done in secret, but I would bet that a statistically significant portion of kids will be unable to lie well enough to pull it off.)

Also, 12 year olds are less mature than 18 year olds. It may be that the level of immaturity in voters you'll get from adding people ages 12-17 is just too large to be acceptable. (Exercise for the reader: why is 'well, some 18 year olds are immature anyway' not a good response?)

And taking away the vote from demented people and people with low IQ has the problem that the tests may not be perfect. Imagine a test that is slightly biased and unfairly tests black people at 5 points lower IQ. So white people get to vote down to IQ 60 but black people get to vote down to IQ 65. Even though each individual black person of IQ 65 is still pretty stupid, allowing a greater proportion of stupid people from one race than another to vote is bad.

Replies from: wedrifid, army1987, MugaSofer, DanArmak, linkhyrule5, MugaSofer, Eugine_Nier, Eugine_Nier
comment by wedrifid · 2013-07-31T03:15:15.199Z · LW(p) · GW(p)

Also, 12 year olds are less mature than 18 year olds. It may be that the level of immaturity in voters you'll get from adding people ages 12-17 is just too large to be acceptable.

"Maturity" isn't obviously a desirable thing. What people tend to describe as 'maturity' seems to be a developed ability to signal conformity and if anything is negative causal influence on the application of reasoned judgement. People learn that it is 'mature' to not ask (or even think to ask) questions about why the cherished beliefs are obviously self-contradicting nonsense, for example.

I do not expect a country that allows 12-17 year olds to vote to have worse outcomes than a country that does not. Particularly given that it would almost certainly result in more voting-relevant education being given to children and so slightly less ignorance even among adults.

Replies from: Nornagest, OnTheOtherHandle, army1987, Eugine_Nier, Jiro
comment by Nornagest · 2013-07-31T03:54:42.060Z · LW(p) · GW(p)

I might be a little more generous than that. The term casts a pretty broad net, but it also includes some factors I'd consider instrumentally advantageous, like self-control and emotional resilience.

I'm not sure how relevant those are in this context, though.

Replies from: wedrifid
comment by wedrifid · 2013-07-31T06:15:11.791Z · LW(p) · GW(p)

The term casts a pretty broad net, but it also includes some factors I'd consider instrumentally advantageous, like self-control and emotional resilience.

I certainly recommend maturity. I also note that the aforementioned signalling skill is also significantly instrumentally advantageous. I just don't expect the immaturity of younger voters to result in significantly worse voting outcomes.

comment by OnTheOtherHandle · 2013-07-31T19:47:58.228Z · LW(p) · GW(p)

"Maturity" is pretty much a stand-in for "desirable characteristics that adults usually have and children usually don't," so it's almost by definition an argument in favor of adults. But to be fair, characteristics like the willingness to sit through/read boring informational pieces in order to be a more educated voter, the ability to accurately detect deception and false promises, and the ability to use past evidence to determine what is likely to actually happen (as opposed to what people say will happen) are useful traits and are much more common in 18-year-olds than 12-year-olds.

comment by A1987dM (army1987) · 2013-07-31T09:55:22.319Z · LW(p) · GW(p)

Particularly given that it would almost certainly result in more voting-relevant education being given to children

Interesting argument, I had never thought of that. I'm still sceptical about what the quality of such voting-relevant education would be.

and so slightly less ignorance even among adults.

On timescales much longer than politicians usually think about.

comment by Eugine_Nier · 2013-08-10T04:30:10.750Z · LW(p) · GW(p)

I do not expect a country that allows 12-17 year olds to vote to have worse outcomes than a country that does not. Particularly given that it would almost certainly result in more voting-relevant education being given to children and so slightly less ignorance even among adults.

In my experience "voting-relevant education" tends to mean indoctrination, so no.

Replies from: wedrifid
comment by wedrifid · 2013-08-10T04:39:25.609Z · LW(p) · GW(p)

In my experience "voting-relevant education" tends to mean indoctrination, so no.

Or sometimes "economics" and "critical thinking".

comment by Jiro · 2013-08-09T19:55:41.703Z · LW(p) · GW(p)

I do not expect a country that allows 12-17 year olds to vote to have worse outcomes than a country that does not.

That's a trick statement, because the biggest reason that a country that allows 12-17 year olds to vote won't have worse outcomes is that the number of such people voting isn't enough to have much of an influence on the outcome at all. I don't expect a country that adds a few hundred votes chosen by throwing darts at ballots to have worse outcomes, either.

The proper question is whether you expect a country that allows them to vote to have worse outcomes to the extent that letting them vote affects the outcome at all.

Replies from: Lumifer, wedrifid
comment by Lumifer · 2013-08-09T20:11:06.395Z · LW(p) · GW(p)

the number of such people voting isn't enough to have much of an influence on the outcome at all

In the US there are about 25m 12-17-year-olds.

In the last (2012) presidential election the popular vote gap between the two candidates was 5m people.

comment by wedrifid · 2013-08-10T01:56:23.916Z · LW(p) · GW(p)

That's a trick statement

There is no trick. For it to be a trick of the kind you suggest would require that the meaning people take from it is different from the meaning I intend to convey. I do not limit the claim to "statistically insignificant worse outcomes because the 25 million people added are somehow negligible". I mean it like it sounds. I have no particular expectation that the marginal change to the system will be in the negative direction.

comment by A1987dM (army1987) · 2013-07-31T09:58:33.376Z · LW(p) · GW(p)

12 year olds are also highly influenced by their parents.

And 75-year-olds are highly influenced by their children. (And 22-year-olds are highly influenced by their friends, for that matter.)

(I'm not saying we should allow 12-year-olds to vote, but just that I don't find that particular argument convincing.)

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2013-07-31T19:40:46.120Z · LW(p) · GW(p)

I don't find arguments against letting children vote very convincing either, except the argument that 18 is a defensible Schelling point and it would become way too vulnerable to abuse if we changed it to a more complicated criterion like "anyone who can give informed consent, as measured by X." After all, if we accept the argument that 12-17 year olds should vote (and I'm not saying it's a bad argument), then the simplest and most effective way to enforce that is to draw another arbitrary line based on age, at some lower age. Anything more complex would again be politicized and gamed.

But I think you're misrepresenting the "influenced by parents" argument. 22-year-olds are influenced by their friends, yes, but they influence their friends to roughly the same degree. Their friends do not have total power over their life, from basic survival to sources of information. A physical/emotional threat from a friend is a lot less credible than a threat from your parents, especially considering most people have more than one circle of friends. The same goes for the 75-year-old - they may be frail and physically dependent on their children, but society doesn't condone a live-in grandparent being bossed around and controlled the way a live-in child is, so that is not as big a concern.

Replies from: army1987
comment by A1987dM (army1987) · 2013-08-01T12:43:19.401Z · LW(p) · GW(p)

The same goes for the 75-year-old - they may be frail and physically dependent on their children, but society doesn't condone a live-in grandparent being bossed around and controlled the way a live-in child is

Indeed, we outsource the job to nursing homes instead.

comment by MugaSofer · 2013-07-31T15:08:44.296Z · LW(p) · GW(p)

And taking away the vote from demented people and people with low IQ has the problem that the tests may not be perfect. Imagine a test that is slightly biased and unfairly tests black people at 5 points lower IQ. So white people get to vote down to IQ 60 but black people get to vote down to IQ 65. Even though each individual black person of IQ 65 is still pretty stupid, allowing a greater proportion of stupid people from one race than another to vote is bad.

You know, I can think of a worse test than that ... eh, I'm not even going to bother working out a complex "age test" metaphor, I'm just gonna say it: age is a worse criterion than that test.

Replies from: Jiro
comment by Jiro · 2013-08-01T00:41:47.435Z · LW(p) · GW(p)

You might be able to argue that since people of different races don't live to the exact same age, an age test is still biased, but I'd like to see some calculations to show just how bad it is. Also, even though an age test may be racially biased, there aren't really better and worse age tests--it's easy to get (either by negligence or by malice) an IQ test which is biased by multiple times the amount of a similar but better IQ test, but pretty much impossible to get that for age.

There's also the historical record to consider. It's particularly bad for IQ tests.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-04T14:18:51.047Z · LW(p) · GW(p)

No, sorry, I mean it's worse overall, not worse because racist.

Replies from: Jiro
comment by Jiro · 2013-08-05T20:49:54.721Z · LW(p) · GW(p)

It's not hard to come up with a scenario where having all voters be incompetents who choose the candidate at random is better for the population at large than just holding a racially biased election.

For instance, consider 100 people, 90 white and 10 black; candidate A is best for 46 whites and 0 blacks while candidate B is best for 44 whites and 10 blacks. For the population as a whole, B is the best and A is the worst. If the blacks are excluded from the franchise and the whites vote their own interests, the worst candidate (A) is always elected, while if everyone is incompetent and votes at random, there's only a 50% chance of the worst candidate being elected.
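For what it's worth, here's a minimal sketch in Python that checks this arithmetic; it uses only the illustrative numbers above (they are hypothetical, not data from anywhere), and the function names are just labels for the two voting regimes being compared.

```python
import random

# Hypothetical scenario from above: 100 voters, 90 white and 10 black.
# Candidate A is best for 46 whites and 0 blacks; candidate B is best for
# 44 whites and 10 blacks, so B is best for the population as a whole.

def biased_election():
    """Only the 90 whites vote, each for whichever candidate is best for them."""
    votes_a, votes_b = 46, 44          # blacks are excluded from the franchise
    return "A" if votes_a > votes_b else "B"

def random_election(n_voters=100):
    """Everyone votes, but incompetently: each ballot is a fair coin flip."""
    votes_a = sum(random.random() < 0.5 for _ in range(n_voters))
    votes_b = n_voters - votes_a
    if votes_a == votes_b:             # break ties with a coin flip
        return random.choice("AB")
    return "A" if votes_a > votes_b else "B"

trials = 100_000
p_biased = sum(biased_election() == "A" for _ in range(trials)) / trials
p_random = sum(random_election() == "A" for _ in range(trials)) / trials
print(f"P(worst candidate A wins | whites-only election): {p_biased:.2f}")  # 1.00
print(f"P(worst candidate A wins | random voting):        {p_random:.2f}")  # ~0.50
```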

Replies from: MugaSofer, Lumifer
comment by MugaSofer · 2013-08-06T14:27:49.026Z · LW(p) · GW(p)

You realize there's more to politics than race, right?

That said, you would definitely have to be careful to ensure the test was as good as possible.

Replies from: Jiro
comment by Jiro · 2013-08-08T14:50:42.683Z · LW(p) · GW(p)

Although there's more to politics than race, race is an important part of it, and we're obligated to treat other people fairly with respect to race. The argument that it doesn't matter how racially biased a test is because it's good in other ways isn't something I am inclined to accept.

Replies from: MugaSofer, Lumifer, Eugine_Nier
comment by MugaSofer · 2013-08-18T19:56:33.000Z · LW(p) · GW(p)

The argument that it doesn't matter how racially biased a test is because it's good in other ways isn't something I am inclined to accept.

I assume this is hyperbole, since obviously a truly perfect test could draw from any subset of the population, as long as it was large enough to contain near-perfect individuals.

With that said, I agree, we should attempt to avoid any bias in such a test, including that of race (I would not, however, single this possibility out.) That is what I meant by

That said, you would definitely have to be careful to ensure the test was as good as possible.

However, beyond a certain level of conscientiousness, demanding perfectly unbiased tests becomes counterproductive; especially when one focuses on one possible bias to the exclusion of others. In truth, even age is a racially biased criterion.

comment by Lumifer · 2013-08-09T15:44:39.484Z · LW(p) · GW(p)

how racially biased a test is

Do you define racial bias by how the test works or by which outcomes it produces?

Replies from: Jiro
comment by Jiro · 2013-08-09T17:37:29.334Z · LW(p) · GW(p)

In context, MugaSofer had claimed that if a test that allows people to vote based on IQ scores black people of equal intelligence 5 points lower, that's okay because an age test is worse than that. I was, therefore, referring to that kind of bias. I'm not sure whether you would call "gives a number 5 points lower for black people of equal intelligence" 'how the test works' or 'which outcomes it produces'.

Replies from: Lumifer
comment by Lumifer · 2013-08-09T17:51:06.522Z · LW(p) · GW(p)

In this context, MugaSofer's test is clearly "how it works" because the test explicitly looks at the color of skin and subtracts 5 from the score if the skin is dark enough.

On the other hand, "which outcomes it produces" is the more or less standard racial bias test applied by government agencies to all kinds of businesses and organizations.

Replies from: Jiro
comment by Jiro · 2013-08-09T19:37:27.651Z · LW(p) · GW(p)

I didn't describe a test which looks at the color of skin and subtracts 5; I described a test which produces results 5 points lower for people with a certain color of skin. Whether it does that by looking at the color of skin explicitly, or by being an imperfect measure of intelligence where the imperfection is correlated to skin color, I didn't specify, and I was in fact thinking of the latter case.

Replies from: Lumifer
comment by Lumifer · 2013-08-09T19:58:01.190Z · LW(p) · GW(p)

These are two rather different things. I am not sure how the latter case works -- if the test is blinded to the skin color but you believe it discriminates against blacks, (1) How do you know the "true" IQ which the test understates; and (2) what is it, then, that the test picks up as a proxy or correlate to the skin color?

Standard IQ tests show dependency on race -- generally the mean IQ of blacks is about one standard deviation below the mean IQ of whites.

Replies from: AndHisHorse
comment by AndHisHorse · 2013-08-09T20:21:37.642Z · LW(p) · GW(p)

In my experience, if someone is claiming that a test is racially biased, they are claiming that properly understanding the question requires cultural context which is more or less common in one race than another.

An example I found here is a multiple-choice question which asks the student to select the pair of words with a relationship similar to the relationship between a runner and a marathon. The correct answer there was "oarsman" and "regatta". Clearly, there was a cultural context required to correctly answer this question; examining the correlations between socioeconomic status and race, I would expect to find that the cultural context is more common among rich caucasians.

Replies from: Lumifer, SaidAchmiz
comment by Lumifer · 2013-08-09T20:32:38.886Z · LW(p) · GW(p)

In my experience, if someone is claiming that a test is racially biased, they are claiming that properly understanding the question requires cultural context which is more or less common in one race than another.

In my experience if someone is claiming that a test is racially biased, they just don't like the test results. Not always, of course, but often enough.

is more common among rich caucasians

Then the fact that East Asian people show mean IQ noticeably higher than that of caucasians would be a bit inconvenient, wouldn't it?

Replies from: AndHisHorse
comment by AndHisHorse · 2013-08-10T22:24:29.121Z · LW(p) · GW(p)

I'd like to quote you twice:

In my experience if someone is claiming that a test is racially biased, they just don't like the test results. Not always, of course, but often enough.

and

Steelman this.

What exactly do you mean by "often enough"? Do you mean to say that there is such a large number of false positives in claims of racial bias that none of them should be investigated? I am confused by your dismissal of this phenomenon.

Regarding the fact that East Asians tend to score higher than Caucasians on IQ tests (I am familiar with this difference in the US; I do not know if it applies to comparison between East Asian and majority-Caucasian countries), I would attribute it to culture and self-selection.

In the case of the United States, it is my understanding that immigration from Europe dominated immigration to the US during the Industrial Revolution - when the US was looking for, and presumably attracting, manual laborers - while recently, immigrants from Asia have made up a far larger share of the total immigrants to the US. I would guess that relative to European-Americans*, Asian-Americans' immigrant ancestors are more likely to have self-selected for the ability to compete in an intelligence-based trade. This selection bias, propagating through to descendants (intelligent people tend to have intelligent children), would seem to at least partially explain why Asian-Americans score higher.

I do not have any information on Caucasians in their ancestral homelands vs. East Asians in their ancestral homelands.

*Based on recollection of stories told to me and verified only by a quick check online, so if others could chime in with supporting/opposing evidence, that would be appreciated.

Replies from: Lumifer
comment by Lumifer · 2013-08-11T01:58:19.809Z · LW(p) · GW(p)

What exactly do you mean by "often enough"?

I mean that a large number of different studies over several decades using different methodologies in various countries came up with the same results: the average IQ of people belonging to different gene pools (some of which match the usual idea of race and some do not) is not the same.

That finding happens to be ideologically or morally unacceptable to a large number of people. Normally they just ignore it, but when they have to confront it the typical reaction -- one that happens "often enough" -- is denial: the test is racially biased and so invalid. Example: you.

Do you mean to say that there is such a large number of false positives in claims of racial bias that none of them should be investigated?

I do not believe I have said anything even remotely resembling this.

I am familiar with this difference in the US; I do not know if it applies to comparison between East Asian and majority-Caucasian countries

Yes, it does apply.

I would attribute it to culture and self-selection

Before you commit to defending a position, it's useful to do a quick check to see whether it's defensible. You think no one ran any IQ studies in China?

Replies from: AndHisHorse
comment by AndHisHorse · 2013-08-11T02:13:53.431Z · LW(p) · GW(p)

Thank you for clarifying your points. I mistakenly interpreted "often enough" as indicating some threshold of frequency of false positives beyond which it would not be appropriate to take the problem seriously. I apologize for arguing a straw man.

I was considering mostly the difference among people of different races in the United States, as I assumed that would minimize the effects of cultural difference (though not eliminate it) on the intelligence of the participants and their test results. I would anticipate that cultural influences do affect a person's intelligence - the hypothetical quality which we imperfectly measure, not the impact that quality leaves on a test - as it can motivate certain avenues of self-improvement through its values, or simply allow access to different resources.

I am not surprised that there are IQ differences among racial groups. In fact, I would be shocked to learn that every culture and every natural environment and every historical happening in the entirety of human civilization happened to produce the exact same level of average intelligence. I would be surprised, but not shocked, to learn that there existed a strong, direct causation between race (as a genetic difference rather than a social phenomenon) and intelligence.

I did not mean to imply that because a test outputs different results for different racial groups, that it must be biased. I merely meant to say that bias can exist, though I am not certain whether or not it does, or to what degree. All in all, I seem to have made rather a fool of myself, jumping at shadows, and for that I am sorry.

comment by Said Achmiz (SaidAchmiz) · 2013-08-09T20:53:27.727Z · LW(p) · GW(p)

An example I found here is a multiple-choice question which asks the student to select the pair of words with a relationship similar to the relationship between a runner and a marathon. The correct answer there was "oarsman" and "regatta". Clearly, there was a cultural context required to correctly answer this question; examining the correlations between socioeconomic status and race, I would expect to find that the cultural context is more common among rich caucasians.

I've never seen any question resembling this on any IQ test I've ever taken. Have you? (Note that your link refers to the SAT I, which is not an IQ test.)

Is anyone claiming that the WAIS, for instance, is culturally biased in a similar way?

comment by Eugine_Nier · 2013-08-09T05:54:47.729Z · LW(p) · GW(p)

What's your counter-argument?

Replies from: Jiro
comment by Jiro · 2013-08-09T14:26:01.540Z · LW(p) · GW(p)

It's not an argument, it's a premise.

Feel free to propose that in fact it doesn't matter how racially biased a test is because it's good in other ways. I don't know how many people will agree with you, though.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-10T04:47:43.363Z · LW(p) · GW(p)

You said you weren't willing to accept the argument. Do you have any better reason than "I don't feel like it"?

Replies from: Jiro
comment by Jiro · 2013-08-10T12:32:07.769Z · LW(p) · GW(p)

Wasn't willing to accept what argument?

He claimed that a test that is bad overall is worse than a racially biased test. That might be a nontrivial argument if he could show that it is worse by some fairly universal criterion. I pointed out that he can't show this, because I can come up with a scenario where the racially biased test is clearly worse than the overall bad test.

His reply to that was "there is more to politics than race". In context (rather than by taking the literal words), he's telling me that I shouldn't emphasize race so much when talking politics. His argument for that? Um... none, really. There's no argument to respond to or accept. All I can do is say "no, I don't accept that premise. I think my emphasis on race is appropriate".

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-11T22:33:44.155Z · LW(p) · GW(p)

Wasn't willing to accept what argument?

Why is bias on the test that happens to correlate with race worse than any other bias?

Replies from: Jiro
comment by Jiro · 2013-08-11T23:58:20.135Z · LW(p) · GW(p)

I don't see any argument in that.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-18T21:05:12.589Z · LW(p) · GW(p)

If I may jump in here ... Eugene seems to be asking if you consider non-racism inherently, terminally important or purely instrumental in the great war against sucky tests.

You seem to be agreeing that yes, racism really is more important than, say, conservative bias.

I'm not certain if you actually believe that ... I would guess you do ... but you seemed somewhat confused by the question, so I thought I'd ask.

comment by Lumifer · 2013-08-09T15:43:37.422Z · LW(p) · GW(p)

That's not an argument about race, that's a generic argument about excluding any kind of people from an election -- kids, mentally ill, felons, immigrants, etc.

Replies from: Jiro
comment by Jiro · 2013-08-09T17:46:22.787Z · LW(p) · GW(p)

It's not an argument at all in that sense; it's a counter-argument to the claim that it doesn't matter if a test is racist since the alternative is "worse overall". I was pointing out that having a test be racist can be equivalent to being worse overall.

It also assumes that people will vote their own interests. Kids and the mentally ill presumably will not, so it doesn't apply to them. And it assumes we care about benefiting them (and therefore that we care when a candidate is worse for the whole population including them); in the case of immigrants and possibly felons, we don't.

comment by DanArmak · 2013-07-31T10:22:27.356Z · LW(p) · GW(p)

I'd like to add this to the other posters' responses:

Also, 12 year olds are less mature than 18 year olds.

Please taboo "immaturity" for me. After all, if taken literally it just means "not the same as mature, adult people". But the whole point of letting a minority vote is that they will not vote the same way as the majority.

And taking away the vote from demented people and people with low IQ has the problem that the tests may not be perfect.

How is this different from saying that no test of 12-year-olds for "maturity" is perfect and therefore we do not give the vote to any 12 year olds at all?

Replies from: Jiro
comment by Jiro · 2013-07-31T22:48:20.215Z · LW(p) · GW(p)

How is this different from saying that no test of 12-year-olds for "maturity" is perfect and therefore we do not give the vote to any 12 year olds at all?

It isn't all that different, but all that that proves is that we shouldn't decide who votes based on maturity tests any more than we should on IQ tests.

comment by linkhyrule5 · 2013-07-31T01:06:39.230Z · LW(p) · GW(p)

"Well, some 18 year olds are immature anyway" is not a good response, but "show me your data that places 12-17 yo people significantly more immature then the rest of humanity, and taboo "immaturity" while you're at it" is.

The first two, sadly, do make more sense, but then emancipation should become a qualification to vote.

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2013-07-31T19:53:01.244Z · LW(p) · GW(p)

One thing that hasn't been mentioned yet is that pure experience - just raw data in your long-term memory - is a plausible criterion for a good voter. It's not that intelligence and rationality are unimportant, since rational, intelligent people may well draw more accurate conclusions from a smaller amount of data.

What does matter is that everyone, no matter how intelligent or unintelligent, would be better off if they have a few elections and a few media scandals and a few internet flame wars and a few nationally significant policy debates stored in their long-term memory. Even HJPEV needs something to go on. The argument is not just that 18-year-olds as a group are better voters than 12-year-olds as a group, but that any given 12-year-old would be a better voter in 6 years, even if they're already pretty good.

Replies from: DanArmak
comment by DanArmak · 2013-08-01T07:52:29.662Z · LW(p) · GW(p)

The argument is not just that 18-year-olds as a group are better voters than 12-year-olds as a group, but that any given 12-year-old would be a better voter in 6 years, even if they're already pretty good.

By the same argument, they'd be even better voters 10 years later. Why not give the vote at 30 years of age, say?

Replies from: OnTheOtherHandle, Eugine_Nier
comment by OnTheOtherHandle · 2013-08-01T23:19:31.002Z · LW(p) · GW(p)

Because any experience requirement draws an arbitrary line somewhere, and 18 is a useful line because it's also the arbitrary line society has drawn for a lot of other milestones, like moving out of the house and finishing high school. Voting goes hand-in-hand with the transition out of mandatory formal education and the start of a new "adult life." I think it makes sense that the voting age should be set to whatever age formal education ends and most people move out, but what age those things should happen at is again debatable.

Replies from: DanArmak
comment by DanArmak · 2013-08-02T11:10:26.931Z · LW(p) · GW(p)

One reason why those lines are drawn together is that, if the voting age were much lower than the other lines, then young people would vote the other lines lower too: legal emancipation from their parents, legal rights to have sex and to work, and the end of mandatory legally-enforced schooling.

People are unwilling to give the vote to 12 year olds because they're afraid that they'll vote for giving all other rights to 12 year olds as well. And most people would rather keep teenagers without rights.

ETA: on consideration I changed my opinion, see below. I now think it's unlikely that 12 to 18 year olds would be a large and monolithic enough voting bloc to literally vote themselves more rights.

Replies from: OnTheOtherHandle, Jiro
comment by OnTheOtherHandle · 2013-08-11T07:06:26.074Z · LW(p) · GW(p)

There's actually a gradualist solution that never occurred to me before, and probably wouldn't destroy the Schelling point. It may or may not work, but why not treat voting like driving, and dispense the rights piecemeal?

Say when you enter high school you get the option to vote for school board elections, provided you attend a school board meeting first and read the candidate bios. Then maybe a year later you can vote for mayor if you choose to attend a city council meeting. A year after that, representatives, and then senators, and perhaps each milestone could come with an associated requirement like shadowing an aide or something.

The key to these prerequisites, IMO, is that they cannot involve passing any test designed by anyone - they must simply involve experience. Reading something, going somewhere - no one is evaluating you to see if you gained the "right" opinions from that experience.

When they're 18 they get full voting rights. Those people who chose not to go through this "voter training" process also get full voting rights at 18, no questions asked - kind of like how getting a driver's license at 16 is a longer process than getting one at 18 starting from the same driving experience.

This way, only the most motivated teens would get voting rights early, and everyone else would get them guaranteed at 18. There is likely potential for abuse that I may not have considered, but I believe with this system any prejudices or biases introduced in teens would be local, rather than the potentially national-scale abuses possible with standardized voter-testing.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-14T07:55:11.256Z · LW(p) · GW(p)

We already let 12 year olds vote for student council. The results are not encouraging.

Replies from: wedrifid, OnTheOtherHandle
comment by wedrifid · 2013-08-14T09:17:35.351Z · LW(p) · GW(p)

We already let 12 year olds vote for student council. The results are not encouraging.

We let adults vote in federal elections. I'm not especially impressed with those results either.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-15T00:48:44.450Z · LW(p) · GW(p)

We let adults vote in federal elections. I'm not especially impressed with those results either.

Compared to what?

Replies from: wedrifid
comment by wedrifid · 2013-08-15T01:53:47.949Z · LW(p) · GW(p)

Compared to what?

Compared to, since you ask, the members of the student council that I elected when I was 12. Maybe you have had worse experiences than I have with elected student council representatives (my country has a different school culture, and my grade happened to be one of the best to go through my school). Or perhaps you have more respect for your current elected representatives. But in my personal experience the difference between national elections and school council elections is largely that the former has a larger body of sociopaths to select from, so it has stronger selection effects in that direction.

More generally the comparison I make is similar to Churchill's:

"Democracy is the worst form of government, except for all those other forms that have been tried from time to time."

comment by OnTheOtherHandle · 2013-08-14T08:11:05.991Z · LW(p) · GW(p)

But they don't need to be. The point of starting off very small is that the damage they can do is proportionally small. When we let teens learn to drive, we expect them to be significantly worse than the average driver, and they are, but they have to start at some point.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-15T00:49:27.019Z · LW(p) · GW(p)

The upside of letting teens drive is that it's easier for them to get from place to place, whereas expanding the vote is purely zero-sum.

comment by Jiro · 2013-08-02T14:49:28.578Z · LW(p) · GW(p)

There aren't enough 12 year olds who would vote for them to be able to vote in things which adults nearly universally disagree with.

Also, people under 18 are already permitted to have sex (though not necessarily with people who are much older).

Replies from: DanArmak
comment by DanArmak · 2013-08-02T19:36:34.311Z · LW(p) · GW(p)

There aren't enough 12 year olds who would vote for them to be able to vote in things which adults nearly universally disagree with.

That's true. Although, if they formed a voting bloc, it would be a significant one. But that's not the real reason why people don't want teenagers to vote.

I think it's more of a feeling of what it means to be a full citizen with voting rights. People wouldn't want to make teenagers into an oppressed minority that was denied full rights because it kept getting outvoted; it would feel unpleasant, scary and antagonistic.

Also, people under 18 are already permitted to have sex

That varies a lot between countries. Very few places have an age of consent as low as 12-14 (puberty).

I also would like to note that it would be odd to apply a phrase like permitted to have sex to someone who was otherwise a full, voting citizen.

Replies from: Lumifer, Jiro
comment by Lumifer · 2013-08-02T19:58:53.375Z · LW(p) · GW(p)

it would be odd to apply a phrase like permitted to have sex to someone who was otherwise a full, voting citizen.

How about applying a phrase permitted to have a beer to someone who is a full, voting citizen?

Replies from: Jiro, OnTheOtherHandle
comment by Jiro · 2013-08-02T20:03:41.513Z · LW(p) · GW(p)

I won't argue for the 21 year drinking age. For one thing, it was passed by federal governmental overreach (taking money from the states and not giving it back unless they passed a drinking age law).

comment by OnTheOtherHandle · 2013-08-07T07:02:12.365Z · LW(p) · GW(p)

The supposed reason for the 21 year old drinking age is that the prefrontal cortex, which is in charge of impulse control, doesn't fully mature until the early twenties, and therefore alcohol use before 21 would a) result in more mishaps like car accidents than alcohol use after 21, and b) harm brain development during a critical period. Which would be perfectly sound reasoning if it applied to voting, military service, cigarettes, lottery tickets, etc. If alcohol use is too risky because of an underdeveloped prefrontal cortex, then surely voting is too? But if you raised the voting age to 21 you'd have to raise the draft age, too, because it would be barbaric to send people off to die without even a nominal say in the decision to go to war. It's far more practical to lower the drinking age.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-07T07:57:46.982Z · LW(p) · GW(p)

Which would be perfectly sound reasoning if it applied to voting, military service, cigarettes, lottery tickets, etc. If alcohol use is too risky because of an underdeveloped prefrontal cortex, then surely voting is too?

Well for one thing alcohol's effect is to further impair the prefrontal cortex.

But if you raised the voting age to 21 you'd have to raise the draft age, too, because it would be barbaric to send people off to die without even a nominal say in the decision to go to war.

Taboo "barbaric".

comment by Jiro · 2013-08-02T20:00:11.580Z · LW(p) · GW(p)

But that's not the real reason why people don't want teenagers to vote.

Combined with your previous statement, that means that adults don't want teenagers to vote because they would vote for other rights, but not out of fear they would actually get them. Which is decidedly odd.

Here's something else to consider: perhaps adults think teenagers shouldn't get those rights for a reason. Furthermore, perhaps most teenagers can't comprehend that reason.

Of course, in making such a statement I need to avoid poisoning the well (I don't want to say that any teenager who disagrees is ipso facto unable to comprehend), but even then, I think it's pretty close to the truth.

(And a smart teenager is likely to think 'I am smart enough and competent enough at making decisions to vote. But I know what a lot of other people my age are like, and they're certainly not like that. I would overall be better off if I couldn't vote as long as it kept them from voting.')

Replies from: None, DanArmak, SaidAchmiz, MugaSofer
comment by [deleted] · 2013-08-02T20:13:26.512Z · LW(p) · GW(p)

(And a smart teenager is likely to think 'I am smart enough and competent enough at making decisions to vote. But I know what a lot of other people my age are like, and they're certainly not like that. I would overall be better off if I couldn't vote as long as it kept them from voting.')

What happens if we extend that reasoning to most adults as well? Is there some reason that most people become magically competent at 18? Perhaps things would be even better if voting were restricted further to some competent class of people?

(Of course that's politically impossible, but it's an interesting thought experiment)

Replies from: Nornagest, Jiro, shminux, Eugine_Nier
comment by Nornagest · 2013-08-02T21:02:48.620Z · LW(p) · GW(p)

Historically, the usual problem with that is that empowering competent people to make political decisions also empowers them to decide the meaning of "competent", and the meritocracy slowly turns into an aristocracy. The purest example I can think of offhand is the civil service examinations gating bureaucratic positions in imperial China, although that wasn't a democratic system.

comment by Jiro · 2013-08-02T21:03:24.235Z · LW(p) · GW(p)

Is there some reason that most people become magically competent at 18?

If you're asking what the difference is between 18 - 1 day and 18, then that's already been answered: whenever we need to make a distinction based on a trait that gradually changes, we're going to have to set up some arbitrary boundary where the examples on one side are not very different from the examples on the other. The fact that the two sides are not very different is not a reason not to set the boundary.

Perhaps things would be even better if voting were restricted further to some competent class of people?

In most cases, we have no way to determine who is in such a class of people that is not susceptible to gaming the system, abuse, and/or incompetent and reckless testing. It's pretty hard to screw up figuring out what someone's age is.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-04T15:40:47.430Z · LW(p) · GW(p)

In most cases, we have no way to determine who is in such a class of people

So why do we treat age as if it functions as one? Genuinely asking.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-06T01:36:02.221Z · LW(p) · GW(p)

Because it's a proxy that deals with the problems I mentioned here much better than attempting to measure competence directly.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-06T13:27:47.876Z · LW(p) · GW(p)

So, to be clear, you're not saying that there's no test of competency, but that age is the best test of competency we have?

I guess we're starting to run into the limits of theorizing in the absence of experimentation

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-07T00:36:55.639Z · LW(p) · GW(p)

So you agree that in the absence of other tests having an age-based cutoff at 18 is better than having no cutoff or a lower cutoff?

So, to be clear, you're not saying that there's no test of competency, but that age is the best test of competency we have?

If you have a proposal that deals with the problems I've mentioned here and here, I'm willing to consider it.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-18T20:29:32.138Z · LW(p) · GW(p)

So you agree that in the absence of other tests having an age-based cutoff at 18 is better than having no cutoff or a lower cutoff?

Not really, but in the absence of spare countries to run controlled trials on...

comment by Shmi (shminux) · 2013-08-02T20:25:01.031Z · LW(p) · GW(p)

Of course that's politically impossible, but it's an interesting thought experiment

Is my sarcasm detector broken or something? This experiment has been performed many times in many places, rich white males usually being the prime example of "some competent class".

Actually, I find Heinlein's idea in Starship Troopers intriguing, where only ex-military are given citizenship.

Replies from: None
comment by [deleted] · 2013-08-02T20:35:29.503Z · LW(p) · GW(p)

Excuse me; politically impossible within the current political climate.

If you know of some way to restrict voting to some more competent reference class than "adults", please do so.

History is against you, though: the power-holding reference class has been expanding rather than contracting (non-landholders, women's suffrage, civil rights, etc.).

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2013-08-07T07:08:28.365Z · LW(p) · GW(p)

One relatively simple (but also easily gameable) criterion is education and/or intelligence. Only 18-year-olds with a high school/college/postgraduate degree, only 18-year-olds with an IQ score/SAT score >= X, etc. We don't want to try that because we know how quickly the tests and measurements would be twisted with ideology, and we worry that we would end up systematically discriminating against a class of people based on some hidden criterion other than intelligence/education, such as political views.

Replies from: Peterdjones
comment by Peterdjones · 2013-08-18T20:55:43.574Z · LW(p) · GW(p)

And to be fair, you'd have to give ten or a hundred votes to people with PhDs in political science.

comment by Eugine_Nier · 2013-08-04T06:58:05.981Z · LW(p) · GW(p)

The problems with restricting the vote by some criterion of competence are:

1) The criterion will become subject to Goodhart's law; this can be mitigated by using straightforward criteria, e.g. age.

2) The people meeting the criterion will act in ways that are in their interest but not in the interest of the people who do not fit the criteria; this is less of a problem with age because children already have adults, namely their parents, who have an interest in their children's well-being.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-04T14:42:02.598Z · LW(p) · GW(p)

The people meeting the criterion will act in ways that are in their interest but not in the interest of the people who do not fit the criteria; this is less of a problem with age because children already have adults, namely their parents, who have an interest in their children's well-being.

That seems like a really serious problem. How much better off would children be if they were a special interest group and not their parents?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-06T01:32:57.026Z · LW(p) · GW(p)

How much better off would children be if they were a special interest group and not their parents?

Probably a lot worse since they generally don't have the experience to know what policies are actually in their interest.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-06T14:37:27.061Z · LW(p) · GW(p)

Well, at a certain point they're just pushing buttons at random ... but assuming a degree of filtering*, I would expect them to have at the very least a net positive effect. Although I suppose it's possible (probable?) you have a lower opinion of children than me.

Come to think of it, even the interface could be enough to ensure this.

*Possibilities:

  • Not allowed to vote until they decide they want to.
  • Not allowed to vote until their parents say so.
  • Not allowed to vote unless they convince a panel of experts, judges, or random people off the street.
  • Not allowed to vote until they take a simple course on how to vote.
  • Not allowed to vote until a certain age (significantly lower than 18.)
  • Not allowed to vote unless passed by a qualified professional (Doctor? Psychiatrist?)
  • Not allowed to vote until they pass an exam (Politics? General knowledge? IQ? English?)

Most of these can also be combined in various ways, of course.

Replies from: TimS, Eugine_Nier
comment by TimS · 2013-08-07T02:03:32.157Z · LW(p) · GW(p)

Many of your proposed filters do not really address Eugine_Nier's point about Goodhart's law.

If there is any structural bias in the first generation of vote filters, there are many reasons to be concerned that those who do not like the measure will not be sufficiently powerful to cause changes to the vote filters going forward.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-18T20:00:23.265Z · LW(p) · GW(p)

Wait, I thought Goodhart's law was the one about "teaching to the test"?

Yeah, not all of those are equally good. I suspect they may all be better than the current criteria, but don't hold me to that.

comment by Eugine_Nier · 2013-08-07T00:55:24.667Z · LW(p) · GW(p)

I would expect them to have at the very least a net positive effect.

Evidence?

The way student council elections tend to play out is not encouraging to your case.

Also the problem with most of your proposed tests is that in practice they're likely to degenerate into the test writer or administrator attempting to determine how they'd vote.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-18T20:09:04.333Z · LW(p) · GW(p)

The way student council elections tend to play out is not encouraging to your case.

Having considered this further, I would no longer endorse that statement. Rather, I would expect a net positive effect relative to other types of voter.

Also the problem with most of your proposed tests is that in practice they're likely to degenerate into the test writer or administrator attempting to determine how they'd vote.

While this is a problem - and one that, I suspect, rests on trying to control future government's decisions - I'm going to go through the different ideas there, just for fun.

Not allowed to vote until they decide they want to.

Obviously, does not apply.

Not allowed to vote until their parents say so.

Almost certainly applies, but then, if you trust democracy anyway...

Not allowed to vote unless they convince a panel of experts, judges, or random people off the street.

Applies, barring certain safeguards, or the "random people" option if you like democracy and juries.

Not allowed to vote until they take a simple course on how to vote.

Technically applies, but we already have schools, so...

Not allowed to vote until a certain age (significantly lower than 18.)

Probably doesn't apply ... I guess someone who gets disproportionately old or young votes might try to change the age limit, for that reason.

Not allowed to vote unless passed by a qualified professional (Doctor? Psychiatrist?)

Depends on how much you trust doctors.

Not allowed to vote until they pass an exam (Politics? General knowledge? IQ? English?)

Since this was the original and "default" proposal, obviously, this applies. Although it might be hard to sneak into an English exam.

Replies from: Lumifer, Eugine_Nier
comment by Lumifer · 2013-08-20T01:45:13.625Z · LW(p) · GW(p)

Depends on how much you trust doctors.

Don't forget that it's a piece of paper issued by the state that makes you a doctor as opposed to someone illegally practicing medicine.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-24T12:40:03.900Z · LW(p) · GW(p)

I think there would be knock-on effects from deliberately allowing incompetent doctors to qualify in order to indirectly mess with their ability to competently assess voters.

That's not to say it might not be tried, I suppose ...

comment by Eugine_Nier · 2013-08-20T01:14:38.834Z · LW(p) · GW(p)

Not allowed to vote until they take a simple course on how to vote.

Technically applies, but we already have schools, so...

Yes, I've attended one of those schools. The social science curriculum included some extremely blatant propaganda.

Although it might be hard to sneak into an English exam.

Not really.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-24T13:06:15.468Z · LW(p) · GW(p)

Yes, I've attended one of those schools. The social science curriculum included some extremely blatant propaganda.

True. This is not usually considered a good argument against voting or schools, although perhaps it should be.

Although it might be hard to sneak into an English exam.

Not really.

"Explain, in your own words, why The Party is a glorious protector of our freedoms ..."

Just kidding. I suppose you could grade answers that agree with you higher - "why (or why not)" questions where you're expected to go for "why", that sort of thing. And biased correctors would find biased answers more persuasive, I guess ... it would be a lot harder than sneaking bias into a social science exam, though. (And that's a stupid idea anyway :p)

comment by DanArmak · 2013-08-02T21:41:02.154Z · LW(p) · GW(p)

Combined with your previous statement, that means that adults don't want teenagers to vote because they would vote for other rights, but not out of fear they would actually get them. Which is decidedly odd.

I've thought about it, and I think this is more correct, and I was wrong before. (Or perhaps both reasons are correct, but this one is much stronger).

Historically, minorities who had voting rights but were otherwise legally discriminated against - blacks, gays, etc. - didn't abolish that discrimination by simply voting themselves more rights. They had to fight for those rights by much more direct means.

Acquiring the vote is usually, historically, a relatively early step in the enfranchisement of a minority, and it doesn't help directly in acquiring other rights. (I may be wrong about this; I'm not an expert.) When adults imagine the scenario of teenagers gaining the vote, it pattern-matches the narrative of an oppressed minority more or less violently fighting to gain other legal rights. Adults don't want to give teenagers the vote because it would acknowledge them as an adversary, an independent force. It would give them some (perhaps symbolic) power and simultaneously frame them as opponents.

comment by Said Achmiz (SaidAchmiz) · 2013-08-02T20:05:32.597Z · LW(p) · GW(p)

(And a smart teenager is likely to think 'I am smart enough and competent enough at making decisions to vote. But I know what a lot of other people my age are like, and they're certainly not like that. I would overall be better off if I couldn't vote as long as it kept them from voting.')

To go further in this vein: the smart teenager might realize that if only smart teenagers were allowed to vote, they would never have the numbers to influence the political state of affairs in their preferred direction; but if all teenagers were allowed to vote, the majority of them would vote in directions not aligned with the interests of the smart teenagers; therefore the smart teenagers would gain nothing from having voting rights, either way.

Replies from: Kawoomba, Jiro
comment by Kawoomba · 2013-08-02T20:13:51.628Z · LW(p) · GW(p)

To go further in this vein (...)

That's when you meet the venous valve: using the same argument, a smart adult might object to adults having voting rights, no?

Replies from: SaidAchmiz, Document, Jiro
comment by Said Achmiz (SaidAchmiz) · 2013-08-02T20:24:53.726Z · LW(p) · GW(p)

Yes indeed. Of course, note that the argument does not apply to only smart adults having voting rights.

Things also change if we think that "smart adults" are a less monolithic bloc of interests than "smart teenagers", which, it seems to me, is the case.

comment by Document · 2013-08-09T08:17:24.753Z · LW(p) · GW(p)

"Might"?

comment by Jiro · 2013-08-02T21:25:10.162Z · LW(p) · GW(p)

using the same argument, a smart adult might object to adults having voting rights, no?

The argument doesn't just require that someone think they're in a category containing a lot of bad voters. The argument requires that they think they're in a category with voters who are comparatively bad, in contrast to people who are outside the category. A lot of smart adults would say "most voters are stupid". But not very many would say "most voters like me are particularly stupid".

Replies from: DanArmak
comment by DanArmak · 2013-08-02T21:33:24.707Z · LW(p) · GW(p)

A lot of smart adults would say "most voters are stupid". But not very many would say "most voters like me are particularly stupid".

That entirely depends on what the category of "voters like me" is - the category that may lose their votes. Very old people, mentally ill people, low-IQ people, illiterate people, people with drug addictions... Within such a category, an exceptionally (for the category) smart person may well think most other people "like them" are particularly stupid.

comment by Jiro · 2013-08-02T21:07:23.880Z · LW(p) · GW(p)

Well, if only smart teenagers were allowed to vote, they would be able to influence politics the same as any other minority--they can have an influence at the margins proportional to their size. The problem is that there's no good way to say that only smart teenagers can vote just like there's no good way to say that only smart adults can vote.

comment by MugaSofer · 2013-08-04T15:38:42.051Z · LW(p) · GW(p)

And a smart teenager is likely to think 'I am smart enough and competent enough at making decisions to vote. But I know what a lot of other people my age are like, and they're certainly not like that.

You're implying there's supposed to be an age at which this stops being true?

Replies from: Jiro
comment by Jiro · 2013-08-05T04:03:38.303Z · LW(p) · GW(p)

It's not logically consistent to believe that, for all ages X, people of age X are worse voters than average. There must be at least one age where the people of that age are at least as good as average--it's a logical necessity, because of how averages work!

I think you are confusing "most other voters my age are stupid" (which people can and do say at any age) and "most other voters in my group are particularly stupid, compared to the average voter".

Replies from: MugaSofer
comment by MugaSofer · 2013-08-06T13:46:27.268Z · LW(p) · GW(p)

Actually, I was trying to make a joke, on the basis that the quoted section seems to imply the former.

Clearly I failed.

As an aside, have you considered applying that argument to other groups that were once disenfranchised? I'm not going to say it's wrong, but that particular exercise certainly produces a worrying number of parallels (similar to applying speciesist arguments to racist ones, as per the parent article).

Replies from: Jiro
comment by Jiro · 2013-08-06T15:34:00.288Z · LW(p) · GW(p)

have you considered applying that argument to other groups that were once disenfranchised?

The argument is that a smart person in such a group would agree that the rest of them are too stupid to vote. It doesn't apply to other disenfranchised groups unless they actually would believe this too. I doubt that the other groups you are referring to would believe this.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-18T22:43:23.618Z · LW(p) · GW(p)

I was thinking of women. Y'know, back in Ye Olden Days.

In general, I think, there is a tendency not to disenfranchise groups even if they are in some sense "below average", because, y'know, representation be good. Again, imagine the racist pointing out that n**s have, on average, less education than we* do. Or maybe your model of Terrible People is less convincing than mine?

*he's a racist, he aint talking to Them, he's talking to Us White Guys.

comment by Eugine_Nier · 2013-08-04T06:47:51.134Z · LW(p) · GW(p)

Why not give the vote at 30 years of age

This would probably actually not be a bad idea.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-04T15:36:33.175Z · LW(p) · GW(p)

... 40? 60?

I'm guessing you're over 30 years old :P

EDIT: to be clear, I'm aware that those don't necessarily follow, I'm just curious where Eugine draws the line and why.

FURTHER EDIT: If more experience = better, and you want the best possible pool of voters, then a "village elders" model springs to mind ... that's a pretty simplistic model, though.

Replies from: army1987
comment by A1987dM (army1987) · 2013-08-04T16:17:17.405Z · LW(p) · GW(p)

FWIW, I'm under 30 and I still agree with him. (I'm not sure that unilaterally putting my proverbial money where my mouth is and refraining from voting until then, while other people my age still vote, would be a sane idea, though.)

Replies from: MugaSofer
comment by MugaSofer · 2013-08-06T14:24:20.753Z · LW(p) · GW(p)

In the interests of updating my model, did you believe this before reading his argument?

(I'm not sure that unilaterally putting my proverbial money where my mouth is and refraining from voting until then, while other people my age still vote, would be a sane idea, though.)

Nah, that just makes your age group even less sane.

Replies from: army1987
comment by A1987dM (army1987) · 2013-08-06T14:55:31.465Z · LW(p) · GW(p)

his argument?

Which one? I hadn't read this comment until now.

(I've long suspected that, if we have to decide whom to allow to vote based on age alone, 18 is likely to be a lower threshold than optimal, but I have no strong opinion on what exactly the optimal threshold would be. Probably around the age at which the median youngster becomes economically independent from their parents, give or take half a decade.)

Replies from: MugaSofer
comment by MugaSofer · 2013-08-06T15:08:43.982Z · LW(p) · GW(p)

(I've long suspected that, if we have to decide whom to allow to vote based on age alone, 18 is likely to be a lower threshold than optimal, but I have no strong opinion on what exactly the optimal threshold would be. Probably around the age at which the median youngster becomes economically independent from their parents, give or take half a decade.)

OK, that answers my question. Thanks.

comment by MugaSofer · 2013-08-04T16:30:45.589Z · LW(p) · GW(p)

12 year olds are also highly influenced by their parents. It's easy for a parent to threaten a kid to make him vote one way, or bribe him, or just force him to stay in the house on election day if he ever lets his political views slip out. (In theory, a kid could lie in the first two scenarios, since voting is done in secret, but I would bet that a statistically significant portion of kids will be unable to lie well enough to pull it off.)

Also, 12 year olds are less mature than 18 year olds. It may be that the level of immaturity in voters you'll get from adding people ages 12-17 is just too large to be acceptable. (Exercise for the reader: why is 'well, some 18 year olds are immature anyway' not a good response?)

Don't these two arguments cancel each other out? How can you simultaneously be concerned that children will vote immaturely and vote the same way as their parents?

And taking away the vote from demented people and people with low IQ has the problem that the tests may not be perfect. Imagine a test that is slightly biased and unfairly tests black people at 5 points lower IQ. So white people get to vote down to IQ 60 but black people get to vote down to IQ 65. Even though each individual black person of IQ 65 is still pretty stupid, allowing a greater proportion of stupid people from one race than another to vote is bad.

My favourite response to this is to retain the "everyone gets to vote at 18" aspect regardless of child enfranchisement. At least until you have tests people find acceptable or whatever.

Replies from: Jiro
comment by Jiro · 2013-08-05T03:50:02.574Z · LW(p) · GW(p)

How can you simultaneously be concerned that children will vote immaturely and vote the same way as their parents?

I have described two separate failure modes. I see no reason to believe that the two failure modes would cancel each other out.

My favourite response to this is to retain the "everyone gets to vote at 18" aspect regardless of child enfranchisement.

That doesn't work. If everyone above age 18 can vote, black children can vote down to IQ 65, and white children can vote down to IQ 60, the result will still be skewed, although not by as much as if the IQ test was applied to everyone.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-06T14:13:32.432Z · LW(p) · GW(p)

I see no reason to believe that the two failure modes would cancel each other out.

... you don't? Could you explain your reasoning on this?

That doesn't work. If everyone above age 18 can vote, black children can vote down to IQ 65, and white children can vote down to IQ 60, the result will still be skewed, although not by as much as if the IQ test was applied to everyone.

It doesn't work perfectly. That's far from the same thing as not working at all.

Replies from: Jiro
comment by Jiro · 2013-08-06T15:42:33.764Z · LW(p) · GW(p)

... you don't? Could you explain your reasoning on this?

Yes. First of all, having two independent failure modes cancel each other out would be an astonishing coincidence. If you think that an astonishing coincidence has happened, you had better show some reason to believe it other than just saying "perhaps there will be an astonishing coincidence". Second, it doesn't follow that the two failure modes will always produce opposite results anyway. For instance, suppose that immature parents are more likely to pressure their kids into voting with the parents than mature parents are; then both failure modes increase the amount of immaturity-based votes.

It doesn't work perfectly. That's far from the same thing as not working at all.

It works worse, as far as racial bias goes, than having the 18 year old age limit and nothing else.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-18T22:36:49.399Z · LW(p) · GW(p)

If you think that an astonishing coincidence has happened, you had better show some reason to believe it other than just saying "perhaps there will be an astonishing coincidence".

I didn't mean it as a coincidence. I meant that if you're OK with adult voters, then you should be OK with kids parroting adult voters.

However, you have a good point about the possibility that poor voters might affect their children disproportionately. I can only respond that the same might be true of adult voters, but ... yeah, there is definitely something to think about there.

It doesn't work perfectly. That's far from the same thing as not working at all.

It works worse, as far as racial bias goes, than having the 18 year old age limit and nothing else.

As I believe I pointed out elsewhere, there is more to life than racism. We are, in reality, talking about a tiny bias here. What kind of distortions are ageist biases producing?

Not to mention, in a racist world, oppressed minorities have lower life expectancy.

(Also, well ... I feel uncomfortable just typing this, but the thought occurs that if the best test you can produce is racist, then maybe you should be updating toward the possibility that racists were onto something.)

Replies from: Jiro
comment by Jiro · 2013-08-27T14:41:35.182Z · LW(p) · GW(p)

I am okay with adult voters to the extent that any cure for poor voting by adults is going to be worse than the disease. Voting tests create incentives for corruption and mismanagement, and historically they have been associated with both pretty much whenever they have been used.

comment by Eugine_Nier · 2013-08-04T06:46:01.171Z · LW(p) · GW(p)

The problem with letting 12 year olds vote is not that they'd be overly influenced by their parents; it's that they're worse at seeing through the various dark arts techniques people routinely employ, and this would have the result of making politics even more of a dark arts contest than it already is.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-04T14:43:17.356Z · LW(p) · GW(p)

So we should test for resistance to Dark Arts Techniques, rather than base it on age? Excellent idea!

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-06T01:25:36.687Z · LW(p) · GW(p)

And how exactly do you propose doing such testing in a way that doesn't run into the problems with Goodhart's law I mentioned here?

Replies from: MugaSofer
comment by MugaSofer · 2013-08-06T13:39:26.571Z · LW(p) · GW(p)

The same way the driver's-ed test or the citizenship test given to immigrants manages it? Or perhaps you think they don't ... I find it unlikely that this design problem should simply be dismissed as unsolvable, but it certainly needs to be borne in mind ... point, I guess.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-07T00:19:40.468Z · LW(p) · GW(p)

The driver's-ed test and, to a certain extent, the citizenship test have different incentives than a voting test. In particular, with a voting test the incentive is to turn it into a test of whether the person agrees with the test writers' political beliefs.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-18T20:13:32.054Z · LW(p) · GW(p)

I have to admit, I'm just assuming you would arrange better incentives for the designers. Say, have independent reviews and connect them to salary, or only recruit those with a strong desire for neutrality (and give them access to domain experts). Then again, I have no idea if the incentives actually align for the creators of other tests ... everyone is crazy and the world is mad, etc, etc.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-20T01:18:43.606Z · LW(p) · GW(p)

I have to admit, I'm just assuming you would arrange better incentives for the designers.

You seem to be massively underestimating how hard this is. You can't simply wave this problem away by invoking words like "independent", "neutrality", and "domain expert" as if they're some kind of magic spell.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-24T12:49:31.795Z · LW(p) · GW(p)

... I wasn't. I was sketching out, off the top of my head, the basic precautions I would take when attempting something like this. You seem to be estimating the difficulty - the impossibility - on the basis of a model where you take no precautions whatsoever.

comment by Eugine_Nier · 2013-08-07T00:44:22.331Z · LW(p) · GW(p)

Even though each individual black person of IQ 65 is still pretty stupid, allowing a greater proportion of stupid people from one race than another to vote is bad.

This is not a priori obvious. In any case, why are imperfections in the test that happen to be correlated with race worse than imperfections correlated with occupation, social class, or any other trait that could act as a proxy for political beliefs?

comment by Eugine_Nier · 2013-08-04T06:33:03.192Z · LW(p) · GW(p)

Not to mention the temptation to sneak political biases into the competency tests.

comment by A1987dM (army1987) · 2013-07-31T10:14:34.754Z · LW(p) · GW(p)

Your memories of being twelve must be very different from mine.

Replies from: MugaSofer
comment by MugaSofer · 2013-07-31T13:43:26.629Z · LW(p) · GW(p)

Quite possibly. But then, that's rather the point, isn't it?

comment by itaibn0 · 2013-07-31T19:30:32.714Z · LW(p) · GW(p)

This is relevant

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-04T06:31:33.663Z · LW(p) · GW(p)

Not one of Scott's better ideas.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-04T15:55:12.569Z · LW(p) · GW(p)

You mean his other ideas are even better!? My God... (But seriously, folks ... what exactly are your counterarguments?)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-06T01:28:06.703Z · LW(p) · GW(p)

I brought them up elsewhere in the thread.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-06T13:37:23.339Z · LW(p) · GW(p)

I'll address my replies there then. Still, it's slightly ... off to just post "not a good idea" on it's own?

comment by Said Achmiz (SaidAchmiz) · 2013-07-29T13:17:26.766Z · LW(p) · GW(p)

This seems like a reasonable thing to prove!

comment by Remi Temmos (remi-temmos) · 2019-12-03T06:17:52.972Z · LW(p) · GW(p)

I think at the core of the debate is a misunderstanding of what life is; the example you use of the broken chain of ancestors is a good illustration of this. Life is a struggle for the perpetuation of self; the whole point of evolution, and the reason there is a chain and not only one species, is that we each fight as groups to survive amongst or against each other. This philosophy of life in which violence and struggle could disappear is utter nonsense to me. Life is death: without it, without constant struggle at all levels, there is no evolution and no life.

So yes, it's a blurry boundary that each group has to define; it doesn't have to be absolute, and it shifts over time. We can take care of that mutated pig and still eat all the others - so what?

And yes, we can refuse to be racist but still find reasons to fight and kill other humans, or whatever else stands in the way of the group's survival...

To me, this naive view of what life is or should be is the most ridiculous part.

Now, if you really want/need a selection criterion - leaving aside babies, as it's a fallacy to split people's lives into chunks of arbitrary time (by that token you could kill any life-form while it is sleeping!) - I'll give you one.

You have the right to be recognized or considered as part of humanity if you fight for it. There you have it.

Women and people from all "races" fought for their rights; when pigs and cattle stand up and let us know they've had enough, it'll be time to consider the question.

All that being said, it really has nothing to do with the moral or utilitarian argument to stop factory farming; if we can do it in ways that don't shock ourselves and that will make us feel better, we should - but certainly not on the basis of some flawed utilitarian argument.


my 2 cents.

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2019-12-03T04:50:49.385Z · LW(p) · GW(p)

Women and people from all "races" fought for their rights; when pigs and cattle stand up and let us know they've had enough, it'll be time to consider the question.

To generalize the principle you have described, should we never give a group moral consideration unless they can advocate for themselves? If we adopted this principle, then young children would immediately lose moral consideration, as would profoundly mentally disabled people.

comment by Lumifer · 2013-07-29T20:15:49.341Z · LW(p) · GW(p)

Some readers may still feel that there is something special about being a member of the human species.

LOL. I certainly do.