The Ethical Status of Non-human Animals
post by syllogism · 2012-01-09T12:07:15.339Z · LW · GW · Legacy · 88 comments
There's been some discussion on this site about vegetarianism previously, although less than I expected. It's a complicated topic, so I want to focus on a critical sub-issue: within a consequentialist/utilitarian framework, what should be the status of non-human animals? Do only humans matter? If non-human animals matter only a little, just how much do they matter?
I argue that species-specific weighting factors have no place in our moral calculus. If two minds experience the same sort of stimulus, the species of those minds shouldn't affect how good or bad we believe that to be. I owe the line of argument I'll be sketching to Peter Singer's work. His book Practical Ethics is the best statement of the case that I'm aware of.
Front-loaded definitions and summary:
- Self-aware: A self-aware mind is one that understands that it exists and that it persists through time.
- Sentience: A sentient mind is one that has subjective experiences, such as pleasure and pain. I assume that self-awareness subsumes sentience (i.e. all self-aware minds are also sentient, but not vice versa).
- Person: A self-aware mind.
- A human may be alive but non-sentient, due to injury or birth defects.
- Humans may be sentient but not self-aware, due to injury, birth defect or infancy.
- Non-human persons are possible: hypothetically, aliens and AIs; controversially, non-human great apes.
- Many non-human animals are sentient, many are not.
- Utilitarian ethics involve moral calculus: summing the impacts of an action (or some proxy for them, such as preferences) on all minds.
- When performing this calculus, do sentient (but non-self aware) minds count at all? If so, do they count as much as persons?
- If they count for zero, there's no ethical problem with secretly torturing puppies, just for fun.
- We're tempted to believe that sentient minds count for something, but less than persons.
- I think this is just a cover for what we're really tempted to believe: humans count for more than non-humans, not because of the character of our minds, but simply because of the species we belong to.
- Historically, allowing your ethical system to arbitrarily promote the interests of those similar to you has led to very bad results.
Personhood and Sentience
Cognitively healthy mature humans have minds that differ in many ways from the other species on Earth. The most striking is probably the level of abstraction we are able to think at. A related ability is that we are able to form detailed plans far into the future. We also have a sense of self that persists through time.
Let's call a mind that is fully self-aware a person. Now, whether or not there are any non-human persons on Earth today, non-human persons are certainly possible. They might include aliens, artificial intelligences, or extinct ancestral species. There are also humans that are not persons, due to brain damage, birth defects, or perhaps simply infancy[1]. Minds that are not self-aware in this way, but are able to have subjective experiences, let's call sentient.
Consequentialism/Utilitarianism
This is an abridged summary of consequentialism/utilitarianism, included for completeness. It's designed to tell you what I'm on about if you've never heard of this before. For a full argument in support of this framework, see elsewhere.
A consequentialist ethical framework is one in which the ethical status of an action is judged by the "goodness" of the possible worlds it creates, weighted by the probability of those outcomes[2]. Nailing down a "goodness function" (usually called a utility function) that returns a value in [0,1] for the desirability of a possible world is understandably difficult. But the parts that are most difficult also seldom matter. The basics are easy to agree upon. Many of our subjective experiences are either sharply good or sharply bad. Roughly, a world in which minds experience lots of good things and few bad things should be preferable to a world in which minds have lots of negative experiences and few positive experiences.
In particular, it's obvious that pain is bad, all else being equal. A little pain can be a worthwhile price for good experiences later, but it's considered a price precisely because we'd prefer not to pay it. It's a negative on the ledger. So, an action which reduces the amount of pain in the world, without doing sufficient other harms to balance it out, would be judged "ethical".
The question is: should we only consider the minds of persons -- self-conscious minds that understand they are a mind with a past, present, and future? Or should we also consider merely sentient minds? And if we do consider sentient minds, should we down-weight them in our utility calculation?
Do the experiences of merely sentient minds receive a weight of 0, 1, or somewhere in between?
How much do sentient non-persons count?
Be careful before answering "0". This implies that a person can never treat a merely sentient mind unethically, except in violation of the preferences of other persons. Torturing puppies for passing amusement would be ethically A-OK, so long as you keep it quiet in front of other persons who might mind. I'm not a moral realist -- I don't believe that when I say "X is unethical", I'm describing a property of objective reality. I think it's more like deduction given axioms. So if your utility function really is such that you ascribe 0 weight to the suffering of merely sentient minds, I can't say you're objectively correct or incorrect. I doubt many people can honestly claim this, though.
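To make the calculus concrete, here is a minimal sketch of the weighting question in code. The function, the utility numbers, and the toy "secret puppy torture" action are all invented for illustration; nothing in the post specifies them.

```python
# A toy version of the moral calculus discussed above. All numbers are
# made up for illustration; the only point is the sentience_weight knob.

def expected_utility(action, sentience_weight):
    """Sum each outcome's probability-weighted impact on every affected mind.

    Persons always count with weight 1.0; merely sentient minds count with
    `sentience_weight`, the contested value of 0, 1, or something in between.
    """
    total = 0.0
    for probability, impacts in action["outcomes"]:
        for kind_of_mind, experience_value in impacts:
            weight = 1.0 if kind_of_mind == "person" else sentience_weight
            total += probability * weight * experience_value
    return total

# Secretly torturing a puppy for fun: one sentient mind suffers badly,
# one person gets a little passing amusement, no other person finds out.
secret_puppy_torture = {"outcomes": [(1.0, [("sentient", -10.0), ("person", +0.5)])]}

print(expected_utility(secret_puppy_torture, sentience_weight=0.0))  # +0.5: "ethically A-OK"
print(expected_utility(secret_puppy_torture, sentience_weight=1.0))  # -9.5: clearly bad
```

Any intermediate weight just interpolates between those two verdicts, which is exactly the choice the rest of this section argues about.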
Is a 1.0 weight not equally ridiculous, though? Let's take a simple negative stimulus, pain. Imagine you had to choose between possible worlds in which either a cognitively normal adult human or a cognitively normal pig received a small shallow cut that crossed a section of skin connected to approximately the same number of nerves. The wound will be delivered with a sterile instrument and promptly cleaned and covered, so the only relevant thing here is the pain. The pig will also feel some fear, but let's ignore that.
You might claim that a utility function that didn't prefer that the pig feel the pain was hopelessly broken. But remember that the weight we're talking about applies to kinds of minds, not members of species. If you had to decide between a cognitively normal adult human, and a human that had experienced some brain damage such that they were merely sentient, would the decision be so easy? How about if you had to decide between a cognitively normal adult human, and a human infant?
The problem with speciesism
If you want to claim that causing the pig pain is preferable to causing a sentient but not self-aware human pain, you're going to have to make your utility function species-sensitive. You're going to have to claim that humans deserve special moral consideration, and not because of any characteristics of their minds. Simply because they're human.
It's easy to go wild with hypotheticals here. What about an alien race that was (for some unimaginable reason) just like us? What about humanoid robots with minds indistinguishable from ours?
To me it's quite obvious that species-membership, by itself, shouldn't be morally relevant. But it's plain that this idea is unintuitive, and I don't think it's a huge mystery why.
We have an emotional knee-jerk reaction to consider harm done to beings similar to ourselves as much worse than harm done to beings different from us. That's why the idea that a pig's pain might matter just as much as a human's makes you twitch. But you mustn't let that twitch be the deciding factor.
Well, that's not precisely correct: again, there's no ethical realism. There's nothing in observable reality that says that one utility function is better than another. So you could just throw in a weighting for non-human animals, satisfy your emotional knee-jerk reaction, and be done with it. However, that similarity metric once made people twitch at the idea that the pain of a person with a different skin pigmentation mattered as much as theirs.
If you listen to that twitch, that instinct that those similar to you matter more, you're following an ethical algorithm that would have led you to the wrong answer on most of the major ethical questions through history. Or at least, the ones we've since changed our minds about.
If I'm happy to arbitrarily weight non-human animals lower, just because I don't like the implications of considering their interests equal, I would have been free to do the same when considering how much the experiences of out-group persons should matter. When deciding my values, I want to be using an algorithm that would've gotten the right answer on slavery, even given 19th century inputs.
Now, having said that the experiences of merely sentient minds matter, I should reiterate that there are lots of kinds of joys and sufferings not relevant to them. Because a rabbit doesn't understand its continued existence, it's not wrong to kill it suddenly and painlessly, out of sight/smell/earshot of other rabbits. There are no circumstances in which killing a person doesn't involve serious negative utility. Persons have plans and aspirations. When I consider what would be bad about being murdered, the momentary fear and pain barely rank. Similarly, I think it's possible to please a person more deeply than a merely sentient mind. But when it comes to a simple stimulus like pain, which both minds feel similarly, it's just as bad for both of them.
When I changed my mind about this, I hadn't yet decided to particularly care about how ethical I was. This kept me from having to say "well, I'm not allowed to believe this, because then I'd have to be vegetarian, and hell no!". I later did decide to be more ethical, but doing it in two stages like that seemed to make changing my mind less traumatic.
[1] I haven't really studied the evidence about infant cognition. It's possible infants are fully self-conscious (as in, have an understanding that they are a mind plus a body that persists through time), but it seems unlikely to me.
[2] Actually I seldom see it stated probabilistically like this. I think this is surely just an oversight? If you have to choose between pushing a button that will save a life with probability 0.99, and cost a life with probability 0.01, surely it's not unethical after the fact if you got unlucky.
88 comments
comment by [deleted] · 2012-01-09T14:33:17.401Z · LW(p) · GW(p)
Historically, allowing your ethical system to arbitrarily promote the interests of those similar to you has led to very bad results.
No, historically speaking it is how the human species survived and is still a core principle around which humans organize.
Arbitrarily promoting the interests of those more similar to yourself is the basis of families, nations, religions and even ideologies. While all of these go wrong occasionally (and religion and ideology especially are unlikely to have many friends here), I think that overall they are a net gain. People help those more similar to themselves practically all the time; it is just that when this leads to hurting others it is far more attention-grabbing than, say, the feeling of solidarity that lets the Swedes run their cosy welfare state or the solidarity of a tribe somewhere in New Guinea sharing their food so other tribe members don't starve.
I am pretty sure that if you removed the urge to help those more similar to oneself from humans right now by pressing a reality modification button(TM), the actual number of people helping each other and being helped would be reduced drastically.
Replies from: bogus, syllogism↑ comment by bogus · 2012-01-09T23:52:56.418Z · LW(p) · GW(p)
Arbitrarily promoting the interests of those more similar to yourself is the basis of families, nations, religions and even ideologies. While all of these go wrong occasionally (and religion and ideology especially are unlikely to have many friends here), I think that overall they are a net gain. People help those more similar to themselves practically all the time
Yes, this makes a lot of sense due to coordination costs; we are less informed about folks dissimilar to ourselves, so helping them effectively is harder. Reflexive consistency may also play a role: folks similar to myself are probably facing similar strategic situations and expressing similar algorithms, so a TDT decision rule will lead to me promoting their interests more. It is easier to avoid hurting others when such situations arise than to try and help everyone equally.
↑ comment by syllogism · 2012-01-09T15:32:42.470Z · LW(p) · GW(p)
I think you need to be careful about describing human behaviour, and defining what you mean by "ethical".
Obviously the most self-similar person to help is actually yourself (100% gene match), then your identical twin, parent/children/sibling/etc. It's no surprise that this is the hierarchy of our preferences. The evolutionary reason for that is plain.
But evolution doesn't make it "right". For whatever reason, we also have this sense of generalised morality. Undoubtedly, the evolutionary history of this sense is closely linked to those urges to protect our kin. And equally, when we talk about what's "moral", in this general sense, we're not actually describing anything in objective reality.
If you're comfortable ignoring this idea of morality as evolutionary baggage that's been hijacked by our society's non-resemblance to our ancestral environment, then okay. If you can make that peace with yourself stick, then go ahead. Personally though, I find the concept much harder to get rid of. In other words, I need to believe I'm making the world a better place -- using a standard of "better" that doesn't depend on my own individual perspective.
Replies from: None↑ comment by [deleted] · 2012-01-09T15:42:48.262Z · LW(p) · GW(p)
Obviously the most self-similar person to help is actually yourself (100% gene match), then your identical twin, parent/children/sibling/etc. It's no surprise that this is the hierarchy of our preferences. The evolutionary reason for that is plain.
I'm (mostly) not my genes. Remember I don't see a big difference between flesh-me and brain-emulation-me. These two entities share no DNA molecules. But sure I probably do somewhat imperfectly, by proxy, weight genes in themselves, kin selection and all that has probably made sure of it.
But evolution doesn't make it "right".
No, it being in my brain makes it right. Whether it got there by the process of evolution or from the semen of an angry philandering storm god doesn't really make a difference to me.
If you're comfortable ignoring this idea of morality as evolutionary baggage that's been hijacked by our society's non-resemblance to our ancestral environment, then okay. If you can make that peace with yourself stick, then go ahead. Personally though, I find the concept much harder to get rid of. In other words, I need to believe I'm making the world a better place -- using a standard of "better" that doesn't depend on my own individual perspective
I don't think you are getting that a universe that doesn't contain me or my nephews or my brain ems might still be sufficiently better according to my preferences that I'd pick it over us. There is nothing beyond your own preferences; "ethics" of any kind are preferences too.
I think you yearn for absolute morality. That's ok, we all do to varying extents. Like I said somewhere else on LessWrong, my current preferences are set up in such a way that if I received mathematical proof that the universe actually does have an "objectively right" ethical system that is centred on making giant cheesecakes, I think I'd probably dedicate a day or so during the weekends for that purpose. Ditto for paper-clips.
Maybe I'd even organize with like-minded people for a two hour communal baking or materials gathering event. But I'd probably spend the rest of the week much like I do now.
Replies from: syllogism, Ghatanathoah, duckduckMOO↑ comment by syllogism · 2012-01-09T17:14:00.608Z · LW(p) · GW(p)
I'm (mostly) not my genes. Remember I don't see a big difference between flesh-me and brain-emulation-me. These two entities share no DNA molecules. But sure I probably do somewhat imperfectly, by proxy, weight genes in themselves, kin selection and all that has probably made sure of it.
Right, sorry. The genes tack was in error, I should've read more closely.
I think I've understood the problem a bit better, and I'm trying to explain where I think we differ in reply to the "taboo" comment.
↑ comment by Ghatanathoah · 2012-10-12T08:43:07.459Z · LW(p) · GW(p)
There is nothing beyond your own preferences, "ethics" of any kind are preferences too.
I don't think I believe this, although I suspect the source of our disagreement is over terminology rather than facts.
I tend to think of ethics as a complex set of facts about the well-being of yourself and others. So something is ethical if it makes people happy, helps them achieve their aspirations, treats them fairly, etc. So if, when ranking your preferences, you find that the universe you have the greatest preference for isn't one in which people's well-being, along certain measures, is as high as possible, that doesn't mean that improving people's well-being along these various measures isn't ethical. It just means that you don't prefer to act 100% ethically.
To make an analogy, the fact that an object is the color green is a fact about what wavelengths of light it reflects and absorbs. You may prefer that certain green-colored objects not be colored green. Your preference does not change the objective and absolute fact that these objects are green. It just means you don't prefer them to be green.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-10-12T16:17:50.844Z · LW(p) · GW(p)
To make an analogy, the fact that an object is the color green is a fact about what wavelengths of light it reflects and absorbs.
...and also about how certain cells in your eye function. Which doesn't change your analogy at all, but it's sometimes a useful thing to remember.
↑ comment by duckduckMOO · 2012-01-09T17:38:53.629Z · LW(p) · GW(p)
"I think you yearn for absolute morality. That's ok, we all do to varying extents"
I think syllogism's preference is for unbiased morality.
Yearns is in quotes because he decided on his ethics before deciding he cared. His reasoning probably has nothing to do with yearning or similar, as you seem to be implying.
also "That's ok, we all do to varying extents" I don't think it is. i think it's silly, and there are almost certainly people who don't (and they count). "absolute morality" in the sense "objectively (universally) right" shouldn't even parse
Replies from: None
comment by MinibearRex · 2012-01-10T04:37:28.668Z · LW(p) · GW(p)
I'm still considering the main point of your article, but one paragraph got me thinking about something.
If I'm happy to arbitrarily weight non-human animals lower, just because I don't like the implications of considering their interests equal, I would have been free to do the same when considering how much the experiences of out-group persons should matter. When deciding my values, I want to be using an algorithm that would've gotten the right answer on slavery, even given 19th century inputs.
Could it be that slavery was wrong, not because the ethical intuition "it is ok to force creatures less intelligent than you to serve you" is incorrect, but because we were putting in the wrong input? Your paragraph made me think of this quote by C.S. Lewis:
But surely the reason we do not execute witches is that we do not believe there are such things. If we did-if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbours or drive them mad or bring bad weather, surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did. There is no difference of moral principle here: the difference is simply about matter of fact. It may be a great advance in knowledge not to believe in witches: there is no moral advance in not executing them when you do not think they are there. You would not call a man humane for ceasing to set mousetraps if he did so because he believed there were no mice in the house.
In the 19th century, people believed that African humans were less intelligent than white men, and had the intelligence of mere animals. This factual statement was incorrect. Is that, rather than the then-accepted belief that coercion was acceptable, the root of the evil of slavery?
Although it also has occurred to me that this reason why slavery was acceptable is quite likely to be rationalization. Which makes me suspicious of my own arguments.
Replies from: syllogism, MugaSofer, DavidAgain↑ comment by syllogism · 2012-01-10T05:42:13.711Z · LW(p) · GW(p)
Even if there were a race that was actually inferior, would it be okay to enslave them, or otherwise mistreat them? Or let's say instead that we had excellent genetic tests that predicted someone's intelligence. How should the identifiably stupid people be treated?
I think their preferences should still receive equal weighting. Peter Singer's fond of this quote by Sojourner Truth in relation to this point:
Replies from: DavidAgain"They talk about this thing in the head; what do they call it?" ["Intellect," whispered some one loudly] "That's it, honey. What's that got to do with women's rights or Negroes' rights? If my cup won't hold but a pint, and yours holds a quart, wouldn't you be mean not to let me have my little half-measure full?"
↑ comment by DavidAgain · 2012-01-10T18:50:48.162Z · LW(p) · GW(p)
This rather depends on whether intelligence relates to the sorts of feelings involved. I'm not sure we can have an absolute divide between 'sentient' and 'self-aware' by your measurement, and I think there might be some other meaningful question about whether more intelligent species have a wider/deeper range of preferences. So neither I nor a sheep likes having a leg crushed, but the pain I feel is combined with a sense of regret about what I'll miss out on in life, shock to my sense of identity, whatever.
I'm not sure if this is just a way to try to justify my intuitive presumption in favour of humans and then more intelligent animals, though.
↑ comment by MugaSofer · 2012-10-26T09:50:46.473Z · LW(p) · GW(p)
There was more to slavery than estimations of intelligence - the justifications varied wildly, were usually absurdly simple to disprove, and often contradicted each other ("they were designed by God to be enslaved by superior races" vs "they have weaker self-control and would kill/rape us if left unchecked", for example.)
However, the point that it was a failure of rationality, not ethics, is still valid. Unfortunately that was the OP's point as well.
↑ comment by DavidAgain · 2012-01-10T18:47:26.070Z · LW(p) · GW(p)
Good argument: and I've always liked that Lewis quote. It's frustrating when people use moral criticism on people who are actually simply working off different factual beliefs.
In terms of rationalisation, I would expect there to be an element of both. It's not very surprising that 19th century Westerners would honestly believe that the Africans they came across were less intelligent: people often mistake people from another culture as lacking intelligence, and that culture being less scientifically and socially (if that can be meaningfully measured) developed wouldn't help. On the other hand, you have to look down on people when you mistreat them, or else hate them. In fact, I think I remember reading about a psychological experiment where people disliked people simply because they'd victimised them in some constructed game. I'll try to root it out...
comment by [deleted] · 2012-01-09T14:27:00.068Z · LW(p) · GW(p)
I think this is just a cover for what we're really tempted to believe: humans count for more than non-humans, not because of the character of our minds, but simply because of the species we belong to.
I'm pretty much fine with a measure of speciesism. I don't at all mind explicitly valuing human minds over non-human minds just based on them being human (though I don't think I care that much about substrate, so ems are human to me).
I don't think I'm alone.
Replies from: syllogism, peter_hurford↑ comment by syllogism · 2012-01-09T15:05:41.337Z · LW(p) · GW(p)
Well, take the alien hypothetical. We make contact with this alien race, and somehow they have almost the same values as us. They too have a sense of fun, and aesthetics, and they care about the interests of others. Are their interests still worth less than a human's interests? And do we have any right to object if they feel that our interests are worth less than their own?
I can't take seriously an ethical system that says, "Humans are more morally considerable, simply because I am human". I need my ethical system to be blind to the facts of who I am. I could never expect a non-human to agree that humans were ethically special, and that failure to convince them becomes a failure to convince myself.
I feel like there's a more fundamental objection here that I'm missing.
Replies from: None↑ comment by [deleted] · 2012-01-09T15:11:31.356Z · LW(p) · GW(p)
I need my ethical system to be blind to the facts of who I am.
Well sure, knock yourself out. I don't feel that need though.
Edit: Wow this is awkward, I've even got an old draft titled "Rawls and probability" where I explain why I think he's wrong. I should really get back to working on that, even if I just publish an obsolete version.
Are their interests still worth less than a human's interests?
Personally to me? Yes.
But if you make them close enough, some alien's interest might be worth more than some human's interest. If you make them similar enough, the difference in my actions practically disappears. You can also obviously make an alien species that I would prefer to humans, perhaps even all humans. This doesn't change the fact that they take a small hit for not being me and extended me.
I'm somewhat selfish. But I'm not just somewhat selfish for me, I'm somewhat selfish for my mother, my sister, my girlfriend, my cousin, my best friend, etc.
I could never expect a non-human to agree that humans were ethically special, and that failure to convince them becomes a failure to convince myself.
You could never expect a non-you to agree that you are ethically special. Does that failure to convince them become a failure to convince yourself?
Replies from: TheOtherDave, syllogism, peter_hurford↑ comment by TheOtherDave · 2012-01-09T17:24:02.704Z · LW(p) · GW(p)
You can also obviously make a alien species that I would prefer to humans, perhaps even all humans.
Huh. I can make a non-you that you would prefer to you?
That is not in fact obvious.
Can you say more about what properties that non-you would have?
Replies from: None↑ comment by [deleted] · 2012-01-09T17:49:11.930Z · LW(p) · GW(p)
Sure you can make a non-me that I prefer to me. I'm somewhat selfish, but I think I put weight on future universe states in themselves.
I can make a non-you that you would prefer to you?
Sure, I may try to stop you from making one though. Depends on the non-me I guess.
Replies from: TheOtherDave, Emile

Konkvistador wakes up in a dimly lit, perfectly square white room, sitting on a chair, staring at Omega.
Omega: "Either you or your daughter can die. Here I have this neat existence machine when I turn it on in a few minutes, I can set the lever to you or Anna, the one that's selected is reimplemented in atoms back on Earth, the other isn't and his pattern is deleted. "
Me: "I can't let Anna die. Take me!"
Omega: "Ok, but before we do that. You can keep on existing or I make a daughter exist in your place."
Me: "What Anna?
Omega: "No no a new daughter."
Me: "Eh no thanks."
Omega: "Are you sure?"
Me: "Pretty much."
Omega: "Ok what if I gave you all the memories of several years spent with her?"
Me: "That would make her the same to me as Anna."
Omega: "It would. Indeed I may have done that already. Anna may or may not exist currently. So about the memories of an extra daughter ..."
Me: "No thanks."
Omega: "Ok ok, would you like your memories of Anna taken away too?"
Me: "No."
Omega: "Anna is totally made up, I swear!"
Me: "It seemed probable, the answer is no regardless. And yes to saving Anna's life."
Konkvistador dies and somewhere on Earth in a warm bed Anna awakes and takes her first breath, except no one knows it is her first time.
↑ comment by TheOtherDave · 2012-01-09T19:02:59.473Z · LW(p) · GW(p)
Just to make sure I understood: you can value the existence of a nonexistent person of whom you have memories that you know are delusional more than you value your own continued existence, as long as those memories contain certain properties. Yes?
So, same question: can you say more about what those properties are? (I gather from your example that being your daughter can be one of them, for example.)
Also... is it important that they be memories? That is, if instead of delusional memories of your time with Anna, you had been given daydreams about Anna's imagined future life, or been given a book about a fictional daughter named Anna, might you make the same choice?
Replies from: None↑ comment by [deleted] · 2012-01-09T19:11:32.989Z · LW(p) · GW(p)
Just to make sure I understood: you can value the existence of a nonexistent person of whom you have memories that you know are delusional more than you value your own continued existence, as long as those memories contain certain properties. Yes?
Yes.
So, same question: can you say more about what those properties are? (I gather from your example that being your daughter can be one of them, for example.)
I discovered the daughter example purely empirically when doing thought experiments. It seems plausible there are other examples.
Also... is it important that they be memories? That is, if instead of delusional memories of your time with Anna, you had been given daydreams about Anna's imagined future life, or been given a book about a fictional daughter named Anna, might you make the same choice?
Both of these would have significantly increased the probability that I would choose Anna over myself, but I think the more likely course of action is that I would choose myself.
If I have memories of Anna and my life with her, I basically find myself in the "wrong universe", so to speak: the universe where Anna and my life with her for the past few years didn't happen. I have the possibility to save either Anna or myself by putting one of us back in the right universe (turning this one into the one in my memory).
In any case I'm pretty sure that Omega can write sufficiently good books to make you want to value Anna or Alice or Bob above your own life. He could even probably make a good enough picture of an "Anna" or "Alice" or "Bob" for you to want her/him to live even at the expense of your own life.
Suppose one day you are playing around with some math and you discover a description of ... I hope you can see where I'm going with this. Not knowing the relevant data set about the theoretical object, Anna, Bob or Cthulhu, you may not want to learn of them if you think it will make you want to prefer their existence to your own. But once you know them by definition you value their existence above your own.
This brings up some interesting associations not just with Basilisks but also with CEV in my mind.
↑ comment by syllogism · 2012-01-09T15:18:25.528Z · LW(p) · GW(p)
You could never expect a non-you to agree that you are ethically special. Does that failure to convince them become a failure to convince yourself?
Yes, definitely. I don't think I'm ethically special, and I don't expect anyone else to believe that either. If you're wondering why I still act in self-interest, see reply on other thread.
Replies from: CaveJohnson↑ comment by CaveJohnson · 2012-01-09T16:57:36.329Z · LW(p) · GW(p)
Reading this, I wonder why you think of pursuing systematized enlightened self-interest (or whatever Konkvistador would call the philosophy) as not ethics, rather than as different ethics?
↑ comment by Peter Wildeford (peter_hurford) · 2012-01-09T15:14:44.177Z · LW(p) · GW(p)
What is your opinion on things like racism and sexism?
Replies from: None↑ comment by [deleted] · 2012-01-09T15:56:45.203Z · LW(p) · GW(p)
They are bad when they on net hurt people? What kind of an answer were you expecting?
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2012-01-15T07:06:00.915Z · LW(p) · GW(p)
Both racism and sexism are an unfair and unequal preference for a specific group, typically the one the racist or sexist is a member of. If you're fine with preferring your personal species above and beyond what an equal consideration of interests would call for, I was interested in whether you were also fine with others making other calls about what interests they would give unequal weight to.
It seems inconsistent to be a speciesist, but not permit others to be racist or sexist.
Replies from: None, None↑ comment by [deleted] · 2012-01-15T08:01:34.778Z · LW(p) · GW(p)
Both racism and sexism are an unfair and unequal preference for a specific group, typically the one the racist or sexist is a member of.
A racist or sexist may or may not dispute the unequal bit; I'm pretty sure they would dispute the unfair bit. Because duh, ze doesn't consider it unfair. I'm not too sure what you mean by fair, can we taboo it or at least define it?
Also, would you say there exist fair and equal preferences for a specific group that one belongs to? Note, I'm not asking you about racism or sexism specifically, but about any group (which I assume are all based on a characteristic the people in it share, be it a mole under their right eye, a love for a particular kind of music or a piece of paper saying "citizen").
I was interested in whether you were also fine with others making other calls about what interests they would give unequal weight to.
Sure, why not; they should think about this stuff and come up with their own answers. It's not like there is an "objectively right morality" function floating around, and I do think human values differ. I couldn't honestly say I wouldn't be too biased when trying to make a "fixed" version of someone's morality; I think I would probably just end up with a custom-tailored batch of rationalizations that would increment their morality towards my own, no matter what my intentions.
Though obviously if they come up with a different value system than my own, our goals may no longer be complementary and we may indeed become enemies. But we may be enemies even if we have identical values; for example, valuing survival in itself can easily pit you against other entities valuing the same, and the same goes for pure selfishness. Indeed sometimes fruitful cooperation is possible precisely because we have different values.
It isn't like Omega ever told us that all humanity really does have coherent complementary goals or that we are supposed to. Even if that's what we are "supposed" to do, why bother?
↑ comment by [deleted] · 2012-01-15T07:37:00.918Z · LW(p) · GW(p)
You forgot classist and ableist. In any case it seems equally inconsistent to be opposed to speciesism on those grounds and, say, not be opposed to being selfist, familyist and friendist. Or are you?
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2012-02-19T03:40:48.769Z · LW(p) · GW(p)
I don't think so. Just because we're more willing to help out our friends than random strangers doesn't imply we should be fine with people going around shooting random strangers in their legs. Likewise, we could favor our species compared to nonhuman animals and still not be fine with some of their harsh farming conditions.
How much value do you place on nonhuman animal welfare?
Replies from: None↑ comment by [deleted] · 2012-02-19T08:16:47.671Z · LW(p) · GW(p)
I don't think so.
I do think so. The last few exchanges we had were about "I was interested in whether you were also fine with others making other calls about what interests they would give unequal weight to."
I demonstrated that I'm fine with "unequal" and supposedly "unfair" (can we define that word?) preferences. While it may seem simple to separate the two, unwillingness to help out and harming people are in many circumstances (due to opportunity costs, for starters) the same thing.
Just because we're more willing to help out our friends than random strangers doesn't imply we should be fine with people going around shooting random strangers in their legs.
What if I shoot a stranger who is attacking my friends, my family or myself in the legs? Or choose to run over strangers rather than my daughter in a trolley problem?
I'm more fine with the suffering of random strangers than I am with the suffering of my friends or family. I don't think that I'm exceptional in this regard. Does this mean that their suffering has no value to me? No, obviously not, I would never torture someone to get my friend a car or make my elderly mother a sandwich.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2012-02-19T09:02:53.109Z · LW(p) · GW(p)
Put aside my earlier notions of "unequal" and "unfair"... I don't think they're necessary for us to proceed on this issue.
You said these things were "bad when they on net hurt people". I noticed you said people, and not non-human animals, but you have said that you put at least some value on non-human animals.
Likewise, you've agreed that the pro-friend, pro-family preference only carries so far. But how far does the pro-human preference go? Assuming we agree on (1) the quality of life of certain nonhuman animals as they are raised for food, (2) the capability of these nonhuman animals to feel a range of pain, and (3) the change in your personal quality of life by adopting habits to avoid most to all of this food (three big assumptions), then it seems like you're fine with a significant measure of speciesism.
I guess if your reaction is "so what", we might just have rather different terminal values, though I'm kind of surprised that would be the case.
Replies from: None↑ comment by [deleted] · 2012-02-19T09:08:48.994Z · LW(p) · GW(p)
You said these things were "bad when they on net hurt people". I noticed you said people, and not non-human animals, but you have said that you put at least some value on non-human animals.
That was in the context of thinking about sexism and racism. I assumed they have little impact on non-humans.
But how far does the pro-human preference go? Assuming we agree on (1) the quality of life of certain nonhuman animals as they are raised for food, (2) the capability of these nonhuman animals to feel a range of pain, and (3) the change in your personal quality of life by adopting habits to avoid most to all of this food (three big assumptions), then it seems like you're fine with a significant measure of speciesism.
I guess if your reaction is "so what", we might just have rather different terminal values, though I'm kind of surprised that would be the case.
I could be underestimating how much animals suffer (I almost certainly am to a certain extent, simply because it is not something I have researched, and less suffering is the comforting default answer); you could be overestimating how much you care about animals being in pain due to anthropomorphizing them somewhat.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2012-02-19T18:46:02.097Z · LW(p) · GW(p)
you could be overestimating how much you care about animals being in pain due to anthropomorphizing them somewhat.
Definitely a possibility, though I try to eliminate it.
↑ comment by Peter Wildeford (peter_hurford) · 2012-01-09T15:13:19.806Z · LW(p) · GW(p)
Rejecting speciesism doesn't mean you have to value nonhuman animals and humans equally; you just have to value their suffering equally.
comment by [deleted] · 2012-01-09T14:57:04.285Z · LW(p) · GW(p)
Do the experiences of merely sentient minds receive a weight of 0, 1, or somewhere in between?
Why should all sentient self-aware minds or persons have a weight of 1 in personal moral calculation? If one of two people has to die, me or a random human, I think I'll pick me every time. If I have to pick between me and a really awesome person I think I would consider it might be better for me to die. Now you might say "aha you only picked the more awesome person because that person is giving more awesomeness to other people!", but this doesn't really address the point at all. Say my hypothetical daughter, who may or may not become a really awesome person, wouldn't really have to wonder who I'd pick. In fact even if I had data that she would do less good for other people in the remainder of her life than I would if I picked me, I think I'd probably still pick her unless she was likely to become a successful serial killer or something.
Does that make me unethical? Dunno. Maybe. But do I have any reason to care about an XML tag that reads "unethical" floating above my head? As you imply in a later paragraph, not really.
But I do agree "non-persons" don't seem to have a value of 0 in my preferences.
Replies from: syllogism↑ comment by syllogism · 2012-01-09T15:15:49.166Z · LW(p) · GW(p)
Why should all sentient self-aware minds or persons have a weight of 1 in personal moral calculation? If one of two people has to die, me or a random human, I think I'll pick me every time.
Right, we have to draw a distinction here. I'm talking about how we define what's more ethical. That doesn't mean you're going to live up to that perfect ethical standard. You can say, in general, people's lives are equally valuable, and that knowing nothing about the two groups, you'd prefer two people died instead of three. Of course, in reality, we're not perfectly ethical, so we're always going to be choosing the set of three if we're in it. That doesn't change our definition, though.
Does that make me unethical? Dunno. Maybe. But do I have any reason to care about an XML tag that reads "unethical" floating above my head? As you imply in a later paragraph, not really.
"Unethical" isn't a binary tag, though. Personally, I think in terms of a self-interest multiplier. It's more ethical to save 10 people instead of 1, but if I wouldn't do that if I were the one, then my self-interest multiplier is 10x.
So just what is my self-interest multiplier at? Well, I don't know exactly how great a bastard I am. But I do try to keep it a bit consistent. For instance, if I'm deciding whether to buy bacon, I try to remember that causing pain to pigs is as bad as causing pain to humans, all else being equal, and that I'm being fooled by a lack of emotional connection to them. So that means that buying factory-farmed bacon implies a far, far greater self-interest multiplier than I'm comfortable with. I'd really rather not be that much of a bastard, so I don't buy it.
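One way to read the "self-interest multiplier" idea is as the ratio by which your own stake gets inflated before being compared with everyone else's. The sketch below is only an illustration of that reading; the function name and every number in it are invented, not taken from the comment.

```python
# An illustrative reading of the "self-interest multiplier": how many times
# over I weight my own stake relative to an equal stake held by someone else.
# The numbers are invented purely for illustration.

def acts_selfishly(my_stake, others_stakes, multiplier):
    """True if my inflated stake outweighs the sum of everyone else's stakes."""
    return my_stake * multiplier > sum(others_stakes)

# Saving myself instead of 10 strangers implies a multiplier of at least 10.
print(acts_selfishly(my_stake=1.0, others_stakes=[1.0] * 10, multiplier=10.5))  # True

# Cheap bacon vs. a pig's suffering in a factory farm: if the pig's loss is
# hundreds of times my gain, buying it implies a multiplier in the hundreds.
print(acts_selfishly(my_stake=1.0, others_stakes=[300.0], multiplier=10.5))  # False
```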
Replies from: None, None, duckduckMOO↑ comment by [deleted] · 2012-01-09T15:38:06.767Z · LW(p) · GW(p)
Right, we have to draw a distinction here. I'm talking about how we define what's more ethical. That doesn't mean you're going to live up to that perfect ethical standard. You can say, in general, people's lives are equally valuable, and that knowing nothing about the two groups, you'd prefer two people died instead of three. Of course, in reality, we're not perfectly ethical, so we're always going to be choosing the set of three if we're in it. That doesn't change our definition, though.
So if this "ethics" thing dosen't describe our preference properly... uh, what's it for then?
↑ comment by [deleted] · 2012-01-09T15:32:35.082Z · LW(p) · GW(p)
So just what is my self-interest multiplier at?
I think in terms of an information-theoretic definition of self. My self-interest multiplier doesn't rely on me being a single meat-bag body. It works for my ems or my perfect copies too. And for the more imperfect copies, and even many of the botched copies (with an additional modifier that's somewhat below 1), and ... do you see where I'm going with this?
Replies from: syllogism↑ comment by syllogism · 2012-01-09T15:38:17.657Z · LW(p) · GW(p)
Yeah, I do, but what I don't see is how this is ethics, and not mere self-interest.
If you don't draw any distinction between what you personally want and what counts as a better world in a more universalised way, I don't see how the concept of "ethics" comes in at all.
Replies from: None↑ comment by [deleted] · 2012-01-09T15:42:06.651Z · LW(p) · GW(p)
Can we taboo the words ethics and self-interest?
Replies from: syllogism↑ comment by syllogism · 2012-01-09T17:08:06.701Z · LW(p) · GW(p)
Okay. "Morality"'s banned too, as I use it as a synonym for ethics.
As a sub-component of my total preferences, which are predictors of my actions, I consider a kind of "averaged preferences" where I get no more stake in deciding what constitutes a better world than any other mind. The result of this calculation then feeds into my personal preferences, such that I have a weak but not inconsiderable desire to maximise this second measure, which I weigh against other things I want.
It seems to me that you don't do this second loop through. You have your own desires, which are empathically sensitive to some more than others, and you maximise those.
Replies from: None↑ comment by [deleted] · 2012-01-09T17:30:37.514Z · LW(p) · GW(p)
I think our positions may not be that different.
As a sub-component of my total preferences, which are predictors of my actions, I consider a kind of "averaged preferences" where I get no more stake in deciding what constitutes a better world than any other mind. The result of this calculation then feeds into my personal preferences, such that I have a weak but not inconsiderable desire to maximise this second measure, which I weigh against other things I want.
It seems to me that you don't do this second loop through.
Oh I do that too. The difference is that I apply an appropriately reduced selfish factor for how much I weigh minds that are similar to or dissimilar from my own in various ways.
You can implement the same thing in your total preferences algorithm by using an extended definition of "me" for finding the value of "my personal preferences".
Edit: I'm not quite sure why this is getting down voted. But I'll add three clarifications:
- I obviously somewhat care about minds that are completely alien to my own too
- When I said not that different I meant it; I didn't mean identical, I just mean the output may not be that different. It really depends on which definition of self one is using when running this algorithm; it also depends what our "selfish constant" is (it is unlikely we have the same one).
- By "extended LWish dentition of "me", I meant the attitude where if you make a perfect copy of you, they are both obviously you, and while they do diverge, and neither can meaningfully call itself the "original".
↑ comment by syllogism · 2012-01-09T17:46:14.756Z · LW(p) · GW(p)
To me, that second loop through only has value to the extent that I can buy into the idea that it's non-partisan -- that it's "objective" in that weaker sense of not being me-specific.
This is why I was confused. I assumed that the problem was, when you talked about "making the world a better place", "better" was synonymous with your own preferences (the ones which are predictors of your actions). In other words, you're making the kind of world you want. In this sense, "making the world a better place" might mean you being global dictator in a palace of gold, well stocked harems, etc.
To me, putting that similarity factor into your better-world definition is just a lesser version of this same problem. Your definition of "better world" is coloured by the fact that you're doing the defining. You've given yourself most of the stake in the definition, by saying minds count to the extent that they are similar to your own.
Replies from: None, None↑ comment by [deleted] · 2012-01-09T18:41:03.979Z · LW(p) · GW(p)
To me, that second loop through only has value to the extent that I can buy into the idea that it's non-partisan -- that it's "objective" in that weaker sense of not being me-specific.
The components that don't have any additional weight of you are still there in my implementation. If you feel like calling something objective you may as well call that part of the function that. When I said somewhat under 1, that was in the context of the modifier I give to people/entities that are "part-me" when applying the selfish multiplier.
Konkvistador:
1 Me * selfish multiplier + 0.5 Me * 0.5 * selfish multiplier + ... + 0 Me + 0 Me + 0 Me
syllogism:
1 Me * selfish multiplier + 0.5 Me + 0.3 Me + ... + 0 Me + 0 Me + 0 Me
As you can probably see there are trivial ways to make these two equivalent.
Replies from: syllogism↑ comment by syllogism · 2012-01-10T00:25:45.339Z · LW(p) · GW(p)
Hmm either I don't understand or you don't.
Define P[i][j] as the preference-weight for some outcome j of some mind i. P[me][j] is my preference weight for j.
To decide my top-level preference for j -- i.e. in practice whether I want to do it -- I consider
S * P[me][j] + E * sum(P[i][j] for i in minds)
Where S is the selfishness constant, E is the ethics constant, and S+E=1 for the convenience of having my preferences normalised to [0,1].
In other words, I try to estimate the result of an unweighted sum of every mind's preferences, and call the result of that what a disinterested observer would decide I should do. I take that into account, but not absolutely. Note that this makes my preference function recursive, but I don't see that this matters.
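A minimal transcription of that rule into runnable form, assuming for the moment that P[i][j] is just a lookup table; the preference values below are invented purely to exercise the formula, not taken from the comment.

```python
# S * P[me][j] + E * sum(P[i][j] for i in minds), with E = 1 - S.
# All preference numbers here are made up for illustration.

def top_level_preference(j, P, me, S):
    E = 1.0 - S  # ethics constant; S + E = 1 as described above
    disinterested_sum = sum(P[i][j] for i in P)  # every mind counted once, me included
    return S * P[me][j] + E * disinterested_sum

P = {
    "me":  {"buy_bacon": +1.0},
    "pig": {"buy_bacon": -50.0},  # a stand-in for the pig's stake in a factory-farmed life
}
print(top_level_preference("buy_bacon", P, me="me", S=0.9))
# 0.9 * 1.0 + 0.1 * (1.0 - 50.0) = -4.0
```

Even with a heavily selfish S, the unweighted sum over all minds can still swing the top-level preference, which is the point of the second loop through.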
I don't think your calculation is equivalent, because you don't estimate sum(P[i][j] for i in minds). To me this means you're not really thinking about what would be preferable to a disinterested observer, and so it feels like the playing of a different game.
PS In terms of FAI, P[i][j] is the hard part -- getting some accurate anticipation of what minds actually prefer. I unapologetically wave my hands on this issue. I have this belief that a pig really, really, really doesn't enjoy its life in a factory farm, and that I get much less out of eating bacon than it's losing. I'm pretty confident I'm correct on that, but I have no idea how to formalise it into anything implementable.
Replies from: None↑ comment by [deleted] · 2012-01-10T07:05:56.860Z · LW(p) · GW(p)
Hmm either I don't understand or you don't.
I don't think I'm misunderstanding since while we used different notation to describe this:
S * P[me][j] + E * sum(P[i][j] for i in minds)
1 Me * selfish multiplier + 0.5 Me + 0.3 Me + ... + 0 Me + 0 Me + 0 Me
We both described your preferences the same way. Though I neglected to explicitly normalize mine. To demonstrate I'm going to change the notation of my formulation to match yours.
1 Me * selfish multiplier + 0.5 Me + 0.3 Me + ... + 0 Me + 0 Me + 0 Me
P[me][j] * S + P[1][j] + P[2][j] + ... + P[i-2][j] + P[i-1][j] + P[i][j]
S * P[me][j] + E * sum(P[i][j] for i in minds)
My notation may have been misleading in this regard: "0.5 Me" isn't 0.5 times Me, it is just the mark I'd use for a mind that is ... well, 0.5 Me. In your model the "me content" doesn't matter when tallying minds, except when it hits 1 in your own, so there is no need to mark it, but the reason I still used the fraction-of-me notation to describe certain minds was to give an intuition of what your described algorithm and my described algorithm would do with the same data set.
Konkvistador:
1 Me * selfish multiplier + 0.5 Me * 0.5 * selfish multiplier + ... + 0 Me + 0 Me + 0 Me
syllogism:
1 Me * selfish multiplier + 0.5 Me + 0.3 Me + ... + 0 Me + 0 Me + 0 Me
So if syllogism and Konkvistador were using the same selfish multiplier (let us call it S for short, as you do), the difference between their systems would be the following:
0.5 Me * 0.5 * (S-1) + 0.3 Me * 0.3 * (S-1) + ... + (really small fraction of Me) * (really tiny number) * (S-1)
This may be a lot or it may not be very much, it really depends on how big it is compared to:
1 Me * S + 0 Me + 0 Me + 0 Me + ... 0 Me
In other words if "Me" is very concentrated in a universe, say you drop me in a completely alien one, my algorithm wouldn't produce an output measurably different from your algorithm. Your algorithm can also consistently give the same result if your S and Me embrace an extended self-identify, rather than just your local feeling of self. Now this of course boils down to the S factor and me being different for the same person when using this algorithm (we are after all talking about how something is or isn't implemented rather than having silly sloppy math for fun), but I think people really do have a different S factor when thinking of such issues.
In other words, using S * P[me][j] doesn't force the "me" in P[me][j] to necessarily be a mind with a Me-value of one. To help you understand what I mean by that, imagine there is a universe that you can arrange to your pleasure, and it contains P[you], but not just any P[you]: it contains P[you] minus the last two weeks of memory. Does he still deserve the S factor boost? Or at least part of it?
Readers may be wondering, if the two things can be made mathematically equivalent, why I prefer my implementation to his (his is probably more standard among utilitarians who don't embrace an extended self). Why not just adopt the same model but use a different value of Me or a different S to capture your preferences? This is because in practice I think mine makes the better heuristic for me:
The more similar a mind is to mine, the less harm is done by my human tendency towards anthropomorphizing (the mind projection fallacy is less of an issue when the slime monster really does want our women). In other words, I can be more sure that my estimation of their interests, goals and desires is less likely to be influenced by subconsciously rigging "their" preferences in my favour, because those preferences are now explicitly partially determined by the algorithm in my brain that presumably really does want to find the best option for an individual (the one that runs when I say "What do I want?"). Most rationalist corrections made for 0.5 Me * 0.5 also have to be used on Me, and vice versa.
I find it easier to help most people, because most people are pretty darn similar to me when comparing them with non-human or non-living processes. And it doesn't feel like a grand act of selflessness or something that changes my self-image, signals anything or burns "willpower", but more like common sense.
It captures my intuition that I don't just care about my preferences and some averaged thing, but I care about specific people's preferences independent of "my own personal desires" more than others. This puts me in the right frame of mind when interacting with people I care about.
Edit: Down-voted already? Ok, can someone tell me what I'm doing wrong here?
↑ comment by duckduckMOO · 2012-01-09T18:13:47.499Z · LW(p) · GW(p)
How does you personally buying bacon hurt pigs? Is it because you wouldn't eat a factory-farmed non-person human (and if so, why not?), or an object-level calculation of the bastardliness of buying factory-farmed bacon (presumably via your impact on pigs dying)?
I ask because I personally can't see the chain from me personally buying pig to pigs dying and I like having an easy source of protein. My brain tells me 0 extra animals die as a result of my eating meat.
I say "my brain" rather than "I" because I suspect this may be rationalisation: I don't think I'd react the same way to the idea of eating babies, or decide eating meat is fine because no extra animals die, if I wasn't already eating meat.
It's starting to look like rationalisation to me. But I still don't see any object level cost to eating meat.
Edit: TL;DR eating meat is proof of my unethicalness but not actually unethical. Oh, and for the record I reserve the right to be unethical.
Replies from: TheOtherDave, syllogism↑ comment by TheOtherDave · 2012-01-09T18:55:14.746Z · LW(p) · GW(p)
When pig farmers decide how many pigs to slaughter for bacon, they do so based on (among other things) current sales figures for bacon. When I buy bacon, I change those figures in such a way as to trigger a (negligibly) higher number of pigs being slaughtered. So, yeah, my current purchases of bacon contribute to the future death of pigs.
Of course, when pig farmers decide how many sows to impregnate, they do so based on (among other things) current sales figures for bacon. So my current purchases of bacon contribute to the future birth of pigs as well.
So if I want to judge the ethical costs of my purchasing bacon, I need to decide if I value pigs' lives (in which case purchasing bacon might be a good thing, since it might lead to more pig-lives), as well as decide if I negatively value pigs' deaths (in which case purchasing bacon might be a bad thing, since it leads to more pig-deaths). If it turns out that both are true, things get complicated, but basically I need to decide how much I value each of those things and run an expected value calculation on "eat bacon" and "don't eat bacon" (as well as "eat more bacon than I did last month," which might encourage pig farmers to always create more pigs than they kill, which I might want if I value pig lives more than I negatively value pig deaths).
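For what it's worth, the expected value comparison I'm describing has a very simple shape. A toy sketch, with every number invented purely to show the form of the calculation rather than my actual estimates:

```python
# Toy expected value comparison for "eat bacon" vs. "don't eat bacon".
# Every number here is made up; only the shape of the calculation matters.

def action_value(extra_pig_births, extra_pig_deaths,
                 value_per_life, value_per_death, other_value=0.0):
    """Value = births * v_life + deaths * v_death + everything else I care about."""
    return (extra_pig_births * value_per_life
            + extra_pig_deaths * value_per_death
            + other_value)

# Marginal effect of one month of my bacon purchases (hypothetical estimates).
eat = action_value(extra_pig_births=0.1, extra_pig_deaths=0.1,
                   value_per_life=+1.0, value_per_death=-3.0,
                   other_value=+0.5)        # convenience plus tasty bacon
dont_eat = action_value(0.0, 0.0, +1.0, -3.0, other_value=0.0)

print("eat:", eat, "don't eat:", dont_eat)
# With these invented weights "eat" comes out ahead; shift them and it flips.
```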
Personally, I don't seem to value pig lives in any particularly additive way... that is, I value there being some pigs rather than no pigs, but beyond some hazy threshold number of "some" (significantly fewer than the actual number of pigs in the world), I don't seem to care how many there are. I don't seem to negatively value pig deaths very much either, and again I don't do so in any particularly additive way. (This is sometimes called "scope insensitivity" around here and labeled a sign of irrational thinking, though I'm not really clear what's wrong with it in this case.)
Replies from: Pablo_Stafforini, syllogism, duckduckMOO↑ comment by Pablo (Pablo_Stafforini) · 2012-08-18T18:16:48.489Z · LW(p) · GW(p)
You don't need to value pig lives as such to conclude that eating pigs would be against your values. You just need to value (negatively) certain mental states that the pigs can experience, such as the state of being in agony.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-08-18T18:59:04.337Z · LW(p) · GW(p)
I agree that I can conclude that eating pigs is against my values in various different ways, not all of which require that I value pig lives. (For example, I could value keeping kosher.)
But negatively valuing pig agony, period-full-stop, doesn't get me there. All that does is lead me to conclude that if I'm going to eat pigs, I should do so in ways that don't result in pigs experiencing agony. (It also leads me to conclude that if I'm going to refuse to eat pigs, I should do that in a way that doesn't result in pigs experiencing agony.)
If I'm at all efficient, it probably leads me to painlessly exterminate pigs... after all, that guarantees there won't be any pigs-in-agony mental states. And, heck, now that there are all these dead pigs lying around, why not eat them?
More generally, valuing only one thing would lead me to behave in inhuman ways.
Replies from: Pablo_Stafforini↑ comment by Pablo (Pablo_Stafforini) · 2012-08-18T22:31:01.935Z · LW(p) · GW(p)
You claimed that you didn't value pig lives, presumably as a justification for your decision to eat pigs. You then acknowledged that, if you valued the absence of agony, this would provide you with a reason to abstain from eating pigs not raised humanely. Do you value the absence of agony? If so, what animal products do you eat?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-08-19T01:59:50.806Z · LW(p) · GW(p)
First of all, I didn't claim I don't value pig lives. I claimed that the way I value pig lives doesn't seem to be additive... that I don't seem to value a thousand pig lives more than a hundred pig lives, for example. Second of all, the extent to which I value pig lives is almost completely unrelated to my decision to eat pigs. I didn't eat pigs for the first fifteen years of my life or so, and then I started eating pigs, and the extent to which I value pig lives did not significantly change between those two periods of my life.
All of that said... I value the absence of agony. I value other things as well, including my own convenience and the flavor of yummy meat. Judging from my behavior, I seem to value those things more than I negatively value a few dozen suffering cows or a few thousand suffering chickens. (Or pigs, in principle, though I'm not sure I've actually eaten a whole pig in my life thus far... I don't much care for pork.)
Anyway, to answer your question: I eat pretty much all the animal products that are conveniently available in my area. I also wear some, and use some for decoration.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-08-29T02:12:08.883Z · LW(p) · GW(p)
If anyone is inclined to explain their downvotes here, either publicly or privately, I'd be appreciative... I'm not sure what I'm being asked to provide less of.
↑ comment by syllogism · 2012-01-10T00:12:24.651Z · LW(p) · GW(p)
Hmm this business of valuing pig lives doesn't sit right with me.
My idea of utilitarianism is that everybody gets an equal vote. So you can feel free to include your weak preferences for more pigs in your self-interested vote, the same way you can vote for the near super-stimulus of crisp, flavoursome bacon. But each individual pig, when casting their vote, is completely apathetic about the continuation of their line.
So if you follow a utilitarian definition of what's ethical, you can't use "it's good that there are pigs" as an argument for eating them being ethical. It's what you want to happen, not what everyone on average wants to happen. I want to be king of the world, but I can't claim that everyone else is unethical for not crowning me.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-10T00:49:33.525Z · LW(p) · GW(p)
Leaving label definitions aside, I agree with you that IF there's a uniquely ethical choice that can somehow be derived by aggregating the preferences of some group of preference-havers, then I can't derive that choice from what I happen to prefer, so in that case if I want to judge the ethical costs of purchasing bacon I need to identify what everybody else prefers as part of that judgment. (I also, in that case, need to know who "everybody else" is before I can make that determination.)
Can you say more about why you find that premise compelling?
Replies from: syllogism↑ comment by syllogism · 2012-01-10T01:09:57.304Z · LW(p) · GW(p)
I find that premise compelling because I have a psychological need to believe I'm motivated by more than self-interest, and my powers of self-deception are limited by my ability to check my beliefs for self-consistency.
What this amounts to is the need to ask not just what I want, but how to make the world "better" in some more impartial way. The most self-convincing way I've found to define "better" is that it improves the net lived experience of other minds.
In other words, if I maximise that measure, I very comfortably feel that I'm doing good.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-10T15:06:50.733Z · LW(p) · GW(p)
Fair enough.
Personally I reject that premise, though in some contexts I endorse behaving as though it were true for pragmatic social reasons. But I have no problem with you continuing to believe it if that makes you feel good... it seems like a relatively harmless form of self-gratification, and it probably won't grow hair on your utility function.
↑ comment by duckduckMOO · 2012-01-09T20:10:08.337Z · LW(p) · GW(p)
Accidentally hit the comment button with a line of text written. Hit the retract button so I could start again. AND IT FUCKING JUST PUT LINES THROUGH IT, WHAT THE FUCK.
That is to say, How do I unretract?
"When I buy bacon, I change those figures in such a way as to trigger a (negligibly) higher number of pigs being slaughtered."
But is my buying bacon actually recorded? It's possible that those calculations are done on sufficiently large scales that my personally eating meat causes no pig suffering. As in, were I to stop, would any fewer pigs suffer?
It's not lives and deaths I'm particularly concerned with. The trade-off I'm currently thinking about is pig suffering vs. bacon. And if pig suffering is the same whether or not I eat already-dead pigs, I'll probably feel better about it.
Scope insensitivity would be not taking into account my personal impact on pig suffering, I suppose. Seeing as there's already a vast amount of it, I'm tempted to label anything I could do about it "pointless" or similar.
Which bypasses the actual utility calculation (otherwise known as actual thinking: shutting up and multiplying).
The point is I need to have a think about the expected consequences of buying meat, eating meat someone else has bought, eating animals I find by the road, etc. Do supermarkets record how much meat is bought? How important is eating meat for my nutrition (and/or convenience)? Etc.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-09T20:52:41.891Z · LW(p) · GW(p)
You can edit the comment. Alternatively, you can simply create a new comment.
↑ comment by syllogism · 2012-01-10T00:00:42.287Z · LW(p) · GW(p)
It's hard to visualise, yeah.
Let's say that demand for bacon fell by 50%. It seems obvious that the market would soon respond and supply of bacon (in number of pigs raised for slaughter) would also fall by 50%, right? Okay, so re-visualise for other values -- 40%, 90%, 10%, 5%, etc.
You should now be convinced that there's a linear relationship between bacon supply and bacon demand. At a fine enough granularity it's probably actually a step function, because individual businesses will succeed or fail based on pricing, or individual farmers might switch to a different crop. But every non-consumer of meat bears an equal share of responsibility for the aggregate effect.
In other words, let's say 5% of people are vegetarian, causing 5% less meat production. We're all equally responsible for that decline, so we all get to say that, on average, we caused fewer pigs to die.
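As a sanity check on that "equal share of the decline" claim, here is a toy calculation; the population and production figures are entirely invented, only the cancellation matters:

```python
# Toy version of the shared-responsibility argument. If supply tracks demand
# linearly, a fraction f of abstainers causes an f-sized drop in production,
# and each abstainer's equal share of that drop is just the per-capita figure.

population = 300_000_000        # hypothetical number of consumers
pigs_per_year = 100_000_000     # hypothetical production if nobody abstained
f = 0.05                        # fraction of people who are vegetarian

decline = pigs_per_year * f     # 5,000,000 fewer pigs raised and killed
abstainers = population * f     # 15,000,000 people sharing the credit

print(decline / abstainers)     # ~0.33 pigs per abstainer per year
# Note that f cancels out: each abstainer spares pigs_per_year / population
# on average, even if the real supply curve is a step function.
```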
comment by Psychosmurf · 2012-01-10T07:30:02.191Z · LW(p) · GW(p)
I don't think self-awareness and sentience are the only dimensions along which minds can differ. The kinds of goals a mind tries to attain are much more relevant. I wouldn't want to ensure the survival of a mind that would make it more difficult for me to carry out my own goals. For example, let's say a self-aware and sentient paperclip maximizer were to be built. Can killing it be said to be unethical?
I think the minds of most non-human animals (with maybe the exception of some species of hominids) and human sociopaths are so different from ours that treating them unequally is justified in many situations.
Replies from: MugaSofer, syllogism↑ comment by syllogism · 2012-01-10T09:53:55.965Z · LW(p) · GW(p)
For example, let's say a self-aware and sentient paperclip maximizer were to be built. Can killing it be said to be unethical?
The maximiser would get a "vote" in the utility calculation, just as every other mind would get a vote. i.e., its preferences are fully considerable. We're performing an expected utility calculation that weighs each mind's preferences equally.
So the maximiser has a strong preference not to die, which is a negative on killing it. But assuming it's going to tile the universe with paperclips, its vote would get out-weighed by the votes of all the other minds.
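Spelled out as a toy calculation, with the utilities and headcounts invented purely to illustrate the equal weighting:

```python
# Equal-vote aggregation: each mind's preference counts once, with no
# species- or substrate-based weighting. All numbers are illustrative.

def total_preference(outcome, groups):
    """Sum of each mind's preference for the outcome; groups = (count, prefs)."""
    return sum(count * prefs[outcome] for count, prefs in groups)

groups = [
    (1, {"kill_maximiser": -100, "let_it_tile": 0}),               # the paperclipper
    (7_000_000_000, {"kill_maximiser": +1, "let_it_tile": -100}),  # everyone else
]

print(total_preference("kill_maximiser", groups) >
      total_preference("let_it_tile", groups))
# True: its vote counts fully, but it is outweighed.
```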
Replies from: None, None↑ comment by [deleted] · 2012-01-10T11:29:23.597Z · LW(p) · GW(p)
So the maximiser has a strong preference not to die, which is a negative on killing it. But assuming it's going to tile the universe with paperclips, its vote would get out-weighed by the votes of all the other minds.
A less convenient possible world: There are a trillion paper-clip maximizers. They prefer the matter we are made out of to be used for paper-clips.
Replies from: syllogism, MugaSofer↑ comment by syllogism · 2012-01-10T12:38:09.668Z · LW(p) · GW(p)
Then it's more ethical to give the maximisers what they want.
Replies from: Mass_Driver↑ comment by Mass_Driver · 2012-08-23T22:42:27.424Z · LW(p) · GW(p)
Even though there's no moral realism, it still seems wrong that such an important ethical question turns out to hinge on whether humans or paper-clip maximizers started breeding first. One way of not biting that bullet is to say that we shouldn't be "voting" at all. The only good reason to vote is when there are scarce, poorly divisible resources. For example, it makes sense to vote on what audio tracks to put on the Voyager probe; we can only afford to launch, e.g., 100 short sound clips, and making the clips even shorter to accommodate everyone's preferred tracks would just ruin them for everyone.
On the other hand, if five people want to play jump rope and two people want to play hopscotch, the solution isn't to hold a vote and make everyone play jump rope -- the solution is for five people to play jump rope and two people to play hopscotch. Similarly, if 999 billion Clippies want to make paperclips and a billion humans want to build underground volcano lairs, and they both need the same matter to do it, and Clippies experience roughly the same amount of pleasure and pain as humans, then let the Clippies use 99.9% of the galaxy's matter to build paperclips, and let the humans use 0.1% of the galaxy's matter to build underground volcano lairs. There's no need to hold a vote, or even to attempt to compare the absolute value of human utility with the absolute value of Clippy utility.
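The arithmetic of that split, just to make the allocation rule explicit (the counts are the ones from the hypothetical above):

```python
# Proportional allocation by headcount, rather than a winner-take-all vote.

clippies = 999_000_000_000
humans = 1_000_000_000
total = clippies + humans

print(f"paperclips: {clippies / total:.1%}, volcano lairs: {humans / total:.1%}")
# paperclips: 99.9%, volcano lairs: 0.1%
```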
The interesting question is what to do about so-called "utility monsters" -- people who, for whatever reason, experience pleasure and pain much more deeply than average. Should their preferences count more? What if they self-modified into utility monsters specifically in order to have their preferences count more? What if they did so in an overtly strategic way, e.g., +20 utility if all demands are met, and -1,000,000 utility if any demands are even slightly unmet? More mundanely, if I credibly pre-commit to being tortured unless I get to pick what kind of pizza we all order, should you give in?
↑ comment by MugaSofer · 2012-10-26T09:45:43.824Z · LW(p) · GW(p)
I think Eliezer addressed that at one point (using a cake-making intelligence, I believe) - it would be more ethical, from a human perspective, to allow the paperclippers to make paperclips. However, it would be unethical to change the world from its current state to one containing trillions of paperclippers, since the CEV of current people doesn't want that.
↑ comment by [deleted] · 2012-01-10T11:29:57.130Z · LW(p) · GW(p)
The maximiser would get a "vote" in the utility calculation, just as every other mind would get a vote. i.e., its preferences are fully considerable. We're performing an expected utility calculation that weighs each mind's preferences equally.
(Trying to understand your intuitions better. I'm considering positions similar to yours and so maybe I can let you do some of the thinking for me.)
Is there a way for a mind A to prefer something more in an ethical sense than mind B? Is there some kind of Occam's Razor of preferences, say based on complexity (wanting everything to be turned into paperclips is better than wanting everything to be turned into Japanese cast productions of Shakespeare in the original Klingon) or some kind of resource (wanting one more paperclip is better than wanting a universe full)?
How do you handle different preferences in one agent? Say a human wants to both eat chocolate ice cream and ban it. Do you treat this as two agents? (Based on what? Temporal separation? Spatial? Logical contradictions?)
Let's say a paperclipper comes along and wants to turn us all into paperclips. Let's limit it to the paperclipper vs. us humans for simplicity's sake, and let's assume we are all perfectly ethical. So we do what? Have a vote? Which kind of voting system (and why)? What if we don't trust some party - who does the counting? (In other words, how do you determine the preferences of an uncooperative agent?)
(Doesn't unconditionally including other minds into your moral calculus open you up to all kinds of exploits? Say I want to do X and you want NOT-X. So I breed and make a hundred more agents that want X. No matter how strong and unwilling you are, I can force you to cooperate with me, as long as there's enough minds on my side and you are bound to act ethically.)
Similarly, how do you deal with a hive-mind? Let's say Anonymous states tomorrow that it wants otters to be removed from the universe. Is this one vote? As many as Anonymous has members? (... that have been participating in the formulation or in general?) (Analogously for brains.)
comment by [deleted] · 2012-01-09T15:59:50.693Z · LW(p) · GW(p)
I want to be using an algorithm that would've gotten the right answer on slavery, even given 19th century inputs.
This doesn't constrain algorithm space as much as you may think. There are plenty of algorithms that would get the wrong answer for 19th century inputs, the right answer for 12th century inputs, and the right answer for 20th century inputs. Also, let's remember what 12th and 19th century inputs are: basically partially reconstructed (by interested parties, even!) incomplete input sets. When evaluating 21st century inputs and which algorithm to use, sure, I'd probably put some weight in favour of an algorithm that gives me the right result for the input sets mentioned, but I'm not sure this is a necessary precondition.
I mean, remember, it's not like we have a time machine. Sure, TDT may complicate stuff somewhat, but it isn't magic.
BTW, my post on moral progress seems somewhat relevant.
comment by mwengler · 2012-01-09T15:44:57.081Z · LW(p) · GW(p)
If ethics is simply the logical deductions from a set of axioms that you think you feel (or you feel you think, I find this confusing), then is ethics really any different from aesthetics? Could we have a similar post and discussion about the optimal policy of the metropolitan museum of art in admitting and excluding various works of art? On whether it is better to paint a room green or blue?
I'm as happy as the next person to feel righteous anger at someone who "wrongs" me in certain ways, and to kill the person and feel good about it. But my opinion that my actions and desires are the result of natural selection keeps me from thinking they have some status higher than aesthetics has.
Replies from: syllogism↑ comment by syllogism · 2012-01-09T16:59:22.712Z · LW(p) · GW(p)
It's not really any different, no. I suppose the aesthetic analogue would be trying to generalise from my own preferences to what I thought other people would like. There would be a difference between my own tastes, and how I was defining "good". Mapping back to ethics, there's a distinction between my own preferences (i.e. the predictors of my actual actions), and my sense of how the best world might be defined if I gave myself no more stake in deciding things than anyone else.
There's no objective reason for caring about the latter more than the former. I just do, and other people seem to care a lot too.
comment by Eugine_Nier · 2012-01-10T04:15:12.191Z · LW(p) · GW(p)
The reason for valuing all humans, as opposed to what you call "persons": it's much easier to tell whether something is a human than to tell whether something is a "person". And in any case, valuing all humans makes a much better Schelling point.
comment by MugaSofer · 2012-10-26T09:39:14.757Z · LW(p) · GW(p)
Because a rabbit doesn't understand its continued existence, it's not wrong to kill it suddenly and painlessly, out of sight/smell/earshot of other rabbits.
That doesn't follow. I understand its continued existence.
Great post, though. Well written, accurate and so on.
comment by Shmi (shminux) · 2012-01-09T20:18:01.328Z · LW(p) · GW(p)
In my ethical system farm animals are food. One should provide them with proper care and minimize their suffering, but that's as far as it goes. (Also, a happy organic free-range chicken tastes better than a pen-confined one.) Hopefully some day we will be able to grow tasty brainless meat in vats, just like we grow crops in the field, and the whole issue of ethics-based vegetarianism will be moot.
I can imagine a society where (some) humans are raised for food. In that case I would apply the same ethics: minimize suffering and work toward replacing them with a less controversial food source.
Replies from: syllogism, TheOtherDave↑ comment by syllogism · 2012-01-10T00:44:46.415Z · LW(p) · GW(p)
Why are animals food, though -- just because that's how we currently treat them? I think the status quo bias is obvious here. After all, you'd never want people to start farming humans, right? So why agree that it's okay once it starts?
Could your argument have been used to justify slavery?
In my ethical system black people are slaves. One should provide them with proper care and minimize their suffering, but that's as far as it goes. (Also, a happy well-treated black person picks more cotton than a beaten malnourished one.) Hopefully some day we will be able to develop automatic cotton pickers, just like we automate other tasks, and the whole issue of slavery will be moot.
Replies from: wedrifid, shminux, wedrifid
↑ comment by wedrifid · 2012-01-15T08:15:13.197Z · LW(p) · GW(p)
Why are animals food, though
What flavor of 'why?' are you after? The 'flavor' one seems most significant to me! It is also sufficient.
There is no problem with simply not having an ethical problem with a behavior: you don't currently have any problem with it, you like it, you don't wish to self-modify your ethics, and you're not persuaded by someone else's ability to find some similarity between the behavior in question and some other behavior that you do have an ethical problem with.
↑ comment by Shmi (shminux) · 2012-01-10T02:51:27.258Z · LW(p) · GW(p)
After all, you'd never want people to start farming humans, right? So why agree that it's okay once it starts?
There are perfectly good circumstances in which to start farming animals, like when your survival depends on it. I suspect that there could be a similar situation with farming humans (or at least processing them into Soylent Green). Other than that, I agree on the status quo bias.
Re slavery:
Yes, this obvious analogy occurred to me. I would feel more urgency to reevaluate my ethical system if I considered farm animals my equals. Your reasons for doing so may differ. Presumably emancipation was based partly on that reason and partly on compassion or other reasons; I am not an expert in the subject matter.
Replies from: Solitaire↑ comment by Solitaire · 2014-01-06T16:12:43.455Z · LW(p) · GW(p)
Ethical/moral objections aside, initiating the practice of human farming wouldn't be a logical or practical choice, as presumably farm-rearing humans would be just as energy-inefficient as farm-rearing livestock:
Animal protein production requires more than eight times as much fossil-fuel energy than production of plant protein while yielding animal protein that is only 1.4 times more nutritious for humans than the comparable amount of plant protein, according to the Cornell ecologist's analysis.
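Combining the two quoted ratios (my arithmetic, not part of the quote): per unit of delivered nutrition, the animal route costs roughly 8 / 1.4 ≈ 5.7 times as much fossil-fuel energy as the plant route.

```python
# Energy cost per unit of delivered nutrition, from the two quoted ratios.
energy_ratio = 8.0       # fossil-fuel energy, animal vs. plant protein
nutrition_ratio = 1.4    # nutritional value, animal vs. plant protein

print(round(energy_ratio / nutrition_ratio, 1))   # ~5.7x per unit of nutrition
```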
Killing and eating excess humans in the process of reducing the world's population to a sustainable level, on the other hand, might qualify as a logical use of resources.
↑ comment by wedrifid · 2012-01-15T07:55:23.254Z · LW(p) · GW(p)
Why are animals food, though -- just because that's how we currently treat them? I think the status quo bias is obvious here. After all, you'd never want people to start farming humans, right? So why agree that it's okay once it starts?
Could your argument have been used to justify slavery?
Yours certainly could. It's a fully general moral counterargument.
You have ethics that aren't mine. You want to keep your ethics and not convert to mine. Other people have different ethics to me and also different ethics to you. You don't approve of those other people keeping their evil ethics, because they are puppy-kicking, baby-eating, status-quo-loving slavers. Therefore, you should convert to my ethics.
Having a preference and keeping that preference isn't a status quo bias. It's the status quo.
↑ comment by TheOtherDave · 2012-01-09T20:21:34.913Z · LW(p) · GW(p)
Hm.
It sounds like you would prefer growing brainless meat in vats to raising farm animals (e.g., you say "hopefully"). Can you clarify why?
Replies from: shminux↑ comment by Shmi (shminux) · 2012-01-09T20:26:06.213Z · LW(p) · GW(p)
Note the "minimize suffering" part.
comment by Manfred · 2012-01-09T15:36:34.323Z · LW(p) · GW(p)
When deciding my values, I want to be using an algorithm that would've gotten the right answer on slavery, even given 19th century inputs.
Even assuming that "19th century inputs" contained enough misinformation that this distinction makes sense, are you saying this independence of inputs should be a general principle? This actually seems sorta bad. A set of ethics that would give the same answer if I was raised in 1500 as it would if I was raised in 2000 also seems likely to give the same answer if I'm raised in 2500.
If you have to choose between pushing a button that will save a life with probability 0.99, and cost a life with probability 0.01, surely it's not unethical after the fact if you got unlucky.
Nope, still unethical. I mean, you can define ethics as a property of actions rather than as something people do if you want. But surely, at least, that "surely" has to go.
comment by DubiousTwizzler · 2012-03-19T05:07:24.827Z · LW(p) · GW(p)
Is there any relevant research on the subject of animal sentience, animal "persons", etc?
I've read quite a few arguments from different points of view, but haven't found any actual science on the subject.