Why Eat Less Meat?

post by Peter Wildeford (peter_hurford) · 2013-07-23T21:30:21.234Z · LW · GW · Legacy · 505 comments

Contents

  Introduction
  Animals Can Suffer
    The Science
  Factory Farming Causes Considerable Suffering
  Vegetarianism Can Make a Difference
    How Many Would Be Saved?
    Collective Action
  Vegetarianism Is Easier Than You Think
    A Challenge

Previously, I wrote on LessWrong about the preliminary evidence in favor of using leaflets to promote veganism as a way of cost-effectively reducing suffering. That post prompted a large discussion with 530+ comments, in which I found that many people wanted me to write about why I think nonhuman animals deserve our concern in the first place.

Therefore, I wrote this essay to defend the view that if one cares about suffering, one should also care about nonhuman animals, since (1) they are capable of suffering, (2) they do suffer quite a lot, and (3) we can prevent their suffering. I hope that we can have a sober, non-mind-killing discussion about this topic, since it's possibly quite important.

 

Introduction

For the past two years, the only place I ate meat was at home with my family. As of October 2012, I've finally stopped eating meat altogether and can't see a reason why I would want to go back to eating meat. This kind of attitude toward eating is commonly classified as "vegetarianism," in which one refrains from eating the flesh of all animals, including fish, but still consumes animal products like eggs and milk (though I try to avoid eggs as best I can).

Why might I want to do this? And why might I see it as a serious issue? It's because I'm very concerned about the suffering inflicted on our "food animals" in the process of turning them into meat, because I see vegetarianism as a way to reduce this suffering by withdrawing support from the harmful process, and because vegetarianism has not been hard at all for me to accomplish.

 

Animals Can Suffer

Back in the 1600s, René Descartes thought nonhuman animals were soulless automatons that could respond to their environment and react to stimuli, but could not feel anything — humans were the only truly conscious species. Descartes hit on an important point: since feelings are completely internal to the animal doing the feeling, it is impossible to directly demonstrate that anyone else is truly conscious.

However, when it comes to humans, we don't let that stop us from assuming other people feel pain. When we jab a person with a needle, no matter who they are, where they come from, or what they look like, they share a rather universal reaction that we consider evidence of pain. We also extend this to our pets — we go to great lengths to avoid harming kittens, puppies, and other companion animals, and no one would want to kick a puppy or light a kitten on fire just because its consciousness cannot be directly observed. That's why we even go as far as having laws against animal cruelty.

The animals we eat are no different. Pigs, chickens, cows, and fish all have strikingly similar responses to stimuli that we would normally agree cause pain in humans and pets. Jab a pig with a needle, kick a chicken, or light a cow on fire, and it will react aversively, just like any cat, dog, horse, or human.

 

The Science

But we don't need to rely on just our intuition -- we can also look at the science. Animal scientists Temple Grandin and Mark Deesing conclude that "[o]ur review of the literature on frontal cortex development enables us to conclude that all mammals, including rats, have a sufficiently developed prefrontal cortex to suffer from pain". An interview with seven different scientists reaches the same conclusion: animals can suffer.

Dr. Jane Goodall, famous for having studied animals, writes in her introduction to The Inner World of Farm Animals that "farm animals feel pleasure and sadness, excitement and resentment, depression, fear, and pain. They are far more aware and intelligent than we ever imagined…they are individuals in their own right."  Farm Sanctuary, an animal welfare organization, has a good overview documenting this research on animal emotion.

Lastly, among much other evidence, in the "Cambridge Declaration on Consciousness", a prominent international group of cognitive neuroscientists, neuropharmacologists, neurophysiologists, neuroanatomists, and computational neuroscientists states:

Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors.  Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Nonhuman animals, including all mammals and birds, and many other creatures, including octopuses, also  possess these neurological substrates.

 

Factory Farming Causes Considerable Suffering

However, the fact that animals can suffer is just one piece of the picture; we next have to establish that animals do suffer as a result of people eating meat. Honestly, this is easier shown than told -- there is an extremely harrowing and shocking 11-minute video documenting the cruelty. Watching that video is perhaps the easiest way to see the suffering of nonhuman animals in these "factory farms" firsthand.

In making the case clear, Vegan Outreach writes "Many people believe that animals raised for food must be treated well because sick or dead animals would be of no use to agribusiness. This is not true."

They then go on to document, with sources, how virtually all birds raised for food are from factory farms where "resulting ammonia levels [from densely populated sheds and accumulated waste] commonly cause painful burns to the birds' skin, eyes, and respiratory tracts" and how hens "become immobilized and die of asphyxiation or dehydration", having been "[p]acked in cages (usually less than half a square foot of floor space per bird)".  In fact, 137 million chickens suffer to death each year before they can even make it to slaughter -- more than the number of animals killed for fur, in shelters and in laboratories combined!

Farm Sanctuary also provides an excellent overview of the cruelty of factory farming, writing "Animals on factory farms are regarded as commodities to be exploited for profit. They undergo painful mutilations and are bred to grow unnaturally fast and large for the purpose of maximizing meat, egg, and milk production for the food industry."

It seems clear that factory farming practices are truly deplorable, and certainly are not worth the benefit of eating a slightly tastier meal.  In "An Animal's Place", Michael Pollan writes:

To visit a modern CAFO (Confined Animal Feeding Operation) is to enter a world that, for all its technological sophistication, is still designed according to Cartesian principles: animals are machines incapable of feeling pain. Since no thinking person can possibly believe this any more, industrial animal agriculture depends on a suspension of disbelief on the part of the people who operate it and a willingness to avert your eyes on the part of everyone else.

 

Vegetarianism Can Make a Difference

Many people see the staggering amount of suffering in factory farms and, if they don't dismiss it outright, will say that there's no way they can make a difference by changing their eating habits. However, this is certainly not the case!

 

How Many Would Be Saved?

Drawing from the 2010 Livestock Slaughter Animal Summary and the Poultry Slaughter Animal Summary, 9.1 billion land animals are either grown in the US or imported (94% of which are chickens!), 1.6 billion are exported, and 631 million die before anyone can eat them, leaving 8.1 billion land animals for US consumption each year.

A naïve average would divide this total among the US population of 311 million, assigning 26 land animals to each person's annual consumption. Thus, by being vegetarian, you are saving 26 land animals a year that you would have otherwise eaten. And this doesn't even count fish, whose numbers could be quite high given how many fish need to be grown just to feed bigger fish!

Yet, this is not quite right. Supply and demand aren't perfectly linked: if you reduce your demand for meat, suppliers will react by lowering the price a little, so that more people can buy it and part of your reduction is offset. Since chickens dominate the meat market, we'll adjust using the supply elasticity of chickens, which is 0.22, and the demand elasticity of chickens, which is -0.52, giving a cumulative change in supply of roughly 0.3 (0.22 / (0.22 + 0.52)). Applying this multiplier, it's more accurate to say you're saving about 7.8 land animals a year or more. There are a lot of complex considerations in estimating elasticities, though, so this figure carries some uncertainty.
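To make the arithmetic above easier to follow, here is a minimal sketch in Python using the figures quoted in this post. It assumes the cumulative-elasticity factor supply_elasticity / (supply_elasticity + |demand_elasticity|), which is how the 0.3 multiplier above appears to be derived; treat it as an illustration of the calculation, not an exact economic model.

    # Back-of-the-envelope estimate of land animals spared per vegetarian per year.
    # Figures are the ones quoted above (2010 USDA slaughter summaries).
    animals_for_us_consumption = 8.1e9   # land animals for US consumption per year
    us_population = 311e6                # approximate US population

    naive_per_person = animals_for_us_consumption / us_population
    print(f"Naive estimate: {naive_per_person:.0f} animals per person per year")   # ~26

    # Supply-and-demand adjustment: when you stop buying, prices fall slightly and
    # other buyers pick up part of the slack, so production falls by less than one
    # unit per unit of demand withdrawn.  A common approximation of the net effect:
    #   factor = supply_elasticity / (supply_elasticity + |demand_elasticity|)
    supply_elasticity = 0.22
    demand_elasticity = -0.52
    factor = supply_elasticity / (supply_elasticity + abs(demand_elasticity))
    print(f"Elasticity factor: {factor:.2f}")                                      # ~0.30

    adjusted = naive_per_person * factor
    print(f"Adjusted estimate: {adjusted:.1f} animals per person per year")        # ~7.8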

 

Collective Action

One might object that since meat is often bought in bulk, one person's reduced consumption won't change the amount of meat the store orders, so the suffering will still be the same, except that some meat will go to waste. However, this ignores the effect of many different vegetarians acting together.

Imagine that your supermarket buys chicken wings in cases of 200. It would thus take 200 people each buying one less wing for the supermarket to order one fewer case. However, you have no idea whether you're vegetarian #1, vegetarian #56, or vegetarian #200 — the one who tips the order over the threshold. You can thus estimate that by buying one less wing you have a 1 in 200 chance of preventing 200 wings from being ordered, which is equivalent in expectation to reducing supply by one wing. So the bulk-buying effect basically cancels out. See here or here for more.
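Here is a minimal expected-value sketch of that argument, using the hypothetical 200-wing case size from the paragraph above:

    # Expected effect of forgoing one wing when the store only orders whole cases.
    # The 200-wing case size is the hypothetical figure used above.
    case_size = 200                # wings per case the supermarket orders
    p_threshold = 1 / case_size    # chance your forgone wing tips the store into
                                   # ordering one fewer case

    expected_reduction = p_threshold * case_size   # expected wings not ordered
    print(expected_reduction)                      # 1.0 -- same as if supply moved wing by wing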

Every time you buy factory-farmed meat, you are creating demand for that product, essentially saying "Thank you, I like what you're doing and want to encourage you to do it more". By eating less meat, we can stop our support of this industry.

 

Vegetarianism Is Easier Than You Think

So nonhuman animals can suffer and do suffer in factory farms, and we can help stop this suffering by eating less meat. I know people who get this far, but then stop and say that, as much as they would like to, there's no way they could be vegetarian because they like meat too much! However, such enjoyment of meat shouldn't count for much compared to the massive suffering each animal undergoes just to be farmed -- imagine someone refusing to stop eating your pet just because they enjoy eating your pet so much!

This is less of a problem than you might think, because being a vegetarian is really easy. Most people only think about what they would have to give up and how good it tastes, and don't think about the tasty things they could eat instead that have no meat in them. When I first decided to be a vegetarian, I simply switched from tasty hamburgers to tasty veggie burgers, and there was no problem at all.

 

A Challenge

To those who say that vegetarianism is too hard, I’d like to simply challenge you to just try it for a few days. Feel free to give up afterward if you find it too hard. But I imagine that you should do just fine, find great replacements, and be able to save animals from suffering in the process.

If reducing suffering is one of your goals, there’s no reason why you must either be a die-hard meat eater or a die-hard vegetarian. Instead, feel free to explore some middle ground. You could be a vegetarian on weekdays but eat meat on weekends, or just try Meatless Mondays, or simply try to eat less meat. You could try to eat bigger animals like cows instead of fish or chicken, thus getting the same amount of meat with significantly less suffering.

-

(This was also cross-posted on my blog.)

505 comments

Comments sorted by top scores.

comment by Rob Bensinger (RobbBB) · 2013-07-23T23:03:59.201Z · LW(p) · GW(p)

As someone who agrees with (almost) everything you wrote above, I fear that you haven't seriously addressed what I take to be any of the best arguments against vegetarianism, which are:

  1. Present Triviality. Becoming a vegetarian is at least a minor inconvenience — it restricts your social activities, forces you to devote extra resources to keeping yourself healthy, etc. If you're an Effective Altruist, then your time, money, and mental energy would be much better spent on directly impacting society than on changing your personal behavior. Even minor inconveniences and attention drains will be a net negative. So you should tell everyone else (outside of EA) to be a vegetarian, but not be one yourself.

  2. Future Triviality. Meanwhile, almost all potential suffering and well-being lies in the distant future; that is, even if we have only a small chance of expanding to the stars, the aggregate value for that vast sum of life dwarfs that of the present. So we should invest everything we have into making it as likely as possible that humans and non-humans will thrive in the distant future, e.g., by making Friendly AI that values non-human suffering. Even minor distractions from that goal are a big net loss.

  3. Experiential Suffering Needn't Correlate With Damage-Avoiding or Damage-Signaling Behavior. We have reason to think the two correlate in humans (or at least developed, cognitively normal humans) because we introspectively seem to suffer across a variety of neural and psychological states in our own lives. Since I remain a moral patient while changing dramatically over a lifetime, other humans, who differ from me little more than I differ from myself over time, must also be moral patients. But we lack any such evidence in the case of non-humans, especially non-humans with very different brains. For the same reason we can't be confident that four-month-old fetuses feel pain, we can't be confident that cows or chickens feel pain. Why is the inner experience of suffering causally indispensable for neurally mediated damage-avoiding behavior? If it isn't causally indispensable, then why think it is selected at all in non-sapients? Alternatively, what indispensable mechanism could it be an evolutionarily unsurprising byproduct of?

  4. Something About Sapience Is What Makes Suffering Bad. (Or, alternatively: Something about sapience is what makes true suffering possible.) There are LessWrongers who subscribe to the view that suffering doesn't matter, unless accompanied by some higher cognitive function, like abstract thought, a concept of self, long-term preferences, or narratively structured memories — functions that are much less likely to exist in non-humans than ordinary suffering. So even if we grant that non-humans suffer, why think that it's bad in non-humans? Perhaps the reason is something that falls victim to...

  5. Aren't You Just Anthropomorphizing Non-Humans? People don't avoid kicking their pets because they have sophisticated ethical or psychological theories that demand as much. They avoid kicking their pets because they anthropomorphize their pets, reflexively put themselves in their pets' shoes even though there is little scientific evidence that goldfish and cockatoos have a valenced inner life. (Plus being kind to pets is good signaling, and usually makes the pets more fun to be around.) If we built robots that looked and acted vaguely like humans, we'd be able to make humans empathize with those things too, just as they empathize with fictional characters. But this isn't evidence that the thing empathized with is actually conscious.

I think these arguments can be resisted, but they can't just be dismissed out of hand.

You also don't give what I think is the best argument in favor of vegetarianism, which is that vegetarianism does a better job of accounting for uncertainty in our understanding of normative ethics (does suffering matter?) and our understanding of non-human psychology (do non-humans suffer?).

Replies from: Viliam_Bur, Lukas_Gloor, DxE, peter_hurford, MTGandP, shminux, Xodarap, Juno_Watt
comment by Viliam_Bur · 2013-07-24T15:18:29.408Z · LW(p) · GW(p)

Becoming a vegetarian is at least a minor inconvenience — it restricts your social activities, forces you to devote extra resources to keeping yourself healthy, etc.

How about becoming a mostly vegetarian? Avoid eating meat... unless it would be really inconvenient to do so.

Depending on your specific situation, perhaps you could reduce your meat consumption by 50%, which from the utilitarian viewpoint is 50% as good as becoming a full vegetarian. And the costs are trivial.

This is what I am doing recently, and it works well for me. For example, when I look at a lunch menu, by default I read the vegetarian option first, and I choose otherwise only if it is something I dislike (or if it contains sugar), which is maybe 20% of cases. The only difficult thing was doing it for the first week; after that it works automatically — it is actually easier than reading the full list and deciding between similar options.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-24T17:54:12.218Z · LW(p) · GW(p)

How about becoming a mostly vegetarian? Avoid eating meat... unless it would be really inconvenient to do so.

I think that would pretty much do away with the 'it's a minor inconvenience' objections. However, I suspect it would also diminish most of the social and psychological benefits of vegetarianism -- as willpower training, proof to yourself of your own virtue, proof to others of your virtue, etc. Still, this might be a good option for EAists to consider.

It's worth keeping in mind that different people following this rule will end up committing to vegetarianism to very different extents, because both the level of inconvenience incurred, and the level of inconvenience that seems justifiable, will vary from person to person.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-24T19:38:33.408Z · LW(p) · GW(p)

suspect it would also diminish most of the social and psychological benefits of vegetarianism -- as willpower training, proof to yourself of your own virtue, proof to others of your virtue, etc. Still, this might be a good option for EAists to consider.

I can train my willpower on many other situations, so that's not an issue. So it's about the virtue, or more precisely, signalling. Well, depending on one's mindset, one can find a "feeling of virtue" even in this. Whether the partial vegetarianism is easier to spread than full vegetarianism, I don't know -- and that is probably the most important part. But some people spreading full vegetarianism, and other people spreading partial vegetarianism where the former fail, feels like a good solution.

comment by Lukas_Gloor · 2013-07-23T23:25:13.879Z · LW(p) · GW(p)

Good points!

1) This is indeed an important consideration, although I think for most people the inconveniences would only present themselves during the transition phase. Once you get used to it sufficiently and if you live somewhere with lots of tasty veg*an food options, it might not be a problem anymore. Also, in the social context, being a vegetarian can be a good conversation starter which one can use to steer the conversation towards whatever ethical issues one considers most important. ("I'm not just concerned about personal purity, I also want to actively prevent suffering. For instance...")

I suspect paying others to go veg*an for you might indeed be more effective, but especially for people who serve as social role models, personal choices may be very important as well, up to the point of being dominant.

2) Yeah, but how is the AI going to care about non-human suffering if few humans (and, it seems to me, few people working on FAI) take it seriously?

3)-5) These are reasons for some probabilistic discounting, and then the question becomes whether it's significant enough. They don't strike me as too strong but this is worthy of discussion. Personally I never found 4. convincing at all but I'm curious as to whether people have arguments for this type of position that I'm not yet aware of.

Replies from: RobbBB, army1987
comment by Rob Bensinger (RobbBB) · 2013-07-23T23:40:22.220Z · LW(p) · GW(p)

1) I agree that being a good role model is an important consideration, especially if you're a good spokesperson or are just generally very social. To many liberals and EA folks, vegetarianism signals ethical consistency, felt compassion, and a commitment to following through on your ideals.

I'm less convinced that vegetarianism only has opportunity costs during transition. I'm sure it becomes easier, but it might still be a significant drain, depending on your prior eating and social habits. Of course, this doesn't matter as much if you aren't involved in EA, or are involved in relatively low-priority EA.

(I'd add that vegetarianism might also make you a better Effective Altruist in general, via virtue-ethics-style psychological mechanisms. I think this is one of the very best arguments for vegetarianism, though it may depend on the psychology and ethical code of each individual EAist.)

2) Coherent extrapolated volition. We aren't virtuous enough to make healthy, scalable, sustainable economic decisions, but we wish we were.

3)-5) I agree that 4) doesn't persuade me much, but it's very interesting, and I'd like to hear it defended in more detail with a specific psychological model of what makes humans moral patients. 3) I think is a much more serious and convincing argument; indeed, it convinces me that at least some animals with complex nervous systems and damage-avoiding behavior do not suffer. Though my confidence is low enough that I'd probably still consider it immoral to, say, needlessly torture large numbers of insects.

Replies from: Lukas_Gloor, MTGandP
comment by Lukas_Gloor · 2013-07-23T23:52:33.263Z · LW(p) · GW(p)

2) Yes, I really hope CEV is going to come out in a way that also attributes moral relevance to nonhumans. But the fact that there might not be a unique way to coherently extrapolate values and that there might be arbitrariness in choosing the starting points makes me worried. Also, it is not guaranteed that a singleton will happen through an AI implementing CEV, so it would be nice to have a humanity with decent values as a back-up.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-24T00:08:58.006Z · LW(p) · GW(p)

If you're worried that CEV won't work, do you have an alternative hope or expectation for FAI that would depend much more on humans' actual dietary practices?

Honestly, I find it most likely that an FAI would kill all non-human animals, because sustaining multiple species with very different needs and preferences is inefficient and/or because of meta-ethical or psychological uncertainty about the value of non-humans.

If we're more worried that non-humans might be capable of unique forms of suffering than we are worried that non-humans might be capable of unique forms of joy and beauty, then preventing their existence makes the most sense (once humans have no need for them). That includes destroying purely wild species, and includes ones that only harm each other and are not impacted by humanity.

Replies from: Lukas_Gloor, MTGandP
comment by Lukas_Gloor · 2013-07-24T00:23:59.750Z · LW(p) · GW(p)

It doesn't need to depend on people's dietary habits directly. A lot of people think animals count at least somewhat, but they might be too prone to rationalizing objections and too lazy to draw any significant practical conclusions from that. However, if those people were presented with a political initiative that replaced animal products with plant-based options that are just as good/healthy/whatever, then a lot of them would hopefully vote for it. In that sense, raising awareness of the issue, even if behavioral change is slow, may already be an important improvement to the meme-pool. Whatever utility functions society as a whole or those in power eventually decide to implement, it seems that this depends at least to some extent on the values of currently existing people (and especially people with high potential for becoming influential at some time in the future). This is why I consider anti-speciesist value spreading a contender for top priority.

I actually don't object to animals being killed, I'm just concerned about their suffering. But I suspect lots of people would object, so if it isn't too expensive, why not just take care of those animals that already exist and let them live some happy years before they die eventually? I'm especially talking about wild animals because for some animals, life in the wild might be as bad as in factory farms. And I think species-membership is ethically irrelevant, so there is no need for conservation in my view.

I don't want to fill the universe with animals, what would be the use of that? I'm mainly worried that people might decide to send out von Neumann probes to populate the whole universe with wildlife, or do ancestor simulations or other things that don't take into account animal suffering. Also, there might be a link between speciesism and "substratism", and of course I also care about all forms of conscious uploads and I wouldn't want them to suffer either.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-24T00:44:18.261Z · LW(p) · GW(p)

The thought that highly temporally variable memes might define the values for our AGI worries me a whole lot. But I can't write the possibility off, so I agree this provides at least some reason to try to change the memetic landscape.

I actually don't object to animals being killed, I'm just concerned about their suffering.

Ditto. It might be that killing in general is OK if it doesn't cause anyone suffering. Or, if we're preference utilitarians, it might be that killing non-humans is OK because their preferences are generally very short-term.

One interesting (and not crazy) alternative to lab-grown meat: If we figure out (with high confidence) the neural basis of suffering, we may be able to just switch it off in factory-farmed animals.

I'm especially talking about wild animals because for some animals, life in the wild might be as bad as in factory farms.

I'm about 95% confident that's almost never true. If factory-farmed animals didn't seem so perpetually scared (since fear of predation is presumably the main source of novel suffering in wild animals), or if their environment more closely resembled their ancestral environment, I'd find this line of argument more persuasive.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-24T01:01:32.937Z · LW(p) · GW(p)

Yeah, I see no objections to eating meat from zombie-animals (or animals that are happy but cannot suffer). Though I can imagine that people would freak out about it.

Most animals in the wild use r-selection as a reproductive strategy, so they have huge amounts of offspring of which only one child per parent survives and reproduces successfully (if the population remains constant). This implies that the vast majority of wild animals die shortly after birth in ways that are presumably very painful. There is not enough time for having fun for these animals, even if life in the wild is otherwise nice (and that's somewhat doubtful as well). We have to discount the suffering somewhat due to the possibility that newborn animals might not be conscious at the start, but it still seems highly likely that suffering dominates for wild animals, given these considerations about the prevalence of r-selection.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-24T01:28:44.574Z · LW(p) · GW(p)

Most animals in the wild use r-selection as a reproductive strategy, so they have huge amounts of offspring of which only one child per parent survives and reproduces successfully

Yes, but we agree death itself isn't a bad thing, and I don't think most death is very painful and prolonged. Prolonged death burns calories, so predators tend to be reasonably efficient. (Parasites less so, though not all parasitism is painful.) Force-feeding your prey isn't unheard of, but it's unusual.

There is not enough time for having fun for these animals

If we're worried about lost opportunities for short-lived animals, are we also worried about lost opportunities for counterfactual animals that easily could have existed? Also, I agree it's bad for an organism to suffer for 100% of a very short life, but it's not necessarily any better for it to suffer for 80% of a life that's twice as long.

it still seems highly likely that suffering dominates for wild animals

Oh, I have no doubt that suffering dominates for just about every sentient species on Earth. That's part of why I suspect an FAI would drive nearly all species to extinction. What I doubt is that this suffering exceeds the suffering in typical factory farms. These organisms aren't evolved to navigate environments like factory farms, so it's less likely that they'll have innate coping mechanisms for the horrors of pen life than for the horrors of jungle life. If factory farm animals are sentient, then their existence is probably hell, i.e., a superstimulus exceeding the pain and fear and frustration and sadness (if these human terms can map on to nonhuman psychology) they could ever realistically encounter in the wild.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-24T01:47:04.344Z · LW(p) · GW(p)

If we're worried about lost opportunities for short-lived animals, are we also worried about lost opportunities for counterfactual animals that easily could have existed?

Yes, it would be hard to give a good reason for treating these differently, unless you're a preference utilitarian and think there is no point in creating new preference-bundles just in order to satisfy them later. I was arguing from within a classical utilitarian perspective, even though I don't share this view (I'm leaning towards negative utilitarianism), in order to make the point that suffering dominates in nature. I see, though, that you might be right about factory farms being much worse on average. Some of the footage certainly is, even though the worst instance of suffering I've ever watched was an elephant being eaten by lions.

comment by MTGandP · 2013-07-24T01:18:23.558Z · LW(p) · GW(p)

Honestly, I find it most likely that an FAI would kill all non-human animals, because sustaining multiple species with very different needs and preferences is inefficient and/or because of meta-ethical or psychological uncertainty about the value of non-humans.

If it wanted to maximize positive states of consciousness, it would probably kill all sentient beings and attempt to convert all the matter in the universe into beings that efficiently experience large amounts of happiness. I find it plausible that this would be a good thing. See here for more discussion.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-24T02:03:32.815Z · LW(p) · GW(p)

I don't find that unlikely. (I think I'm a little less confident than Eliezer that something CEV-like would produce values actual humans would recognize, from their own limited perspectives, as preferable. Maybe my extrapolations are extrapolateder, and he places harder limits on how much we're allowed to modify humans to make them more knowledgeable and rational for the purpose of determining what's good.)

But I'm less confident that a correctly-constructed (i.e., Friendly) CEV calculation would replace humans with something radically nonhuman, than that CEV would kill all or most non-humans. Humans care a lot more about themselves than about other species, and are less confident about non-human subjectivity.

Of course, I suppose the reverse is a possibility. Maybe some existing non-human terrestrial species has far greater capacities for well-being, or is harder to inflict suffering on, than humans are, and an FAI would kill humans and instead work on optimizing that other species. I find that scenario much less plausible than yours, though.

Replies from: MTGandP
comment by MTGandP · 2013-07-24T02:33:14.558Z · LW(p) · GW(p)

But I'm less confident that a correctly-constructed (i.e., Friendly) CEV calculation would replace humans with something radically nonhuman, than that CEV would kill all or most non-humans.

If a CEV did this then I believe it would be acting unethically--at the very least, I find it highly implausible that, among the hundreds of thousands(?) of sentient species, homo sapiens is capable of producing the most happiness per unit of resources. This is a big reason why I feel uneasy about the idea of creating a CEV from human values. If we do create a CEV, it should take all existing interests into account, not just the interests of humans.

It also seems highly implausible that any extant species is optimized for producing pleasure. After all, evolution produces organisms that are good at carrying on genes, not feeling happy. A superintelligent AI could probably create much more effective happiness-experiencers than any currently-living beings. This seems to be similar to what you're getting at in your last paragraph.

comment by MTGandP · 2013-07-24T00:19:45.087Z · LW(p) · GW(p)

I don't understand how CEV would be capable of deducing that non-human animals have moral value purely from current human values.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-24T00:33:28.340Z · LW(p) · GW(p)

CEV asks what humans would value if their knowledge and rationality were vastly greater. I don't find it implausible that if we knew more about the neural underpinnings of our own suffering and pleasure, knew more about the neurology of non-humans, and were more rational and internally consistent in relating this knowledge to our preferences, then our preferences would assign at least some moral weight to the well-being of non-sapients, independent of whether that well-being impacts any sapient.

As a simpler base case: I think the CEV of 19th-century slave-owners in the American South would have valued black and white people effectively equally. Do we at least agree about that much?

Replies from: MTGandP
comment by MTGandP · 2013-07-24T01:09:00.627Z · LW(p) · GW(p)

I don't know much about CEV (I started to read Eliezer's paper but I didn't get very far), but I'm not sure it's possible to extrapolate values like that. What if 19th-century slave owners hold white-people-are-better as a terminal value?

On the other hand, it does seem plausible that a slave owner would oppose slavery if he weren't himself a slave owner, so his CEV may indeed support racial equality. I simply don't know enough about CEV or how to implement it to make a judgment one way or the other.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-24T01:53:05.369Z · LW(p) · GW(p)

Terminal values can change with education. Saying that the coherent extrapolated volition of 19th-century slave owners would have been racist is equivalent to saying that either racism is justified by the facts, or the fundamental norms of rationality latent in 19th-century slave-owner cognition are radically unlike our contemporary fundamental norms of rationality. For instance, that slave-owners don't, on any deep level, value consistency between their moral intuitions, or that they assign zero weight to moral intuitions involving empathy.

If new experiences and rationality training couldn't ever persuade a slave-owner to become an egalitarian, then I'm extremely confused by the fact that society has successfully eradicated the memes that restructured those slave-owners' brains so quickly. Maybe I'm just more sanguine than most people about the possibility that new information can actually change people's minds (including their values). Science doesn't progress purely via the eradication of previous generations.

Replies from: Nornagest, None
comment by Nornagest · 2013-07-24T02:37:02.136Z · LW(p) · GW(p)

I'm not sure I'd agree with that framing. If an ethical feature changes with education, that's good evidence that it's not a terminal value, to whatever extent that it makes sense to talk about terminal values in humans. Which may very well be "not very much"; our value structure is a lot messier than that of the theoretical entities for which the terminal/instrumental dichotomy works well, and if we had a good way of cleaning it up we wouldn't need proposals like CEV.

People can change between egalitarian and hierarchical ethics without neurological insults or biochemical tinkering, so human "terminal" values clearly don't necessitate one or the other. More importantly, though, CEV is not magic; it can resolve contradictions between the ethics you feed into it, and it might be able to find refinements of those ethics that our biases blind us to or that we're just not smart enough to figure out, but it's only as good as its inputs. In particular, it's not guaranteed to find universal human values when evaluated over a subset of humanity.

If you took a collection of 19th-century slave owners and extrapolated their ethical preferences according to CEV-like rules, I wouldn't expect that to spit out an ethic that allowed slavery -- the historical arguments I've read for the practice didn't seem very good -- but I wouldn't be hugely surprised if it did, either. Either way it wouldn't imply that the resulting ethic applies to all humans or that it derives from immutable laws of rationality; it'd just tell us whether it's possible to reconcile slavery with middle-and-upper-class 19th-century ethics without downstream contradictions.

comment by [deleted] · 2013-07-25T09:29:40.266Z · LW(p) · GW(p)

"Saying that the coherent extrapolated volition of 19th-century slave owners would have been racist is equivalent to saying that either racism is justified by the facts, or the fundamental norms of rationality latent in 19th-century slave-owner cognition are radically unlike our contemporary fundamental norms of rationality."

Could you elaborate on this please? If you're saying what I think you're saying then I would strongly like to argue against your point.

You might also like Brian Tomasik's critique of CEV.

comment by A1987dM (army1987) · 2013-07-24T11:53:16.495Z · LW(p) · GW(p)

Personally I never found 4. convincing at all

Do you think the kind of pain I feel while (say) eating spicy foods is bad whether or not I dislike it?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-24T11:59:00.016Z · LW(p) · GW(p)

I think the word "pain" is misleading. What I care about precisely is suffering, defined as a conscious state a being wants to get out of. If you don't dislike it and don't have an urge to make it stop, it's not suffering. This is also why I think the "pain" of people with pain asymbolia is not morally bad.

comment by DxE · 2013-07-25T16:44:27.271Z · LW(p) · GW(p)

Here is a thought experiment. Suppose that explorers arrive in a previously unknown area of the Amazon, where a strange tribe exists. The tribe suffers from a rare genetic anomaly, whereby all of its individuals are physically and cognitively stuck at the age of 3.

They laugh and they cry. They love and they hate. But they have no capacity for complex planning, or normative sophistication. So they live their lives as young children do -- on a moment to moment basis -- and they have no hope for ever developing beyond that.

If the explorers took these gentle creatures and murdered them -- for science, for food, or for fun -- would we say, "Oh but those children are not so intelligent, so the violence is ok." Or would we be even more horrified by the violence, precisely because the children had no capacity to fend for themselves?

I would submit that the argument against animal exploitation is even stronger than the argument against violence in this thought experiment, because we could be quite confident that whatever awareness these children had, it was "less than" what a normal human has. We are comparing the same species after all, and presumably whatever the Amazonian children are missing, due to genetic anomaly, is not made up for in higher or richer awareness in other dimensions.

We cannot say that about other species. A dog may not be able to reason. But perhaps she delights in smells in a way that a less sensitive nose could never understand. Perhaps she enjoys food with a sophistication that a lesser palate cannot begin to grasp. Perhaps she feels loneliness with an intensity that a human being could never appreciate.

Richard Dawkins makes the very important point that cleverness, which we certainly have, gives us no reason to think that animal consciousness is any less rich or intense than human consciousness (http://directactioneverywhere.com/theliberationist/2013/7/18/g2givxwjippfa92qt9pgorvvheired). Indeed, since cleverness is, in a sense, an alternative mechanism for evolutionary survival to feelings (a perfect computational machine would need no feelings, as feelings are just a heuristic), there is a plausible case that clever animals should be given LESS consideration.

But all of this is really irrelevant. Because the basis of political equality, as Peter Singer has argued, has nothing to do with the facts of our experience. Someone who is born without the ability to feel pain does not somehow lose her rights because of that difference. Because equality is not a factual description, it is a normative demand -- namely, that every being who crosses the threshold of sentience, every being that could be said to HAVE a will -- ought be given the same respect and freedom that we ask for ourselves, as "willing" creatures.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2013-07-25T19:23:25.882Z · LW(p) · GW(p)

This is a variant of the argument from marginal cases: if there is some quality that makes you count morally, and we can find some example humans (ex: 3 year olds) that have less of that quality than some animals, what do we do?

I'm very sure that an 8 year old human counts morally and that a chicken does not, and while I'm not very clear on where along that spectrum the quality I care about starts getting up to levels where it matters, I think it's probably something no or almost no animals have and some humans don't have. Making this distinction among humans, however, would be incredibly socially destructive, especially given how unsure I am about where the line should go, and so I think we end up with a much better society if we treat all humans as morally equal. This means I end up saying things like "value all humans equally; don't value animals" when that's not my real distinction, just the closest Schelling point.

Replies from: Dchudz, Pablo_Stafforini, DxE
comment by Dchudz · 2013-07-26T22:14:16.545Z · LW(p) · GW(p)

It seems like your answer to the argument from marginal cases is that maybe the (human) marginal cases don't matter and "Making this distinction among humans, however, would be incredibly socially destructive."

That may work for you, but I think it doesn't work for the vast majority of people who don't count animals as morally relevant. You are "very sure that an 8 year old human counts morally" (intrinsically, by which I mean "not just because doing otherwise would be socially destructive"). I'm not sure if you think 3 year old humans count (intrinsically), but I'm sure that almost everyone does. I know that they count these humans intrinsically (and not just to avoid social destruction), because in fact most people do make these distinctions among humans: for example, the median opinion in the US seems to be that humans start counting sometime in the second trimester.

Given this, it's entirely reasonable to try to figure out what quality makes things count morally, and if you (a) care intrinsically about 3 year old humans (or 1 year old or minus 2 months old or whatever), and (b) find that chickens (or whatever) have more of this quality than 3 year old humans, you should care about chickens.

comment by Pablo (Pablo_Stafforini) · 2013-07-25T23:17:49.592Z · LW(p) · GW(p)

I'm very sure that an 8 year old human counts morally and that a chicken does not,

Consider an experience which, if had by an eight-year-old human, would be morally very bad, such as an experience of intense suffering. Now suppose that a chicken could have an experience that was phenomenally indistinguishable from that of the child. Would you be "very sure" that it would be very bad for this experience to be had by the human child, but not at all bad to be had by the chicken?

Replies from: Jiro, SaidAchmiz
comment by Jiro · 2013-07-26T14:28:28.925Z · LW(p) · GW(p)

I smell a variation of Pascal's Mugging here. In Pascal's Mugging, you are told that you should consider a possibility with a small probability because the large consequence makes up for the fact that the probability is small. Here you are suggesting that someone may not be "very sure" (i.e. that he may have a small degree of uncertainty), but that even a small degree of uncertainty justifies becoming a vegetarian because something about the consequence of being wrong (presumably, multiplying by the high badness, though you don't explicitly say so) makes up for the fact that the degree of uncertainty is small.

comment by Said Achmiz (SaidAchmiz) · 2013-07-26T00:15:16.298Z · LW(p) · GW(p)

Now suppose that a chicken could have an experience that was phenomenally indistinguishable from that of the child.

"Phenomenally indistinguishable"... to whom?

In other words, what is the mind that's having both of these experiences and then attempting to distinguish between them?

Thomas Nagel famously pointed out that we can't know "what it's like" to be — in his example — a bat; even if we found our mind suddenly transplanted into the body of a bat, all we'd know is what it's like for us to be a bat, not what it's like for the bat to be a bat. If our mind were transformed into the mind of a bat (and placed in a bat's body), we could not analyze our experiences in order to compare them with anything, nor, in that form, would we have comprehension of what it had been like to be a human.

Phenomenal properties are always, inherently, relative to a point of view — the point of view of the mind experiencing them. So it is entirely unclear to me what it means for two experiences, instantiated in organisms of very different species, to be "phenomenally indistinguishable".

Replies from: Pablo_Stafforini, DxE
comment by Pablo (Pablo_Stafforini) · 2013-07-26T01:33:47.977Z · LW(p) · GW(p)

In other words, what is the mind that's having both of these experiences and then attempting to distinguish between them?

When a subject is having a phenomenal experience, certain phenomenal properties are instantiated. In saying that two experiences are phenomenally indistinguishable, I simply meant that they instantiate the same phenomenal properties. As should be obvious, there need not be any mind having both experiences in order for them to be indistinguishable from one another. For example, two people looking at the same patch of red may have phenomenally indistinguishable visual experiences--experiences that instantiate the same property of phenomenal redness. I'm simply asking Jeff to imagine a chicken having a painful experience that instantiates the property of unpleasantness to the same degree that a human child does, when we believe that the child's painful experience is a morally bad thing.

Thomas Nagel famously pointed out that we can't know "what it's like" to be — in his example — a bat; even if we found our mind suddenly transplanted into the body of a bat, all we'd know is what's it's like for us to be a bat, not what it's like for the bat to be a bat.

Sorry, but this is not an accurate characterization of Nagel's argument.

Replies from: Jiro, SaidAchmiz, army1987
comment by Jiro · 2013-07-26T14:19:42.625Z · LW(p) · GW(p)

How does this not apply to me imagining that I'm a toaster making toast? I can imagine a toaster having an experience all I want. That doesn't imply that an actual toaster can have that experience or anything which can be meaningfully compared to a human experience at all.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-07-26T17:24:06.134Z · LW(p) · GW(p)

Are you denying that chickens can have any of the experiences which, if had by a human, we would regard as morally bad? That seems implausible to me. Most people think that it would be very bad, for instance, if a child suffered intensely, and most people agree that chickens can suffer intensely.

comment by Said Achmiz (SaidAchmiz) · 2013-07-26T02:56:21.193Z · LW(p) · GW(p)

That's a view of phenomenal experience (namely, that phenomenal properties are intersubjectively comparable, and that "phenomenal properties" can be described from a third-person perspective) that is far, far from uncontroversial among professional philosophers, and I, personally, take it to be almost entirely unsupported (and probably unsupportable).

For example, two people looking at the same patch of red may have phenomenally indistinguishable visual experiences--experiences that instantiate the same property of phenomenal redness.

Intersubjective incomparability of color experiences is one of the classic examples of (alleged) intersubjective incomparability in the literature (cf. the huge piles of writing on the inverted spectrum problem, to which even I have contributed).

... imagine a chicken having a painful experience that instantiates the property of unpleasantness to the same degree that a human child does...

I really don't think this is a coherent thing to imagine. Once again — unpleasantness to whom? "Unpleasant" is not a one-place predicate.

Sorry, but this is not an accurate characterization of Nagel's argument.

If your objection is that Nagel only says that the structure of our minds and sensory organs does not allow us to imagine the what-it's-like-ness of being a bat, and does not mention transplantation and the like, then I grant it; but my extension of it is, imo, consistent with his thesis. The point, in any case, is that it doesn't make sense to speak of one mind having some experience which is generated by another mind (where "mind" is used broadly, in Nagel-esque examples, to include sensory modalities, i.e. sense organs and the brain hardware necessary to process their input; but in our example need not necessarily include input from the external world).

comment by A1987dM (army1987) · 2013-07-27T09:24:04.933Z · LW(p) · GW(p)

For example, two people looking at the same patch of red may have phenomenally indistinguishable visual experiences--experiences that instantiate the same property of phenomenal redness.

I don't think there's a God-given mapping from the set of Alice's possible subjective experiences to the set of Bob's possible subjective experiences. (This is why I think the inverted spectrum thing is meaningless.) We can define a mapping that maps each of Alice's qualia to the one Bob experiences in response to the same kind of sensory input, but 1) there's no guarantee it's one-to-one (colours as seen by young, non-colourblind people would be a best case scenario, but think about flavours), and 2) it would make your claim tautological and devoid of empirical content.

comment by DxE · 2013-07-26T02:25:30.943Z · LW(p) · GW(p)

Nagel had no problems with taking objective attributes of experience -- e.g. indicia of suffering -- and comparing them for the purposes of political and moral debate. The equivalence or even comparability of subjective experience (whether between different humans or different species) is not necessary for an equivalence of moral depravity.

comment by DxE · 2013-07-25T21:45:48.159Z · LW(p) · GW(p)

jkaufman,

  • Justifying violence against an oppressed group, on the basis of some unobserved and ambiguous quality, is the definition of bigotry.

  • Have you interacted with a disabled human before? What is it about them that you think merits less consideration? My best friend growing up was differently abled, at the cognitive capacity of a young child. But he is also probably the most praiseworthy individual I have ever met. Generous to a fault, forgiving even of those who had mistreated him (and there were many of those), and completely lacking in artifice. A world filled with animals such as he would be a good world indeed. So why should he receive any fewer rights than you or I? What is this amorphous quality that he is missing?

  • Factually, it is not true that human inequality is "socially destructive." Human civilization has thrived for 10,000 years despite horrific caste systems. And even just a generation prior, disabled humans were systematically mistreated as our moral inferiors. Even lions of the left like Arthur Miller had no qualms about locking up their disabled children and throwing away the key.

Inequality is a terrible thing, if you are on the wrong side of the hierarchy. But there is nothing intrinsically destabilizing about bigotry. Far from it, prejudice against "outsiders" is our natural state.

Replies from: Viliam_Bur, SaidAchmiz
comment by Viliam_Bur · 2013-07-26T11:04:24.225Z · LW(p) · GW(p)

I think you are technically wrong. A world filled with people at the cognitive capacity of a young child would include a lot of suffering. (Unless there would be also someone else to solve their problems.) Hunger, diseases, predators... and no ability to defend against them.

comment by Said Achmiz (SaidAchmiz) · 2013-07-26T00:20:43.869Z · LW(p) · GW(p)

DxE, I have to ask, and I don't mean to be hostile: are you using emotionally-charged, question-begging language deliberately (to act as intuition pumps, perhaps)? Would you be able to rephrase your comments in more neutral, objective language?

Replies from: DxE
comment by DxE · 2013-07-26T00:47:28.073Z · LW(p) · GW(p)

The language I use is deliberate. It accurately conveys my point of view, including normative judgments. I do not relish the idea of antagonizing anyone. However, the content of certain viewpoints is inherently antagonizing. If I were to factually state that someone were a rapist, for example, I could not phrase that in a neutral, objective way.

For what it's worth, I actually love jkaufman. He's one of the smartest and most solid people I know. But his views on this subject are bigoted.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-26T03:10:13.391Z · LW(p) · GW(p)

I see. However, I disagree that your comments accurately convey your point of view, or any point of view; there's a lot of unpacking I'd have to ask you to do on e.g. the great-grandparent before I could understand exactly what you were saying; and I'm afraid I'm not sufficiently interested to try.

If I were to factually state that someone were a rapist, for example, I could not phrase that in a neutral, objective way.

Couldn't you? I could. Observe:

Bob has, on several occasions, initiated and carried on sexual intercourse with an unwilling partner, knowing that the person in question was not willing, and understanding his actions to be opposed to the wishes of said person, as well as to the social norms of his society.

There you go. That is, if anything, too neutral; I could make it less verbose and more colloquial without much loss of neutrality; but it showcases my point, I think. If you believe you can't phrase something in language that doesn't sound like you're trying to incite a crowd, you are probably not trying hard enough.

If you like (and only if you like), I could go through your response to jkaufman and point out where and how your choice of language makes it difficult to respond to your comments in any kind of logical or civilized manner. For now, I will say only:

Expressing your normative judgments is not very useful, nor very interesting to most people. What we're looking for is for you to support those judgments with something. The mere fact that you think something is bad, really very bad, just no good... is not interesting. It's not anything to talk about.

Replies from: DxE
comment by DxE · 2013-07-26T08:42:39.616Z · LW(p) · GW(p)

So what you are demonstrating is that it is possible (and apparently, in your eyes, desirable) to whitewash rape and make it seem morally neutral.

No thanks.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-26T12:14:12.894Z · LW(p) · GW(p)

There's a difference between making it seem morally neutral and not implying anything about its morality or lack thereof. What SaidAchmiz was trying to do is the latter.

comment by Peter Wildeford (peter_hurford) · 2013-07-24T06:28:26.645Z · LW(p) · GW(p)

You're right; it might have been good to answer these in the core essay.

Present Triviality. Becoming a vegetarian is at least a minor inconvenience...

I disagree that being a vegetarian is an inconvenience. I haven't found my social activities restricted in any non-trivial way and being healthy has been just as easy/hard as when eating meat. It does not drain my attention from other EA activities.

~

Future Triviality. [...] we should invest everything we have into making it as likely as possible that humans and non-humans will thrive in the distant future

I agree with this in principle, but again don't think vegetarianism detracts from that. Certainly removing factory farming is a small win compared to successful star colonization, but I don't think there's much we can do now to ensure successful colonization, while there is stuff we can do now to ensure factory farming elimination.

~

Experiential Suffering Needn't Correlate With Damage-Avoiding or Damage-Signaling Behavior.

It need not, which is what makes consciousness thorny. I don't think there is a tidy resolution to this problem. We'll have to take our best guess, and that involves thinking nonhuman animals suffer. We'd probably even want to err on the safe side, which would increase our consideration toward nonhuman animals. It would also be consistent with an Occam's razor approach.

~

Something About Sapience Is What Makes Suffering Bad.

This doesn't feature in my ethical framework, at least. I don't know how this intuitively works for other people. I also don't think there's much I can say about it.

~

Aren't You Just Anthropomorphizing Non-Humans? [...] But this isn't evidence that the thing empathized with is actually conscious.

It's not. But there's other considerations and lines of evidence, so my worry that we're just anthropomorphizing is present, but rather low.

Replies from: Ishaan, smk
comment by Ishaan · 2014-01-06T12:52:40.244Z · LW(p) · GW(p)

This doesn't feature in my ethical framework, at least.

Wait...what? Why not?

I don't know how this intuitively works for other people. I also don't think there's much I can say about it.

My morality is applicable to agents. The extent to which an object can be modeled as an agent plays a big role (but not the only role) in determining its moral weight. As such, there is a rough hierarchy:

nonliving things and single celled organisms < plants, oysters, etc < arthropods, worms, etc < fish, lizards < dumber animals (chickens, cows) < smarter animals (pigs, dogs, crows) < smartest animals (apes, elephants, cetaceans...)

Practically speaking from an animal rights perspective, this means that I would consider it a moral victory if meat eaters shifted a greater portion of their meat diet downwards towards "lower" animals like fish and arthropods. The difference in weight between much more and much less intelligent animals is rather extreme - it would take killing several crickets, shrimp, herring, or salmon to replace a single pig, but I would still count that as a positive because I think that a pig's moral weight is magnitudes greater than a salmon's. Convincing a person like me not to harm an object involves behavioral measures (with intelligence being one of several factors) which demonstrate that the object is a certain kind of agent within the class of agents with positive moral weight.

I'm guessing that we're thinking of different things when we read "sapience is what makes suffering bad (or possible)". Do you think that my version of the thought doesn't feature in your ethical framework? If not, what does determine which objects are morally weighty?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2014-01-06T15:24:42.588Z · LW(p) · GW(p)

I'm guessing that we're thinking of different things when we read "sapience is what makes suffering bad (or possible)". Do you think that my version of the thought doesn't feature in your ethical framework? If not, what does determine which objects are morally weighty?

For me, suffering is what makes suffering bad. Or, rather, I care about any entity that is capable of having feelings and experiences. And, for each of these entities, I much prefer them not to suffer. I care about not having them suffer for their sakes, of course, not for the sake of reducing suffering in the abstract. I don't view entities as utility receptacles.

But I don't think there's anything special about sapience, per se. Rather, I only think sapience or agentiness is relevant insofar as more sapient and more agenty entities are more capable of suffering / happiness. Which seems plausible, but isn't certain.

~

Practically speaking from an animal rights perspective, this means that I would consider it a moral victory if meat eaters shifted a greater portion of their meat diet downwards towards "lower" animals like fish and arthropods

This seems plausible to me from a perspective of "these animals likely are less capable of suffering", but I think you're missing two things in your analysis: (1) the degree of suffering required to create the food, which varies between species, and (2) the amount of food provided by each animal.

When you add these two things together, you get a suffering-per-kg approach that has some counterintuitive conclusions, like the bulk of suffering being in chicken or fish, though I think this table is desperately in need of some updating with more and better research (something that's been on my to-do list for a while).
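A minimal sketch of the suffering-per-kg arithmetic described above; every number below is an illustrative placeholder, not a figure from the linked table:

```python
# Illustrative suffering-per-kg comparison.
# All numbers are made-up placeholders, not data from any real table.

animals = {
    # name: (suffering units per animal raised, edible kg per animal)
    "chicken": (10.0, 1.5),
    "pig": (40.0, 60.0),
    "cow": (30.0, 220.0),
}

for name, (suffering, kg) in animals.items():
    print(f"{name}: {suffering / kg:.2f} suffering units per kg of food")
```

Under placeholder numbers like these, small-bodied animals can dominate the total even if each individual counts for less, which is the counterintuitive conclusion referred to above.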

Replies from: Ishaan, Solitaire
comment by Ishaan · 2014-01-06T19:18:18.110Z · LW(p) · GW(p)

Let's temporarily taboo words relating to inaccessible subjective experience, because the definitions of words like "suffering" haven't been made rigorous enough to talk about this - we could define it in concrete neurological terms or specific computations, or we could define it in abstract terms of agents and preferences, and we'd end up talking past each other due to different definitions.

I want to make sure to define morality such that it's not dependent on the particulars of the algorithm that an agent runs, but on the agent's actions. If we were to meet weird alien beings in the future who operated in completely alien ways, but who act in ways that can be defined as preferences and can engage in trade, reciprocal altruism, etc...then our morality should extend to them.

Similarly, I think our morality shouldn't extend to paperclippers - even if they make a "sad face" and run algorithms similar to human distress when a paperclip is destroyed, it doesn't mean the same thing morally.

So I think morality must necessarily be based on input-output functions, not on what happens in between. (at this point someone usually brings up paralyzed people - briefly, you can quantify the extent of additions/modifications necessary to create a functioning input-output agent from something and use that to extrapolate agency in such cases.)

the amount of food provided by each animal.

Wait, didn't I take that into account with...

The difference in weight between much more and much less intelligent animals is rather extreme - it would kill several crickets, shrimp, herring, or salmon to replace a single pig, but I would still count that as a positive because I think that a pig's moral weight is magnitudes greater than a salmons.

...or are you referring to a different concept?

I really do think the relationship between moral weight and intelligence is exponential - as in, I consider a human life to be weighted like ~10 chimps, ~100 dogs...(very rough numbers, just to illustrate the exponential nature)...and I'm not sure there are enough insects in the world to morally outweigh one human life (instrumental concerns about the environment and the intrinsic value of diverse ecosystems aside, of course). I'd wager the human hedons and health benefits from eating something very simple, like a shrimp or a large but unintelligent fish, might actually outweigh the cost to the fish and be a net positive (as it is with plants). My certainty in that matter is low, of course.
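A toy illustration of the exponential weighting described above; the factor of ten per step echoes the rough "~10 chimps, ~100 dogs" figures in the comment, while the rank assigned to each species is an assumption for illustration only:

```python
# Toy model: moral weight falls off exponentially with each step down a rough
# intelligence hierarchy. The factor of 10 per step comes from the
# "1 human ~ 10 chimps ~ 100 dogs" figures above; the rank given to each
# species is an illustrative assumption.

FACTOR = 10  # each step down the hierarchy divides moral weight by this much

ranks = {"human": 0, "chimp": 1, "dog": 2, "pig": 2, "salmon": 4, "cricket": 6}

def moral_weight(species):
    return FACTOR ** -ranks[species]

# Under this toy model, how many salmon "outweigh" one pig?
print(moral_weight("pig") / moral_weight("salmon"))  # 100.0
```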

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2014-01-09T12:45:58.283Z · LW(p) · GW(p)

Let's temporarily taboo words relating to inaccessible subjective experience, because the definitions of words like "suffering" haven't been made rigorous enough to talk about this - we could define it in concrete neurological terms or specific computations, or we could define it in abstract terms of agents and preferences, and we'd end up talking past each other due to different definitions.

I agree that people generally and I specifically need to understand "suffering" better. But I don't think substitutes like "runs an algorithm analogous to human distress" or "has thwarted preferences" offer anything better understood or well-defined.

I suppose what I think of as suffering probably involves most of the following: nociception, a central nervous system (with connected nociceptors), endogenous opioids, a behavioral pain response, and a behavioral pain response affected by pain killers.

~

If we were to meet weird alien beings in the future who operated in completely alien ways, but who act in ways that can be defined as preferences and can engage in trade, reciprocal altruism, etc...then our morality should extend to them. Similarly, I think our morality shouldn't extend to paperclippers - even if they make a "sad face" and run algorithms similar to human distress when a paperclip is destroyed, it doesn't mean the same thing morally.

I think this is the clearest case where our moral theories differ. If the paperclipper suffers, I don't see any reason not to care about that experience. Or, rather, I don't fully understand why you lack care for the paperclipper.

Similarly, while I'm all for extending morality to weird aliens, I don't think trade or reciprocal altruism per se are the precise qualities that make things count morally (for me). I assume you mean these qualities as a proxy for "high intelligence", though, rather than precise qualities?

~

Wait, didn't I take that into account with...

Yes, you did. My bad for missing it. Sorry.

~

I'd wager the human hedons and health benefits from eating something very simple, like a shrimp or a large but unintelligent fish, might actually outweigh the cost to the fish and be a net positive (as it is with plants). My certainty in that matter is low, of course.

How does your uncertainty weigh in practically in this case? Would you, for example, refrain from eating fish while trying to learn more?

Replies from: Ishaan
comment by Ishaan · 2014-01-09T20:48:49.777Z · LW(p) · GW(p)

But I don't think substitutes like "runs an algorithm analogous to human distress" or "has thwarted preferences" offer anything better understood or well-defined.

Point of disagreement: I do think that both of those are more well-defined than "suffering".

I suppose what I think of as suffering probably involves most of the following: nociception, a central nervous system (with connected nociceptors), endogenous opioids, a behavioral pain response, and a behavioral pain response affected by pain killers.

Additionally, I think this statement means you define suffering as "runs an algorithm analogous to human distress". All of these things are specific to Earth-evolved life forms. None of this applies to the class of agents in general.

(Also, nitpick - going by lay usage, you've outlined pain, not suffering. In my preferred usage, for humans at least, pain is explicitly not morally relevant except insofar as it causes suffering.)

If the paperclipper suffers, I don't see any reason not to care about that experience. Or, rather, I don't fully understand why you lack care for the paperclipper.

Rain-check on this...have some work to finish. Will reply properly later.


Would you, for example, refrain from eating fish while trying to learn more?

I don't think so, but I might be wrong...Is risk aversion in the face of uncertainty actually rational in this scenario? Seems to me that there are certain scenarios where risk aversion makes sense (personal finance, for example) and scenarios where it doesn't (effective altruism, for example) and this decision seems to fall in the latter camp. AFAIK, risk / loss aversion only applies where there are diminishing returns on the value of something.
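A small sketch of the claim that risk aversion only pays off under diminishing returns; the utility functions, payoffs, and probabilities below are illustrative assumptions:

```python
# With a linear utility function (no diminishing returns), a gamble is worth
# exactly its expected value, so risk aversion buys nothing; with a concave
# utility function (diminishing returns, e.g. personal wealth), the certain
# option is preferred. All numbers are illustrative.

import math

def expected_utility(lottery, utility):
    return sum(p * utility(x) for p, x in lottery)

gamble = [(0.5, 0.0), (0.5, 200.0)]  # 50/50 chance of 0 or 200 units of good
certain = [(1.0, 100.0)]             # 100 units of good for sure

linear = lambda x: x
concave = math.log1p  # log(1 + x): diminishing returns

print(expected_utility(gamble, linear), expected_utility(certain, linear))    # 100.0 100.0
print(expected_utility(gamble, concave), expected_utility(certain, concave))  # ~2.65 vs ~4.62
```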

I haven't seen any behavioral evidence of fish doing problem solving, being empathetic towards each other, exhibiting cognitive capacities beyond very basic associative learning & memory, or that sort of thing.

practically

Practically, I eat fish and lower animals guilt-free. I limit animals higher than fish to very occasional consumption only - in a similar vein to how I sometimes do things that are bad for the environment, or (when I start earning) plan to sometimes spend money on things that aren't charity, with the recognition that it's mildly immoral selfishness and I should keep it to a minimum. Basically, eating animals seems to be on par with all the other forms of everyday selfishness we all engage in...certainly something to be minimized, but not an abomination.

Where I do consume higher animals, I have plans in the future to shift that consumption towards unpopular cuts of meat (organs, bones, etc.) because that means less negative impact through reduced wastage (and also cheaper, which may enable upgrades with respect to buying from ethical farms + better nutritional profile). The bulk of the profit from slaughtering seems to be the popular muscle meat cuts - if meat eaters would be more holistic about eating the entire animal and not just parts of it, I think there would be less total slaughter.

The trade-offs here are not primarily a taste thing for me - I just get really lethargic after eating grains, so I try to limit them. My strain of Indian culture is vegetarian, so I was accustomed to eating less meat and more grain throughout childhood...but after I reduced my intake of grains I felt more energetic and the period of fogginess that I usually get after meals went away. I also have a family history of diabetes and metabolic disorders (which accelerate age-related declines in cognitive function, which I'm terrified of), and what nutrition research I've done indicates that shifting towards a more paleolithic diet (fruits, vegetables, nuts and meat) is the best way to avoid this. Cutting out both meat and grain makes eating really hard and sounds like a bad idea.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2014-01-11T03:30:54.312Z · LW(p) · GW(p)

Rain-check on this...have some work to finish. Will reply properly later.

Just for the sake of completeness, I'll wait for you to follow up on this before continuing our discussion here.

Replies from: Ishaan
comment by Ishaan · 2014-01-11T13:42:05.642Z · LW(p) · GW(p)

I think this is the clearest case where our moral theories differ. If the paperclipper suffers, I don't see any reason not to care about that experience. Or, rather, I don't fully understand why you lack care for the paperclipper.

If the paperclipper can even "suffer"... I suspect a more useful word to describe the state of the paperclipper is "unclippy". Or maybe not...let's not think about these labels for now. The question is, regardless of the label, what is the underlying morally relevant feature?

I would hazard a guess that many of the supercomputers running our google searches, calculating best-fit molecular models, etc... have enough processing power to simulate a fish that behaves exactly like other fishes. If one wished, one could model these as agents with preference functions. But it doesn't mean anything to "torture" a google-search algorithm, whereas it does mean something to torture a fish, or to torture a simulation of a fish.

You could model something as simple as a light switch as an agent with a preference function, but it would be a waste of time. In the case of an algorithm which finds solutions in a search space, it is actually useful to model it as an agent who prefers to maximize some elements of a solution, as this allows you to predict its behavior without knowing details of how it works. But, just like with the light switch, just because you are modelling it as an agent doesn't mean you have to respect its preferences.

"rational agent" explores the search space of possible actions it can take, and chooses the actions which maximize its preferences - the "correct solution" is when all preferences are maximized. An agent is fully rational if it made the best-possible choice given the data at hand. There are no rational agents, but it's useful to model things which act approximately in this way as agents.

Paperclippers, molecular modelers, and search engines seek to maximize a simple set of preferences (number of paperclips, best fit model, best search). They have "preferences", but not morally relevant ones.

A human (or, hopefully one day a friendly AI) seeks to fulfill an extremely complex set of preferences...as does a fish. They have preferences which carry moral weight.

It's not specific receptors or any particular algorithm that captures what is morally relevant to me about other agents' preferences. If you took a human and replaced its brain with a search algorithm which found the motor output solutions which maximized the original human's preferences, I'd consider this search algorithm to fit the definition of a person (though not necessarily the same person). I'd respect the search algorithm's preferences the same way I respected the preferences of the human it replaced. This new sort of person might instrumentally prefer not having its arms chopped off, or terminally prefer that you not read its diary, but it might not show any signs of pain when you did these things unless showing signs of pain was instrumentally valuable. Violation of this being's preferences may or may not be called "suffering" depending on how you define "suffering"...but either way, I think this being's preferences are just as morally relevant as a human's.

So the question I would turn back to you is...under what conditions could a paperclipper suffer? Do all paperclippers suffer? What does this mean for other sorts of solution-maximizing algorithms, like search engines and molecular modelers?

My case is essentially that it is something about the composition of an agent's preference function which contains the morally relevant component with regard to whether or not we should respect its preferences. The specific nature of the algorithm it uses to carry this preference function out - like whether it involves pain receptors or something - is not morally relevant.

Replies from: William_Quixote, peter_hurford, Lumifer
comment by William_Quixote · 2014-01-11T16:53:41.797Z · LW(p) · GW(p)

Just as a data-point about intuition frequency, I found your intuitions about "a search algorithm which found the motor output solutions which maximized the original human's preference" to be very surprising.

Replies from: Ishaan
comment by Ishaan · 2014-01-11T19:54:06.360Z · LW(p) · GW(p)

Do you mean that the idea itself is weird and surprising to consider?

Or do you mean that my intuition that this search algorithm fits the definition of a "person" and is imbued with moral weight is surprising and does not match your moral intuition?

comment by Peter Wildeford (peter_hurford) · 2014-01-12T21:44:27.646Z · LW(p) · GW(p)

Thanks for the well-thought-out comment. It helps me think through the issue of suffering a lot more.

~

If you took a human and replaced its brain with a search algorithm which found the motor output solutions which maximized the original human's preferences, I'd consider this search algorithm to fit the definition of a person (though not necessarily the same person). [...] Violation of this being's preferences may or may not be called "suffering" depending on how you define "suffering"...but either way, I think this being's preferences are just as morally relevant as a human's. [...]

The question is, regardless of the label, what is the underlying morally relevant feature?

I think this is a good thought experiment and it does push me more toward preference satisfaction theories of well-being, which I have long been sympathetic to. I still don't know much myself about what I view as suffering. I'd like to read and think more on the issue -- I have bookmarked some of Brian Tomasik's essays to read (he's become more preference-focused recently) as well as an interview with Peter Singer where he explains why he's abandoned preference utilitarianism for something else. So I'm not sure I can answer your question yet.

There are interesting problems with desires, such as formalizing them (what is a desire and what makes a desire stronger or weaker, etc.), population ethics (do we care about creating new beings with preferences, etc.) and others that we would have to deal with as well.

~

Paperclippers, molecular modelers, and search engines seek to maximize a simple set of preferences (number of paperclips, best fit model, best search). They have "preferences", but not morally relevant ones. A human (or, hopefully one day a friendly AI) seeks to fulfill an extremely complex set of preferences...as does a fish. They have preferences which carry moral weight.

So it seems like, to you, an entity's welfare matters when it has preferences, weighted based on the complexity of those preferences, with a certain zero threshold somewhere (so thermostat preferences don't count).

I don't think complexity is the key driver for me, but I can't tell you what is.

~

I haven't seen any behavioral evidence of fish doing problem solving, being empathetic towards each other, exhibiting cognitive capacities beyond very basic associative learning & memory, or that sort of thing.

Likewise, I don't think this is much of a concern for me, and it seems inconsistent with the rest of what you've been saying.

Why are problem solving and empathy important? Surely I could imagine a non-empathetic program without the ability to solve most problems, that still has the kind of robust preferences you've been talking about.

And what level of empathy and problem solving are you looking for? Notably, fish engage in cleaning symbiosis (which seems to be in the lower-tier of the empathy skill tree) and Wikipedia seems to indicate (though perhaps unreliably) that fish have pretty good learning capabilities.

~

I don't think so, but I might be wrong...Is risk aversion in the face of uncertainty actually rational in this scenario? Seems to me that there are certain scenarios where risk aversion makes sense (personal finance, for example) and scenarios where it doesn't (effective altruism, for example) and this decision seems to fall in the latter camp.

That makes sense to me.

Replies from: Ishaan
comment by Ishaan · 2014-01-12T23:08:48.748Z · LW(p) · GW(p)

an entity's welfare matters when it has preferences, weighted based on the complexity of those preferences

No, it's not complexity but the content of the preferences that makes the difference. Sorry for mentioning the complexity - I didn't mean to imply that it was the morally relevant feature.

I'm not yet sure what sort of preferences give an agent morally weighty status...the only thing I'm pretty sure about is that the morally relevant component is contained somewhere within the preferences, with intelligence as a possible mediating or enabling factor.

Here's one pattern I think I've identified:

  • I belong within reference Class X.

  • All beings in Reference Class X care about other beings in Reference Class X, when you extrapolate their volition.

When I hear about altruistic mice, it is evidence that the mouse's extrapolated volition would cause it to care about Class X beings' preferences to the extent that it can comprehend them. The cross-species altruism of dogs and dolphins and elephants is an especially strong indicator of Class X membership.

On the other hand, the within-colony altruism of bees (basically identical to Reference Class X except it only applies to members of the colony and I do not belong in it), or the swarms and symbiosis of fishes or bacterial gut flora, wouldn't count...being in Reference Class X is clearly not the factor behind the altruism in those cases.

...which sounds awfully like reciprocal altruism in practice, doesn't it? Except that, rather than looking at the actual act of reciprocation of altruism, I'd be extrapolating the agent's preferences for altruism. Perhaps Class X would be better named "Friendly", in the "Friendly AI" sense - all beings within the class are to some extent Friendly towards each other.

This is at the rough edge of my thinking though - the ideas as just stated are experimental and I don't have well defined notions about which preferences matter yet.

Edit: Another (very poorly thought out) trend which seems to emerge is that agents which have a certain sort of awareness are entitled to a sort of bodily autonomy ... because it seems immoral to sit around torturing insects if one has no instrumental reason to do so. (But is it immoral in the sense that there are a certain number of insects which morally outweigh a human? Or is it immoral in a virtue ethic-y, "this behavior signals sadism" sort of way?)

My main point is that I'm mildly guessing that it's probably safe to narrow down the problem to some combination of preference functions and level of awareness. In any case, I'm almost certain that there exist preference functions that are sufficient (but maybe not necessary?) to confer moral weight onto an agent...and though there may be other factors unrelated to preference or intelligence that play a role, preference function is the only thing with a concrete definition that I've identified so far.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2014-01-13T23:27:33.771Z · LW(p) · GW(p)

...which sounds awfully like reciprocal altruism in practice, doesn't it? Except that, rather than looking at the actual act of reciprocation of altruism, I'd be extrapolating the agent's preferences for altruism. Perhaps Class X would be better named "Friendly", in the "Friendly AI" sense - all beings within the class are to some extent Friendly towards each other.

Just so I understand you better, how would you compare and contrast this kind of pro-X "kin" altruism with utilitarianism?

Replies from: Ishaan
comment by Ishaan · 2014-01-13T23:48:08.577Z · LW(p) · GW(p)

Utilitarianism has never made much sense to me except as a handy way to talk about things abstractly when precision isn't important.

...but I suppose X would be a class of agents who consider each other's preferences when they make utilitarian calculations? I pretty much came up with the pro-X idea less than a month ago, and haven't thought it through very carefully.

Oh, here's a good example of where preference utilitarianism fails which illustrates it:

10^100 intelligent people terminally prefer that 1 person is tortured. Preference utilitarianism says "do the torture". My moral instinct says "no, it's still wrong, no matter how many people prefer it".

Perhaps under the pro-X system, the reason we can ignore the preferences of 10^100 people is that the preference which they have expressed lies strictly outside category X and therefore that preference can be ignored?

Whereas, if you have a Friendly Paperclipper (cares about X-agents and paperclips with some weight on each), the Friendly moral values put it within X...which means that we should now be willing to cater to its morally neutral paper-clip preferences as well.

(If this reads sloppy, it's because my thoughts on the matter currently are sloppy)

So...I guess there's sort of a taxonomy of moral-good, neutral-selfish, and evil preferences...and part of being good means caring about other people's selfish preferences? And part of being evil means valuing the violation of others' preferences? And, good agents can simply ignore evil preferences.

And (under the pro-X system), good agents can also ignore the preferences of agents that aren't in any way good...which seems like it might not be correct, which is why I say that there might be other factors in addition to pro-X that make an agent worth caring about for my moral instincts, but if they exist I don't know what they are.

Replies from: Dentin, Jiro
comment by Dentin · 2014-01-14T00:57:49.939Z · LW(p) · GW(p)

Are you perhaps confusing 'morally wrong' with 'a sucky tradeoff that I would prefer not to be bound by'?

Just because torturing one person sucks, just because we find it abhorrent, does not mean that it isn't the best outcome in various situations. If your definition of 'moral' is "best outcome when all things are considered, even though aspects of it suck a lot and are far from ideal", then yes, torturing someone can in fact be moral. If your definition of 'moral' is "those things which I find reprehensible", then quite probably you can never find torturing someone to be moral. However, there are scenarios where it may still be necessary, or the best option.

Replies from: Ishaan
comment by Ishaan · 2014-01-14T02:05:20.552Z · LW(p) · GW(p)

Are you perhaps confusing 'morally wrong' with 'a sucky tradeoff that I would prefer not to be bound by'?

Nope...because ..

quite probably you can never find torturing someone to be moral. However, there are scenarios where it may still be necessary, or the best option.

...because I believe that torturing someone could still instrumentally be the right thing to do on consequentialist grounds.

In this scenario, 10^100 people terminally value torturing one person, but I do not care about their preferences, because it is an evil preference.

However, in an alternate scenario, if I had to choose between 10^100 people getting mildly hurt or 1 person getting tortured, I'd choose the one person getting tortured.

In these two scenarios, the preference weights are identical, but in the first scenario the preference of the 10^100 people is evil and therefore irrelevant in my calculations, whereas in the second scenario the needs of 10^100 outweigh the needs of the one.

This is less a discussion about torture, and more a discussion about whose/which preferences matter. Sadistic preferences (involving real harm, not the consensual kink), for example, don't matter morally - there's no moral imperative to fulfill those preferences, no "good" done when those preferences are fulfilled and no "evil" resulting from thwarting those preferences.

Replies from: Dentin
comment by Dentin · 2014-01-14T18:00:44.942Z · LW(p) · GW(p)

I think you should temporarily taboo 'moral', 'morality', and 'evil', and simply look at the utility calculations. 10^100 people terminally value something that you ascribe zero or negative value to; therefore, their preferences do not matter to you or will make your universe worse from the standpoint of your utility function.

Which preferences matter? Yours matter to you, and thiers matter to them. There's no 'good' or 'evil' in any absolute sense, merely different utility functions that happen to conflict. There's no utility function which is 'correct', except by some arbitrary metric, of which there are many.

Consider another hypothetical utility function: The needs of the 10^100 don't outweigh the needs of the one, so we let the entire 10^100 suffer when we could eliminate it by inconveniencing one single entity. Neither you nor the 10^100 are happy with this one, but the person about to be tortured may think it's just fine and dandy...

Replies from: Ishaan, TheOtherDave
comment by Ishaan · 2014-01-14T21:50:59.300Z · LW(p) · GW(p)

...I don't denotatively disagree with anything you've said, but I also think you're sort of missing the point and forgetting the context of the conversation as it was in the preceding comments.

We all have preferences, but we do not always know what our own preferences are. A subset of our preferences (generally those which do not directly reference ourselves) is termed "moral preferences". The preceding discussion between me and Peter Hurford is an attempt to figure out what our preferences are.

In the above conversation, words like "matter", "should" and "moral" are understood to mean "the shared preferences of Ishaan, Dentin, and Peter_Hurford which they agree to define as moral". Since we are all human (and similar in many other ways beyond that), we probably have very similar moral preferences...so any disagreement that arises between us is usually due to one or both of us inaccurately understanding our own preferences.

There's no 'good' or 'evil' in any absolute sense

This is technically true, but it's also often a semantic stopsign which derails discussions of morality. The fact is that the three of us humans have a very similar notion of "good", and can speak meaningfully about what it is...the implicitly understood background truths of moral nihilism notwithstanding.

It doesn't do to exclaim "but wait! good and evil are relative!" during every moral discussion...because here, between us three humans, our moral preferences are pretty much in agreement and we'd all be well served by figuring out exactly what those preferences are. It's not like we're negotiating morality with aliens.

Which preferences matter? Yours matter to you

Precisely...my preferences are all that matter to me, and our preferences are all that matter to us. So if 10^100 sadistic aliens want to torture...so what? We don't care if they like torture, because we dislike torture and our preferences are all that matter. Who cares about overall utility? "Morality", for all practical purposes, means shared human morality...or, at least, the shared morality of the humans who are having the discussion.

"Utility" is kind of like "paperclips"...yes, I understand that in the best case scenario it might be possible to create some sort of construct which measures how much "utility" various agent-like objects get from various real world outcomes, but maximizing utility for all agents within this framework is not necessarily my goal...just like maximizing paperclips is not my goal.

comment by TheOtherDave · 2014-01-14T18:14:53.617Z · LW(p) · GW(p)

So, I'm curious... can you unpack what you mean by "temporarily" in this comment?

Replies from: Dentin
comment by Dentin · 2014-01-14T18:47:14.943Z · LW(p) · GW(p)

For the purposes of this conversation at least. I've largely got them taboo'd in general because I find them confusing and full of political connotations; I suspect at least some of that is the problem here as well.

comment by Jiro · 2014-01-14T00:27:17.884Z · LW(p) · GW(p)

10^100 intelligent people terminally prefer that 1 person is tortured. Preference utilitarianism says "do the torture". My moral instinct says "no, it's still wrong, no matter how many people prefer it".

Yet your moral instinct is perfectly fine with having a justice system that puts innocent people in jail with a greater than 1 in 10^100 error rate.

Replies from: Ishaan
comment by Ishaan · 2014-01-14T01:58:23.117Z · LW(p) · GW(p)

Sure, on instrumental grounds for consequentialist reasons. Not a terminal preference.

comment by Lumifer · 2014-01-11T14:48:31.916Z · LW(p) · GW(p)

Paperclippers, molecular modelers, and search engines seek to maximize a simple set of preferences (number of paperclips, best fit model, best search). They have "preferences", but not morally relevant ones.

Usually people speak of preferences when there is a possibility of choice -- the agent can meaningfully choose between doing A and doing B.

This is not the case with respect to molecular models, search engines, and light switches.

Replies from: V_V, Ishaan
comment by V_V · 2014-01-11T15:24:05.671Z · LW(p) · GW(p)

At least for search engines, I would say there exists a meaningful level of description where it can be said that the search engine chooses which results to display in response to a query, approximately maximizing some kind of scoring function.

Replies from: Lumifer
comment by Lumifer · 2014-01-11T15:34:54.709Z · LW(p) · GW(p)

there exists a meaningful level of description where it can be said that the search engine chooses which results to display in response to a query

I don't think it is meaningful in the current context. The search engine is not an autonomous agent and doesn't choose anything any more than, say, the following bit of pseudocode: if (rnd() > 0.5) { print "Ha!" } else { print "Ooops!" }

Replies from: Ishaan
comment by Ishaan · 2014-01-11T20:10:32.433Z · LW(p) · GW(p)

"If you search for "potatoes" the engine could choose to return results for "tomatoes" instead...but will choose to return results for potatoes because it (roughly speaking) wants to maximize the usefulness of the search results."

"If I give you a dollar you could choose to tear it to shreds, but you instead will choose to put it in your wallet because (roughly speaking) you want to xyz..."

When you flip the light switch "on" it could choose to not allow current through the system, but it will allow current through the system because it wants current to flow through the system when it is in the "on" position.

Except for degree of complexity, what's the difference? "Choice" can be applied to anything modeled as an agent.

Replies from: Lumifer
comment by Lumifer · 2014-01-11T21:44:35.785Z · LW(p) · GW(p)

When you flip the light switch "on" it could choose to not allow current through the system, but it will allow current through the system because it wants current to flow through the system when it is in the "on" position.

Sorry, I read this as nonsense. What does it mean for a light switch to "want"?

Replies from: Ishaan, V_V
comment by Ishaan · 2014-01-11T21:57:44.927Z · LW(p) · GW(p)

To determine the "preferences" of objects which you are modeling as agents, see what occurs, and construct a preference function that explains those occurrences.

Example: This amoeba appears to be engaging in a diverse array of activities which I do not understand at all, but they all end up resulting in the maintenance of its physical body. I will therefore model it as "preferring not to die", and use that model to make predictions about how the amoeba will respond to various situations.
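A minimal sketch of the procedure described here - observe what a system does, then adopt whichever candidate "preference" best explains those observations; the observations and candidate preferences below are made up for illustration, not a model of any real organism:

```python
# Sketch of the move described above: watch what a system does, then pick
# whichever candidate "preference" best explains the observed outcomes.

observations = [
    # (situation, observed behavior)
    ("food nearby", "moves toward food"),
    ("toxin nearby", "moves away from toxin"),
    ("membrane damaged", "repairs membrane"),
]

candidate_preferences = {
    "prefers not to die": {
        "food nearby": "moves toward food",
        "toxin nearby": "moves away from toxin",
        "membrane damaged": "repairs membrane",
    },
    "prefers to move at random": {
        "food nearby": "moves randomly",
        "toxin nearby": "moves randomly",
        "membrane damaged": "moves randomly",
    },
}

def explained(predictions):
    """Count how many observations a candidate preference predicts correctly."""
    return sum(predictions.get(situation) == behavior
               for situation, behavior in observations)

best = max(candidate_preferences, key=lambda p: explained(candidate_preferences[p]))
print(best)  # "prefers not to die" explains the observed behavior best
```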

comment by V_V · 2014-01-14T17:51:34.815Z · LW(p) · GW(p)

I think the light switch example is far-fetched, but the search engine isn't. The point is whether there exists a meaningful level of description where framing the system behavior in terms of making choices to satisfy certain preferences is informative.

Replies from: Lumifer
comment by Lumifer · 2014-01-14T19:59:38.342Z · LW(p) · GW(p)

Don't forget that the original context was morality.

You don't think it is far-fetched to speak of the morality of search engines?

Replies from: V_V
comment by V_V · 2014-01-15T00:04:59.720Z · LW(p) · GW(p)

Yes, it is.

comment by Ishaan · 2014-01-11T20:04:58.070Z · LW(p) · GW(p)

The distinction you are making between the input-output function of a human as a "choice" vs. the input-output of a machine as "not-a-choice" sounds very reminiscent of the traditional naive / confused model of free will that people commonly have before dissolving the question...but you're a frequent poster here, so perhaps I've misunderstood your meaning. Are you using a specialized definition of the word "choice"?

Replies from: Lumifer
comment by Lumifer · 2014-01-11T21:06:24.279Z · LW(p) · GW(p)

I have no wish for this to develop into a debate about free will. Let me point out that just because I know the appropriate part of the Sequences does not necessarily mean I agree with it.

As a practical matter, speaking about choices of light switches seems silly. Given this, I don't see why speaking about choices of search engines is not silly. It might be useful conversational shorthand in some contexts, but I don't think it is useful in the context of talking about morality.

Replies from: Ishaan
comment by Ishaan · 2014-01-11T21:23:17.588Z · LW(p) · GW(p)

Let me point out that just because I know the appropriate part of the Sequences does not necessarily mean I agree with it.

Ah, ok - sorry. The materialist, dissolved view of free will related questions has been a strongly held view of mine since a very young age, so my prior for a person who is aware of these ideas yet subscribes to what I'll call the "naive view" for lack of a better word is very low.

It's not really the particulars of the sequences here which are in question - the people who say free will doesn't exist, and the people who say it does but redefine free will in funny ways, the panpsychists, the compatibilists and non-compatibilists, all share in common a non-dualist view which does not allow them to label the search engine's processes and the human's processes as fundamentally, qualitatively different processes. This is a deep philosophical divide that has been debated for, as far as I am aware, at least two thousand years.

As a practical matter, speaking about choices of light switches seems silly. Given this, I don't see why speaking about choices of search engines is not silly.

By analogy, speaking of choices of humans seems silly, since humans are governed by the same basic laws.

The fundamental disagreement here runs rather deeply - it's not going to be possible to talk about this without diving into free will.

Replies from: Lumifer, TheOtherDave
comment by Lumifer · 2014-01-11T21:42:46.916Z · LW(p) · GW(p)

has been a strongly held view of mine since a very young age, so my prior ... is very low.

Philosophical disagreements aside, that doesn't seem to be a good way to construct priors for other people's views.

comment by TheOtherDave · 2014-01-11T21:56:43.149Z · LW(p) · GW(p)

If I understood the causal mechanisms underlying the actions of humans as well as I do those underlying lightswitches, talking about the former as "choices" would seem as silly to me as talking that way about the latter does.

But I don't, so it doesn't.

I assume you don't understand the causal mechanisms underlying the actions of humans either. So why does talking about them as "choices" seem silly to you?

Replies from: Ishaan
comment by Ishaan · 2014-01-11T22:09:21.885Z · LW(p) · GW(p)

I agree with you. Whether we model something as an agent or an object is a feature of our map, not the territory. It's not useful to model light switches as agents because they are too simple, and looking at them through the lens of preferences is not simple or informative. Meanwhile, it is useful to model humans partially as preference maximizing agents to make approximations.

However, in the context of the larger discussion, I interpret Lumifer as treating the distinction between "choice" and "event" as a feature of the territory itself, and positing a fundamental qualitative difference between a "choice" and other sorts of events. My reply should be seen as an assertion that such qualitative differences are not features of the territory - if it's impossible to model a light switch as having choices, then it's also impossible to model a human as having choices. (My actual belief is that it's possible to model both as having choices or not having them)

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-12T02:58:30.136Z · LW(p) · GW(p)

Is your actual belief that there are equivalent grounds for modeling both either way?

If so, I disagree... from my own perspective, modeling people as preference-maximizing agents is significantly more justified (due to differences in the territory) than modeling a light switch that way.

If not, to what do you attribute the differential?

Replies from: Ishaan
comment by Ishaan · 2014-01-12T11:59:12.019Z · LW(p) · GW(p)

Is your actual belief that there are equivalent grounds for modeling both either way?

...it is possible to model things either way, but it is more useful for some objects than others.

It's not useful to model light switches as agents because they are too simple, and looking at them through the lens of preferences is not simple or informative. Meanwhile, it is useful to model humans partially as preference maximizing agents to make approximations.

Modeling an object as an agent is useful when the object exhibits a pattern of behavior which is roughly consistent with preference maximizing. A search engine is well modeled as an agent. A human is very well modeled as an agent.

A light switch is very poorly modeled as an agent. Thinking of it in terms of preference pattern doesn't make it any easier to predict its behavior. But you can model it as an agent, if you'd like.

By "justified" do you mean "useful"?

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-12T16:12:16.906Z · LW(p) · GW(p)

I am willing to adopt "useful" in place of "justified" if it makes this conversation easier. In which case my question could be rephrased "Is it equally useful to model both either way?"

To which your answer seems to be no... it's more useful to model a human as an agent than it is a light-switch. (I'm inferring, because despite introducing the "useful" language, what you actually say instead introduces the language of something being "well-modeled." But I'm assuming that by "well-modeled" you mean "useful.")

And your answer to the followup question is that the pattern of behavior of a light switch is different from that of a search engine or a human, such that adopting an intentional stance towards the former doesn't make it easier to predict.

Have I understood you correctly?

Replies from: Ishaan
comment by Ishaan · 2014-01-12T18:38:57.018Z · LW(p) · GW(p)

Yup. Modeling something as a preference maximizing agent is generally useful for things which systematically behave in ways that maximize certain outcomes in a diverse array of situations. It allows you to make accurate predictions even when you don't fully understand the mechanics that generate the events you are predicting.

(I distinguished useful and justified because I wasn't sure if "justified" had moral connotations in your usage)

Edit: On reading the wiki, I tend to agree with the views that the wiki attributes to Dennett. Thanks for the reference and the word "intentional stance".

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-12T19:47:49.912Z · LW(p) · GW(p)

OK. So, having clarified that, I return to your initial comment:

The distinction you are making between the input-output function of a human as a "choice" vs. the input-output of a machine as "not-a-choice" sounds very reminiscent of the traditional naive / confused model of free will that people commonly have before dissolving the question

...and am as puzzled by it as I was in the first place.

You agree that the input-output function of a human differs from the input-output of a machine like a light switch in ways that make it more useful to model the former but not the latter as maximizing preferences. (To adopt the intentional stance towards the former and the design stance towards the latter, in Dennett's terminology.)

So, given that, what is your objection to Lumifer's distinction? "Choice" seems like a perfectly reasonable word to use when taking an intentional stance, and to not use when taking a design stance.

When I asked earlier, you explained that your objection had to do with attributing "territory-level" differences to humans and machines, when it's really a "map-level" objection... that it's possible to talk about a light-switch's choices, or not talk about a human's choices, so it's not really a difference in the system at all, just a difference in the speaker.

But given that you agree that there's a salient "territory-level" difference between the two systems (specifically, the differences which make the intentional stance more useful than the design stance wrt humans, but not wrt light-switches), I don't quite get the objection. Sure, it's possible to take either stance towards either system, but it's more useful to take the intentional stance towards humans, and that's a "fact about the territory."

No?

Replies from: Ishaan
comment by Ishaan · 2014-01-12T22:26:35.961Z · LW(p) · GW(p)

Because in the preceding comment, I was demonstrating that we should not morally care about light switches, search engines, and paperclippers...whereas we should morally care about fishes, dogs, and humans... because of differences in the preference profiles of these beings when they are modeled as agents.

Peter Hurford disagreed with me on the non-moral status of the paper-clipper. I was demonstrating the non-moral status of a being which cared only for paper clips by analogy to a search engine (a being which only cares about bringing up the best search result).

Whereas what Lumifer was saying is that the very premise that a search engine could have choices was fundamentally flawed (which, if true, would cause the whole analogy to break down).

The thing is, it's not fundamentally flawed to think of a search engine as having choices. Sure, search engines are a little less usefully modeled as agent-like when compared to humans, but it's just a matter of degree.

the input-output function of a human as a "choice" vs. the input-output of a machine as "not-a-choice"

I was objecting to his hard, qualitative binary, not your and Dennett's soft/qualitative spectrum.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-12T22:32:04.439Z · LW(p) · GW(p)

Thanks for clarifying.

comment by Solitaire · 2014-01-06T16:23:17.282Z · LW(p) · GW(p)

This seems plausible to me from a perspective of "these animals likely are less capable of suffering", but I think you're missing two things in your analysis: (1) the degree of suffering required to create the food, which varies between species, and (2) the amount of food provided by each animal.

Additionally, when there is a body of evidence to suggest that nutrient-equivalent food sources can be produced in a more energy-efficient manner and with no direct suffering to animals (indirect suffering being, for example, the unavoidable death of insects in crop harvesting), I believe it is a rational choice to move towards those methods.

comment by smk · 2013-07-27T00:28:26.618Z · LW(p) · GW(p)

I don't think there's much we can do now to ensure successful colonization

Existential risk reduction charities?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-07-27T02:59:20.127Z · LW(p) · GW(p)

I'm very unsure about the expected success of existential risk reduction charities.

comment by MTGandP · 2013-07-23T23:56:12.849Z · LW(p) · GW(p)

Your points (1) and (2) seem like fully general counterarguments against any activity at all, other than the single most effective activity at any given time. I do agree with you that future suffering could potentially greatly outweigh present suffering, and I think it's very important to try to prevent future suffering of non-human animals. However, it seems that one of the best ways to do that is to encourage others to care more for the welfare of non-human animals, i.e. become veg*ans.

Perhaps more importantly, it makes sense from a psychological perspective to become a veg*an if you care about non-human animals. It seems that if I ate meat, cognitive dissonance would make it much harder for me to make an effort to prevent non-human suffering on a broader scale.

(4): Although I see no way to falsify this belief, I also don't see any reason to believe that it's true. Furthermore, it runs counter to my intuitions. Are profoundly mentally disabled humans incapable of "true" suffering?

(5): Humans and non-human animals evolved in the same way, so it strikes me as highly implausible that humans would be capable of suffering while all non-humans would lack this capacity.

Replies from: Eliezer_Yudkowsky, RobbBB
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-24T02:27:45.240Z · LW(p) · GW(p)

I don't engage in the vast majority of possible activities. Neither do you, so on net, the class of arguments you accept must militate against almost all activities, right?

Replies from: MTGandP, pianoforte611
comment by MTGandP · 2013-07-24T02:37:21.629Z · LW(p) · GW(p)

Are you saying that most arguments that you should do X are fully general counterarguments against doing anything other than X?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-24T02:59:59.507Z · LW(p) · GW(p)

Why did you type that comment? Did you consider the arguments for typing that comment as fully general counterarguments against all the other possible comments you could have made? If not, why not post them too?

Replies from: MTGandP
comment by MTGandP · 2013-07-24T03:07:14.343Z · LW(p) · GW(p)

I'm not sure I understand what you're trying to say. It sounds like you're saying that we make decisions without considering all possible arguments for and against them, in which case I'm not sure what you're saying with regard to my original comment.

To construct the comment that you just replied to, I considered various possible questions that I roughly rated by how effectively they would help me to understand what you're saying, and limited my search due to time constraints. The arguments for posting that comment work as counterarguments against posting any other comment I considered, e.g. it was the best comment I considered. It's not the best possible comment, but it would be a waste of time to search the entirety of comment-space to find the optimal comment.

comment by pianoforte611 · 2013-07-24T13:53:39.853Z · LW(p) · GW(p)

No, I don't decide what to do with my time by coming up with arguments ruling out every other activity that I could be doing.

comment by Rob Bensinger (RobbBB) · 2013-07-24T00:27:12.077Z · LW(p) · GW(p)

Your points (1) and (2) seem like fully general counterarguments against any activity at all, other than the single most effective activity at any given time.

That's more or less what I intended them to be. Isn't doing only the most effective activities available to you... a good idea?

However, I'd phrase the argument in terms of degrees: Activities are good to the extent they conduce to your making better decisions for the future, bad to the extent they conduce to your making worse decisions for the future. So doing the dishes might be OK even if it's not the Single Best Thing You Could Possibly Be Doing Right Now, provided it indirectly helps you do better things than you otherwise would. Some suboptimal things are more suboptimal than others.

However, it seems that one of the best ways to do that is to encourage others to care more for the welfare of non-human animals

Maybe? If you could give such an argument, though, it would show that my argument isn't a fully general counterargument -- vegetarianism would be an exception, precisely because it would be the optimal decision.

it makes sense from a psychological perspective to become a veg*an if you care about non-human animals.

Right. I think the disagreement is about the ethical character of vegetarianism, not about whether it's a psychologically or aesthetically appealing life-decision (to some people). It's possible to care about the wrong things, and it's possible to assign moral weight to things that don't deserve it. Ghosts, blastocysts, broccoli stalks, abstract objects....

Although I see no way to falsify this belief, I also don't see any reason to believe that it's true.

To assess (4) I think we'd need to look at the broader ethical and neurological theories that entail it, and assess the evidence for and against them. This is a big project. Personally, my uncertainty about the moral character of non-sapients is very large, though I think I lean in your direction. (Actually, my uncertainty and confusion about most things sapience- and sentience- related are very large.)

Replies from: Desrtopa, pianoforte611, MTGandP
comment by Desrtopa · 2013-07-24T02:12:06.186Z · LW(p) · GW(p)

That's more or less what I intended them to be. Isn't doing only the most effective activities available to you... a good idea?

Within practical limits. It's not effective altruism if you drive yourself crazy trying to hold yourself to unattainable standards and burn yourself out.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-24T02:53:57.593Z · LW(p) · GW(p)

Practical limits are built into 'effective'. The most effective activity for you to engage in is the most effective activity for you to engage in, not for a perfectly rational arbitrarily computationally powerful god to engage in. Going easy on yourself, to the optimal degree, is (for creatures like us) part of behaving optimally at all. If your choice (foreseeably) burns you out, and the burnout isn't worth the gain, your choice was just wrong.

comment by pianoforte611 · 2013-07-24T13:57:57.252Z · LW(p) · GW(p)

That's more or less what I intended them to be. Isn't doing only the most effective activities available to you... a good idea?

However, I'd phrase the argument in terms of degrees: Activities are good to the extent they conduce to your making better decisions for the future, bad to the extent they conduce to your making worse decisions for the future. So doing the dishes might be OK even if it's not the Single Best Thing You Could Possibly Be Doing Right Now, provided it indirectly helps you do better things than you otherwise would. Some suboptimal things are more suboptimal than others.

Wouldn't you agree that veganism is less suboptimal than say entertainment? I'm assuming you're okay with people playing video games, going to the movies etc. even if those activities don't accomplish any long term altruistic goals. So I don't know what your issue with veganism is.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-24T17:59:38.254Z · LW(p) · GW(p)

Wouldn't you agree that veganism is less suboptimal than say entertainment?

Depends. For a lot of people, some measure of entertainment helps recharge their batteries and do better work, much more so than veganism probably would. I'll agree that excessive recreational time is a much bigger waste (for otherwise productive individuals) than veganism. I'm not singling veganism out here; it just happens to be the topic of discussion for this thread. If veganism recharges altruists' batteries in a way similar to small amounts of recreation, and nothing better could do the job in either case, then veganism is justifiable for the same reason small amounts of recreation is.

Replies from: pianoforte611
comment by pianoforte611 · 2013-07-24T18:37:47.604Z · LW(p) · GW(p)

For a lot of people, some measure of entertainment helps recharge their batteries and do better work

I suspect that most people engage in much more entertainment than is necessary for recharging their batteries to do more work. I hope you don't think that entertainment and recreation are justifiable only because they allow us to work.

and nothing better could do the job in either case

This sounds like a fully general counterargument against doing almost anything at all.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-24T18:49:16.366Z · LW(p) · GW(p)

I suspect that most people engage in much more entertainment than is necessary for recharging their batteries to do more work.

Yes. I would interpret that as meaning that people spend too much time having small amounts of fun, rather than securing much larger amounts of fun for their descendants.

I hope you don't think that entertainment and recreation are justifiable only because they allow us to work.

No, fun is intrinsically good. But it's not so hugely intrinsically good that this good can outweigh large opportunity costs. And our ability to impact the future is large enough that small distractions, especially affecting people with a lot of power to change the world, can have big costs. I'm with Peter Singer on this one; buying a fancy suit is justifiable if it helps you save starving Kenyans, but if it comes at the expense of starving Kenyans then you're responsible for taking that counterfactual money from them. And time, of course, is money too.

(I'm not sure this is a useful way for altruists to think about their moral obligations. It might be too stressful. But at this point I'm just discussing the obligations themselves, not the ideal heuristics for fulfilling them.)

This sounds like a fully general counterargument against doing almost anything at all.

It is, as long as you keep in mind that for every degree of utility there's an independent argument favoring that degree over the one right below it. So it's a fully general argument schema: 'For any two incompatible options X and Y, if utility(X) > utility(Y), don't choose Y if you could instead choose X.' This makes it clear that the best option is preferable to all suboptimal options, even though somewhat suboptimal things are a lot better than direly suboptimal ones.

Replies from: pianoforte611
comment by pianoforte611 · 2013-07-24T21:32:11.638Z · LW(p) · GW(p)

In that case, why are you spending time arguing against vegetarianism, instead of spending time arguing against behaviors that waste even more time and resources?

comment by MTGandP · 2013-07-24T02:26:33.612Z · LW(p) · GW(p)

That's more or less what I intended them to be. Isn't doing only the most effective activities available to you... a good idea?

I felt like it was a bit unfair for you to use fully general counterarguments against veganism in particular. However, after your most recent reply, I can better see where you're coming from. I think a better message to take from this essay (although I'm not sure Peter would agree) is that people in general should eat less meat, not necessarily you in particular. If you can get one other person to become a vegan in lieu of becoming one yourself, that's just as good.

I think the disagreement is about the ethical character of vegetarianism, not about whether it's a psychologically or aesthetically appealing life-decision (to some people).

If non-vegans are less effective at reducing suffering than vegans due to a quirk of human psychology (i.e. cognitive dissonance preventing them from caring sufficiently about non-humans), then this becomes an ethical issue and not just a psychological one.

To assess (4) I think we'd need to look at the broader ethical and neurological theories that entail it, and assess the evidence for and against them. This is a big project.

I agree with you here. I feel sufficiently confident that animal suffering matters, but the empirical evidence here is rather weak.

comment by shminux · 2013-07-23T23:23:15.527Z · LW(p) · GW(p)

That's some excellent steelmanning. I would also add that creating animals for food with lives barely worth living is better than not creating them at all, from a utilitarian (if repugnant) point of view. And it's not clear whether a farm chicken's life is below that threshold.

Replies from: MTGandP, Lukas_Gloor, Quinn, Xodarap
comment by MTGandP · 2013-07-23T23:58:35.986Z · LW(p) · GW(p)

I think it's fairly clear that a farm chicken's life is well below that threshold. If I had the choice between losing consciousness for an hour or spending an hour as a chicken on a factory farm, I would definitely choose the former.

Ninja Edit: I think a lot of people have poor intuitions when comparing life to non-life because our brains are wired to strongly shy away from non-life. That's why the example I gave above used temporary loss of consciousness rather than death. Even if you don't buy the above example, I think it's possible to see that factory-farmed life is worse than death. This article discussed how doctors--the people most familiar with medical treatment--frequently choose to die sooner rather than attempt to prolong their lives when they know they will suffer greatly in their last days. It seems that life on a factory farm would entail much more suffering than death by a common illness.

Replies from: shminux
comment by shminux · 2013-07-24T00:05:37.746Z · LW(p) · GW(p)

If I could choose to live for an additional hour but had to spend that time as a chicken on a factory farm, I would certainly decline.

I probably would too, but I am not a chicken. I think you are over-anthropomorphizing them.

Replies from: MTGandP, Pentashagon
comment by MTGandP · 2013-07-24T00:20:48.436Z · LW(p) · GW(p)

I don't see why a chicken would choose any differently. We have no reason to believe that chicken-suffering is categorically different from human-suffering.

Replies from: Watercressed
comment by Watercressed · 2013-07-24T01:09:50.978Z · LW(p) · GW(p)

If we were to put a bunch of chickens into a room, and on one side of the room was a wolf, and the other side had factory farming cages that protected the chickens from the wolf, I would expect the chickens to run into the cages.

It's true that chickens can comprehend a wolf much better than they can comprehend factory farming, but I'm not quite sure how that affects this thought experiment.

Replies from: MTGandP
comment by MTGandP · 2013-07-24T01:14:37.601Z · LW(p) · GW(p)

And I expect that a human would do the same thing.

Replies from: Watercressed
comment by Watercressed · 2013-07-24T01:34:51.119Z · LW(p) · GW(p)

I made a hash of that comment; I'm sorry.

comment by Pentashagon · 2013-07-26T22:52:12.485Z · LW(p) · GW(p)

This is testable; give the chickens a lever to peck that knocks them out for an hour.

comment by Lukas_Gloor · 2013-07-23T23:41:12.249Z · LW(p) · GW(p)

Even if this is correct, in terms of value spreading it seems to be a very problematic message to convey. Most people are deontologists and would never even consider accepting this argument for human infants, so if we implicitly or explicitly accept it for animals, then this is just going to reinforce the prejudice that some forms of suffering are less important simply because they are not experienced by humans/our species. And such a defect in our value system may potentially have much more drastic consequences than the opportunity costs of not getting some extra life-years that are slightly worth living.

Then there is also an objection from moral uncertainty: If the animals in farms and especially factory farms (where most animals raised for food purposes are held) are above "worth living", then barely so! It's not like much is at stake (the situation would be different if we'd wirehead them to experience constant orgasm). Conversely, if you're wrong about classical utilitarianism being your terminal value, then all the suffering inflicted on them would be highly significant.

comment by Quinn · 2013-07-24T20:04:42.465Z · LW(p) · GW(p)

Robin Hanson has advocated this point of view.

I find the argument quite unconvincing; Hanson seems to be making the mistake of conflating "life worth living" with "not committing suicide" that is well addressed in MTGandP's reply (and grandchildren).

comment by Xodarap · 2013-07-24T00:06:54.314Z · LW(p) · GW(p)

This is a good point, and was raised below. Note that the argument doesn't seem to be factually true, independent of moral considerations. (You don't actually create more lives by eating meat.)

comment by Xodarap · 2013-07-24T13:36:34.778Z · LW(p) · GW(p)

Regarding (4) (and to a certain extent 3 and 5): I assume you agree that a species feels phenomenal pain just in case it proves evolutionarily beneficial. So why would it improve fitness to feel pain only if you have "abstract thought"?

The major reason I have heard for phenomenal pain is learning, and all vertebrates show long-term behavior modification as the result of painful stimuli, as anyone who has taken a pet to the vet can verify. (Notably, many invertebrates do not show long-term modification, suggesting that vertebrate vs. invertebrate may be a non-trivial distinction.)

Richard Dawkins has even suggested that phenomenal pain is inversely related to things like "abstract thought", although I'm not sure I would go that far.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-24T18:10:30.465Z · LW(p) · GW(p)

Actually, I'm an eliminativist about phenomenal states. I wouldn't be completely surprised to learn that the illusion of phenomenal states is restricted to humans, but I don't think that this illusion is necessary for one to be a moral patient. Suppose we encountered an alien species whose computational substrate and architecture was so exotic that we couldn't rightly call anything it experienced 'pain'. Nonetheless it might experience something suitably pain-like, in its coarse-grained functional roles, that we would be monsters to start torturing members of this species willy-nilly.

My views about non-human animals are similar. I suspect their psychological states are so exotic that we would never recognize them as pain, joy, sorrow, surprise, etc. (I'd guess this is more true for the positive states than the negative ones?) if we merely glimpsed their inner lives directly. But the similarity is nonetheless sufficient for our taking their alien mental lives seriously, at least in some cases.

So, I suspect that phenomenal pain as we know it is strongly tied to the evolution of abstract thought, complex self-models, and complex models of other minds. But I'm open to non-humans having experiences that aren't technically pain but that are pain-like enough to count for moral purposes.

Replies from: davidpearce, Xodarap
comment by davidpearce · 2013-07-25T11:22:14.541Z · LW(p) · GW(p)

RobbBB, in what sense can phenomenal agony be an "illusion"? If your pain becomes so bad that abstract thought is impossible, does your agony - or the "illusion of agony" - somehow stop? The same genes, same neurotransmitters, same anatomical pathways and same behavioural responses to noxious stimuli are found in humans and the nonhuman animals in our factory-farms. A reasonable (but unproven) inference is that factory-farmed nonhumans endure misery - or the "illusion of misery" as the eliminativist puts it - as do abused human infants and toddlers.

comment by Xodarap · 2013-07-24T22:38:23.067Z · LW(p) · GW(p)

But I'm open to non-humans having experiences that aren't technically pain but that are pain-like enough to count for moral purposes.

I guess maybe I just didn't understand how you were using the term "pain" - I agree that other species will feel things differently, but being "pain-like enough to count for moral purposes" seems to be the relevant criterion here.

comment by Juno_Watt · 2013-07-24T18:49:47.484Z · LW(p) · GW(p)

Something About Sapience Is What Makes Suffering Bad.

A strong assertion of this principle can be found here.

comment by shminux · 2013-07-24T00:03:52.823Z · LW(p) · GW(p)

My other comment was downvoted below the troll level, so I'll ask here. Suppose we found a morphine-like drug which effectively and provably wireheads chickens to be happy with their living conditions, and with no side effects for humans consuming the meat. Would that answer your arguments about suffering?

Replies from: DanielLC, Desrtopa, RobbBB, Viliam_Bur, lsparrish, Solitaire, BarbaraB, Document, DxE, aelephant
comment by DanielLC · 2013-07-24T00:19:42.641Z · LW(p) · GW(p)

I'd be happy with that.

Until we do, I'm not eating meat.

comment by Desrtopa · 2013-07-24T02:22:41.543Z · LW(p) · GW(p)

Personally, my issues with eating meat are at least as much about ecological concerns as humane ones, but I would definitely be in favor of eating vat meat which can be cultured with minimal ecological impact.

comment by Rob Bensinger (RobbBB) · 2013-07-24T18:34:39.620Z · LW(p) · GW(p)

This is not at all an unrealistic possibility. It probably will be via gene knockout rather than a drug injection, if it happens. See Adam Shriver, "Knocking Out Pain in Livestock: Can Technology Succeed Where Morality Has Stalled?"

If this doesn't happen, it will probably be either because lab-grown meat ended up being cheaper to mass-produce, or because the people strongly pushing for animal rights were too squeamish to recognize the value of this option.

Replies from: Document, shminux, Juno_Watt
comment by Document · 2013-08-07T21:24:12.373Z · LW(p) · GW(p)

Previously discussed here at Overcoming Bias. (I also remember Michael Anissimov responding, but I can't find that.)

Also, you're certainly optimistic about advancing from chickens having a reduced experience of pain to their being indisputably proven to be happy with all aspects of their experience.

comment by shminux · 2013-07-24T18:48:20.467Z · LW(p) · GW(p)

Thanks, it's a great link. I didn't know it was possible to manipulate pain affect separately from pain sensitivity at the genetic level. I wonder how animal rights advocates react to this approach.

comment by Juno_Watt · 2013-07-24T18:40:18.057Z · LW(p) · GW(p)

I wouldn't hasten to describe them as confused. How about the modest proposal of growing acephalous humans for consumption? Is that too far down the slope?

Replies from: fractalcat, shminux, Ishaan
comment by fractalcat · 2013-07-29T12:22:27.838Z · LW(p) · GW(p)

Nitpick: 'anencephalic'. 'cephalon' is head, 'encephalon' is brain.

Replies from: Document
comment by Document · 2013-08-07T20:27:52.077Z · LW(p) · GW(p)

Given only the two options, I think I'd rather humans grown for consumption not have heads than have them.

comment by shminux · 2013-07-24T18:53:20.796Z · LW(p) · GW(p)

How about the modest proposal of growing acephalous humans for consumption?

Well, currently it's even prohibited for organ replacement, for knee-jerk reasons.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-07-24T19:29:32.506Z · LW(p) · GW(p)

My brain really, really, really wanted to read "knee jerky" there.
I wonder about my brain sometimes.

comment by Ishaan · 2014-01-06T14:03:16.699Z · LW(p) · GW(p)

Actually, I suspect (but am not certain, hence the questioning) that this falls in one of those areas where some humans genuinely differ from others with respect to morality.

I think it would be illuminating to hear individuals who think it is too far down the slope articulate 1) why they feel that way, 2) whether the objection goes away if it's for organs instead of food, 3) how they feel about early-term abortion and embryonic stem cells, and 4) whether it is morally okay to eat the corpse of a person who has died and given permission for their corpse to be eaten.

comment by Viliam_Bur · 2013-07-24T15:24:37.908Z · LW(p) · GW(p)

Uhm, somehow it feels even worse. I am not taking this feeling as a rational answer to the question, just as a warning that the topic may be more difficult than it seems. (One possible explanation is a Schelling point against using wireheading as a solution.)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-24T19:05:54.217Z · LW(p) · GW(p)

Are your intuitions captured by this short Sandel essay? Specifically, the fourth-to-last paragraph.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-24T19:27:34.117Z · LW(p) · GW(p)

Chickens like to roam, but most egg-laying hens are confined, frustrated, in small battery cages. Suppose we could alter the gene that makes chickens want to run free. The chickens, now content to be confined, would suffer less frustration, and egg production would improve. Or suppose we found a way to dumb down cows to eliminate the fear they experience on their way to the slaughter chute. Or to engineer pigs without hooves, snouts, and tails. Is there anything troubling about altering animals in these ways?

Interesting, but no.

My objection was based on imagining a chicken that is hurt physically, but doesn't care, because the morphine suppressed the pain. It was not in the comment, but I imagined that animals would be treated the same way as they are now (perhaps even worse, because if they don't react painfully, there will be even less sympathy for them), they just wouldn't subjectively suffer because of the morphine. That I find abhorrent.

If the chicken or other animals are just modified to be content with being in prison, and they are not harmed in any other way, I would be okay with that. Actually, I would consider that ethically better than them living in nature.

The article seems to me to be about not playing god, but I don't worship the blind idiot god. If it is okay for evolution to give tails to some animals and not give tails to other animals, it is no different if people add or remove tails genetically (assuming that kind of change does not harm the animal; for example, a pig without a tail could have trouble fighting off flies).

I also wouldn't have a problem with parents choosing a gender, height, or eye color for their children; I would only be concerned with crazy parents making choices that harm their child (for example, parents choosing some disability for their child, and the politically correct people protecting this choice to avoid offending existing disabled people). That would lead to a gray area of traits where there is no general agreement about whether they are harmful or not. But the true objection is against choosing harmful changes, not against changes per se.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-24T19:41:45.518Z · LW(p) · GW(p)

It sounds like you understand 'content' to mean 'pain-free and suffering-free', whereas I imagine it as something more like 'suffering-free'. I have masochist friends who are content (or far more than content) to experience pain, because of the positive valence they ascribe to (or associate with) that pain. How does your empathy for chickens that feel pain but don't care respond to human masochism?

The article seems to me about not playing god, but I don't worship the blind idiot god.

I think Sandel's argument is that we might have a basic, not-culturally-constructed anti-tampering moral intuition that doesn't depend on there being a God whose authority we are impinging on (any more than the thankfulness we feel when something spectacularly good happens in our life presupposes that there is a metaphysical being out there who is the Object Of Our Thanks). Which I don't find psychologically implausible, though if it's a harmful intuition and can't be brought into reflective equilibrium with our other moral intuitions then it might deserve suppression.

Replies from: fubarobfusco, Viliam_Bur
comment by fubarobfusco · 2013-07-25T23:29:31.186Z · LW(p) · GW(p)

I have masochist friends who are content (or far more than content) to experience pain, because of the positive valence they ascribe to (or associate with) that pain.

It's my impression that the typical masochist associates positive valence with pain only in certain circumstances. The person who enjoys being flogged by a lover may still dislike stubbing a toe every bit as much as the non-masochist.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-26T05:02:09.962Z · LW(p) · GW(p)

Yes. We can focus just on the instances of pain that are seen as desirable (or a matter of indifference). Or we can imagine a masochist who enjoys all pain. The analogy only depends on there being some possible instance of this; we then have to ask, if we permit this in the case of humans, why we would find it abhorrent in the case of chickens.

comment by Viliam_Bur · 2013-07-24T20:04:56.608Z · LW(p) · GW(p)

I have masochist friends who are content (or far more than content) to experience pain, because of the positive valence they ascribe to (or associate with) that pain. How does your empathy for chickens that feel pain but don't care respond to human masochism?

I don't have enough data about how masochism feels from inside, so I don't feel qualified to answer this. (I know about cases where people cause themselves pain to forget some other pain, physical or mental. I don't know if a typical masochist is like this, or completely different, and in the latter case, how specifically it feels from inside.)

Replies from: mare-of-night
comment by mare-of-night · 2013-07-26T00:42:50.372Z · LW(p) · GW(p)

I'm not exactly a masochist, but I suspect my perception of physical pain is a little wonky sometimes.

Example: I took a massage class in college once. The other student I usually worked with told me I tended to get really impressive knots in my shoulders, and I could tell it hurt a lot when they were worked on. I also remember not really minding most of the time, and getting bored when I didn't have many knots because the pain kept things interesting. (But uh, I do respond normally to pain in most circumstances, so if anyone reading this meets me in real life, please don't test it.)

The way this feels from the inside is that pain is just another sensation, like heat, cold or pressure. I suspect it would be similar for a non-suffering chicken, if done right. (Though I have no idea about some of the other changes that would be required, like feeling content in a small cage.) Maybe imagine if you put clothes on the chickens, and the chickens got used to the feeling of having fabric on them (a pressure sensation) and didn't mind it. I don't actually understand your objection to morphine chickens, though, so I'm not sure whether you'd consider this an acceptable solution.

comment by lsparrish · 2013-07-24T03:10:40.743Z · LW(p) · GW(p)

I agree with others that vegetarianism is likely more practical and addresses other concerns besides suffering. However, perhaps there is a lot of utility that could be gained in the near term by e.g. keeping factory-farmed animals high on morphine for most of their lives. Would this impose additional costs that keep it from being economical? One would be food purity -- people might not like morphine in their meat/eggs/milk.

Replies from: Nornagest
comment by Nornagest · 2013-07-24T03:46:49.584Z · LW(p) · GW(p)

Opiates quickly build tolerance. They're also not terribly cheap, although fully synthetic opioids like methadone would probably be more amenable to production at these scales than poppy derivatives like morphine.

comment by Solitaire · 2014-01-05T20:41:27.441Z · LW(p) · GW(p)

So basically a chicken version of The Matrix?

Replies from: BarbaraB
comment by BarbaraB · 2014-01-05T21:21:16.666Z · LW(p) · GW(p)

Sell this idea to Hollywood!

comment by BarbaraB · 2014-01-05T21:19:44.956Z · LW(p) · GW(p)

I would still feel sorry for the chicken.

However, I bet, somebody is working on it already.

In my previous research institution, they fed chickens grain mixed with sand, and afterwards cut their heads off and measured the concentration of happiness chemicals. The justification was that these chicken breeds gain weight too quickly, which kills them before their age of reproduction. However, the farmers need to reproduce some of them, so they have to keep them half hungry until they reach sexual maturity. The research was supposed to find out whether the starvation is more pleasant with sand mixed into the feed. Luckily for my mental health, I did not participate in this research. Ugh, these memories make me shudder!

Replies from: shminux
comment by shminux · 2014-01-06T02:01:40.934Z · LW(p) · GW(p)

First, I am not convinced that chicken brains are advanced enough to feel suffering (as opposed to pain) the way higher primates do. Second, breeding chickens for quick weight gain would probably not be considered very ethical to begin with, so this research seems like locking the barn doors after the horses have bolted.

comment by Document · 2013-08-07T21:19:47.853Z · LW(p) · GW(p)

Previously discussed here at Overcoming Bias. (I also remember Michael Anissimov responding, but I can't find that.)

I considered answering your question, but then realized it was directed at peter_hurford and I'd have to do a lot of reading to understand the context.

comment by DxE · 2013-07-24T16:36:35.256Z · LW(p) · GW(p)

"Suppose we found a morphine-like drug which effectively and provably wireheads NON-WHITE PEOPLE to be happy with their living conditions, and with no side effects for WHITE PEOPLE consuming their flesh."

Has a different sort of emotional impact, no?

Replies from: SaidAchmiz, wedrifid, shminux, Kindly
comment by Said Achmiz (SaidAchmiz) · 2013-07-24T20:13:38.572Z · LW(p) · GW(p)

This is a silly strawman, but I'll respond anyway, because why not.

The difference is that we (well, not me, but the OP and people who agree with him) only care about chickens to the extent that they are (allegedly) suffering, and we think it's not ok for them to suffer. On the other hand, we think that NON-WHITE PEOPLE (just like WHITE PEOPLE) have the right of self-determination, that it's wrong to forcibly modify their minds, etc.

comment by wedrifid · 2013-07-24T17:07:50.142Z · LW(p) · GW(p)

"Suppose we found a morphine-like drug which effectively and provably wireheads NON-WHITE PEOPLE to be happy with their living conditions, and with no side effects for WHITE PEOPLE consuming their flesh."

Has a different sort of emotional impact, no?

Mostly it sounds like you are calling all NON-WHITE PEOPLE chickens.

comment by shminux · 2013-07-24T18:57:18.923Z · LW(p) · GW(p)

Not sure why the parent is downvoted, it's an interesting question. Where does one build a Schelling fence for farming meat without suffering?

comment by Kindly · 2013-07-24T16:46:15.458Z · LW(p) · GW(p)

Yes, but yours is a statement about what things have an emotional impact, not about what's the right thing to do.

comment by aelephant · 2013-07-27T01:29:09.555Z · LW(p) · GW(p)

It is not the suffering per se that bothers me about factory farming. I'm having trouble finding the right words, but I want to say it is the "un-naturalness" of it. Animals are not meant to live their whole lives in cages pumped full of antibiotics. I also believe it is harmful to humans, both to the humans who operate these factories (psychologically) & to the humans that consume the product (physically).

On the other hand, it is natural for animals to eat other animals, and properly raised animal products are arguably one of the best sources of nutrition for humans. I also don't think raising chickens on an open farm & slaughtering them is psychologically harmful; I imagine those farmers feel deeply in tune with nature & at peace with their way of life.

Replies from: lavalamp
comment by lavalamp · 2013-07-27T01:35:44.512Z · LW(p) · GW(p)

Animals are not meant to live their whole lives in cages pumped full of antibiotics.

Meaning requires a mind to provide it. Animals are not "meant" to do anything...

comment by imaginaryphiend · 2013-07-25T03:22:36.859Z · LW(p) · GW(p)

I come at this from a perhaps unusual perspective: I raise Scottish Highland cattle, keep my own backyard chicken coop, and also enjoy the company of my family pets. I am also finding myself more and more sympathetic to the sentiments and reasoning of the vegan position when it comes to food politics.

My animals feel and interact socially. They have personal, unique characters - yes, even the chickens. They display emotions; they trust, empathize, grieve... They are fellow beings deserving of our care and compassion.

My 2000 lb bull likes to nuzzle and enjoys being brushed. If any of the bovines in my care see me with a pail, they anticipate a treat of grains and will come at a run. They will come when called and some even know their names. They enjoy being grass-fed on nice open pastures and in woods, with clean water, shelter, and even protection from predators, so I am quite confident that they have a better quality of life than the wild deer in the neighborhood.

My dogs, similarly, have a better life than the wild coyotes. The chickens have it pretty good too, in what amounts to a nursing home (coop) for aged chickens no longer holding up the egg-providing part of the bargain - but that's another story.

But... the kind of farming I am doing is not commercially viable. I look around at many of the other farmers I know, and I see that the only ones succeeding commercially are the ones growing bigger and engaging in the more economically rewarding (on short- to medium-term personal business horizons) practices of industrial farming.

The general consuming public wants its cake (conveniently packaged, cheap, sugar-coated, fat-saturated, and ready for them in air-conditioned mega food boutiques) and wants to eat it too. They want variety in and out of season, from wherever it can be sourced, and they'll buy it at the best price offered, regardless of the back story of how it got there.

The food industry/industrial complex is a business. It doesn't have a conscience. It has a bottom line. It tries, to some extent, to help create the demands of the general consumer, but mostly it just responds to consumer demand in ways that will best make it $$$$.

I've discovered, in trying to farm ethically, that if I'm not subsidizing my farm operation with outside income, and in effect thereby subsidizing my customers, then I can't afford to farm. Even selling directly to my customers, I cannot compete on price with the supermarkets. That's telling. Industrial, factory farming is the response to the demands of the general consumer: the indifferent, little-caring, hardly conscientious general consumer.

I would like to be able to say that the great masses of people can have what they want and be assured that animals will be treated ethically and humanely, with dignity and care, but I think the reality is that as long as people can maintain their ignorance about how things work, they will continue to consume without conscience - and the producers will do whatever it takes to survive and thrive in the very competitive and demanding business that is farming.

Maybe population pressures will drive us to better practices and vegetarian or vegan values will win out. I don't know, but I suspect that our generally omnivorous population will likely not change their ways as long as they can maintain their protective mix of ignorance, denial and indifference.

'If' the consumer can be offered tastier, more convenient, cheaper alternatives... But, of course, anyone trying to come up with those alternatives would have to compete openly with the powers that be, the established systems that many have vested interests in. Tough to fight the momentum of the way things are when many are fighting to keep things the way they are, and some are even fighting for their notions of how things used to be in 'the good old days'.

If you eat eggs or dairy or beef, lamb, chicken, pork, etc., and you don't know the particulars of the animals or animal products you are consuming, then you are likely contributing to the inhumane exploitation of animals in our factory-farming industrial-complex food supply system.

I have no simple solutions or grand ideas of how to change things. I'm just another voice in the conversation, with, hopefully, a perspective helpful to the ongoing narrative.


Replies from: Qiaochu_Yuan, Ishaan
comment by Qiaochu_Yuan · 2013-07-25T03:28:48.594Z · LW(p) · GW(p)

Please break this up into paragraphs.

comment by Ishaan · 2014-01-06T13:48:41.367Z · LW(p) · GW(p)

So, you are in the position of interacting with commonly eaten animals on a daily basis, you care about the animals enough to name them, and you're philosophically inclined...which means I have a question for you:

Having known these animals and having developed a relationship with them, and knowing that they have lived a better life than they would have in the wild, would you feel intellectually and emotionally comfortable killing and personally eating any of them for meat? What about selling them for slaughter? Have you ever done so?

(if your answers change depending on the species, please specify)

comment by Alicorn · 2013-07-24T07:23:02.539Z · LW(p) · GW(p)

Incidental: I don't care unusually much about evangelizing vegetarianism, but I happen to like to talk about food and most of what I know about it is vegetarianism-specialized, so if people are curious about practicalities I am happy to answer questions about what vegetarians eat and how it can be yummy.

Replies from: None, SaidAchmiz, MileyCyrus
comment by [deleted] · 2013-07-24T13:50:35.844Z · LW(p) · GW(p)

I'm interested! I became a vegetarian about 4 months ago, shortly after I started doing my own cooking. My abilities are basically limited to pasta, salads, mushrooms in sandwiches or tortilla wraps, and lots more pasta. YouTube videos have been my main source for learning recipes. I just haven't gotten around to searching for vegetarian-specific foods. What are some more options out there?

Replies from: Alicorn, jbay, AlexanderD
comment by Alicorn · 2013-07-24T23:21:18.967Z · LW(p) · GW(p)

Not to knock pasta (and I recommend my signature sauce, as well as putting artichokes through the blender and adding them to cream sauces for pasta), but I'm more of a soup fan. Bean soup, veggie soup (here's one way to do veggie soup), eggdrop soup, chowder (clam if you eat seafood, broccoli or corn if you don't), polenta leaf soup, miso soup.

There's also more things you can put in sandwiches besides mushrooms. I like Tofurkey, but even if you don't, here are things I put on bread (all of these things include cheese, but you could omit it if you aren't a huge fan of cheese):

  • Panfried tofu slices, spinach sauteed with cheese, hummus
  • Hummus, avocado, shredded cheddar, cucumber slices, sprouts, lettuce
  • Goat cheese, avocado slices, over-easy egg with dill and cayenne
  • Particularly copious amounts of cheese (melted), with optional hummus, avocado, onion slices
  • Fried zucchini and eggplant slices, avocado, hummus, fresh mozzarella
  • Minced garlic, basil leaves, fresh mozzarella

In most of the above cases I make the sandwiches open-faced, and fry them in butter to crisp them up (the last I put in the toaster oven with olive oil, and add the basil and mozzarella after they come out toasty).

Many veggies are lovely roasted. For pretty much all of them, you cut them into bites, put them on an oil-spritzed baking pan, and put them in a 400º oven for twenty minutes. This works for several kinds of squash, asparagus, broccoli, potatoes, etc. You can eat roasted veggies by themselves, or put them in omelets or your pasta or whatever.

I go on Foodgawker for inspiration. For advanced food-related fun, learn to deep fry things - I use my wok and spider skimmer, I don't usually bother with a thermometer and just flick little bits of whatever I'm cooking to see how it reacts, and then I filter the oil for reuse with paper towels and a funnel.

comment by jbay · 2013-07-24T15:39:03.911Z · LW(p) · GW(p)

I recommend getting familiar with chickpeas and tofu. They are both very cheap, very filling, and very nutritious (chickpeas in particular, once you learn how to reconstitute the dried ones). Experimenting with recipes that involve those ingredients is definitely a good idea. Learning to cook quinoa and rice is another helpful skill (wild rice is also nutritious and filling, and quinoa offers a complete protein). Working with those four ingredients and mixing in other vegetables, spices, mushrooms, sauces, etc will offer a very wide range of delicious and nutritious foods that you can make as a baseline.

You can also look into the dishes of different cultures that have vegetarian traditions. For example, Indian food has a very large range of interesting vegetarian dishes. So does Taiwan, and other strongly Buddhist-influenced cultures. In Japan, Buddhism-inspired vegetarian food is referred to as "Shojin-ryouri", so if you like Japanese food, you might look up some shojin recipes. Those are just some examples =)

comment by AlexanderD · 2013-07-24T19:14:16.352Z · LW(p) · GW(p)

Tofu is a good choice, and can be used in many ways. One secret to tofu is to pay attention to the amount of water in the tofu, as that seriously changes the way it tastes, feels, and acts in dishes. For example, when you are making a stew with tofu, such as the spicy and delicious Korean soup kimchi jjigae, you probably want to choose silken tofu, which is soft and will interact well with the rich broth. But if you are making something like McFoo, a tofu sandwich where you marinate the tofu in select spices until it tastes like junk food, then you want a firm and chewy tofu. You can achieve the latter by pressing your tofu for an hour (there are special tools for this, but a towel, cutting boards, and a brick do just fine). You can make it even firmer and more textured by freezing it first, so most of my tofu goes right into the freezer until I need it.

There are also a few veg-specific things that you almost certainly have never had, such as TVP: textured vegetable protein. Despite the unappetizing sci-fi name, it's actually an amazing thing to include in your diet. The trick to learning to love and use it is not to make the sad mistake of just pretending it's meat. Most fake meat things don't taste anything like meat, but instead have a rank and lingering chemical taste and overwhelming profile of salt and sugar, as they try to mimic what you might have liked about meat. TVP and other decent meat substitutes are different, and they just taste good without trying to taste like meat. So TVP chili is hearty and rich and has a great mouthfeel, giving you that chewiness and resistance that's part of what makes meat good, but it doesn't try to ape meat.

Other things you can make: veggie shepherd's pie (lentils and veggies for the filling), pumpkin mac and cheese (add shredded pumpkin when making mac and cheese; if you use a sharp cheese the tastes blend amazingly), filo-wrapped spinach and veggies (you can buy prepared filo dough), loaded baked potatoes, pizza, calzones, quiches, grilled cheese and chard sandwiches, and lots of variations on curries and stews and things.

comment by Said Achmiz (SaidAchmiz) · 2013-07-24T15:26:40.441Z · LW(p) · GW(p)

Do you eat eggs and dairy?

If you do not, then question: what is the best non-eggs/dairy solution to desserts? That is, what would you substitute in e.g. pastry cream, whipped cream, meringue, cakes, pastry dough, etc.? Is there some general solution, or is it handled on a case-by-case basis?

(If you do eat eggs/dairy, disregard this question.)

Replies from: Allison_Smith, Alicorn
comment by Allison_Smith · 2013-07-24T17:13:50.385Z · LW(p) · GW(p)

I am not Alicorn, but I also like talking about delicious food and I do not eat eggs and dairy. Unfortunately, there is no general solution to the egg/dairy substitution problem, especially for the eggs end of it.

There are some things I just don't try to adapt: meringue, pastry cream, and whipped cream fall more-or-less into this category. I have had delicious dairy-free versions of whipped cream that seem to have been based on the fatty part of coconut milk, but I haven't made any myself.

There are some substitutions that are easy and consistent. In baking cakes, cookies, and similar things, you can usually use any unsweetened soy or nut milk 1:1 for milk, and use margarine in place of butter, or mild flavored vegetable oil in place of melted butter. It is easiest to get good results if your recipe is for spice or chocolate cake, or is otherwise meant to taste like something other than butter, as even the best non-dairy butter substitutes do not taste quite like the real thing. Eggs are a slightly harder thing to substitute for, so for a really easy experience, go for a recipe that does not use them; sometimes these are "light" cakes or recipes written when food was expensive or rationed.

Eggs, even in baking where they are non-obvious in the final product, can be tricky to substitute for because they do so many things. If the eggs are mainly adjusting the consistency of the batter or dough, you can substitute for 1 egg with 1/4 cup of soft silken tofu, applesauce, or soy yogurt, or anything of a similar texture that you think would taste good. If I expect the egg to actually do some work in helping the rising process, I use 1/4 cup of the liquid from the recipe or of soy milk, plus 1 Tbsp ground flaxseed or 1 tsp ground psyllium husk. If the recipe calls for more than 1 or 2 eggs, I re-evaluate whether I want to use it (things that are supposed to get flavor from eggs, or that use eggs in complicated ways, like with yolks and whites separated, are beyond my skill level to adapt), and if I still want to, I use some combination of the substitutions available to me, to avoid the food tasting heavily of flax or applesauce when I didn't intend that.

Replies from: SaidAchmiz, thomblake
comment by Said Achmiz (SaidAchmiz) · 2013-07-24T18:51:41.964Z · LW(p) · GW(p)

Thank you for your response!

I was, in fact, largely thinking of recipes where the butter, eggs, cream, etc. are doing a lot of the flavor and texture work. It sounds like that's something that is lost in an eggs/dairy free diet. This is valuable information.

Next question: would you be able to recommend a good source of dessert recipes that make the most of veg*an limitations on ingredients (rather than attempting to imperfectly substitute for eggs/dairy/etc.)?

(My motivation for these questions, by the way, is that I regularly bake desserts for my friends, and I'd like to be able to make sure that any people of my acquaintance who have veg*an dietary limitations don't feel left out.)

Replies from: Allison_Smith
comment by Allison_Smith · 2013-07-24T21:59:51.012Z · LW(p) · GW(p)

There seem to be a lot of vegan dessert cookbooks out there these days, but of course they are of varying quality. My personal favorites are by Isa Chandra Moskowitz; the link goes to the Desserts category of her blog, so you can see if you like her style.

One really specific recipe that I found surprising, in terms of successfully replacing a food that depends heavily on dairy, is this chocolate mousse. The other creamy food it is easy to successfully replace milk in is pudding; a blancmange (aka Jello cook'n'serve) will work fine with soymilk or with a thick enough nut milk. (Rice milk in particular is thin enough that you have to adjust the ratios or cooking time to get it to set properly.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-24T23:13:07.937Z · LW(p) · GW(p)

Thanks for the links, I will check them out!

Glancing quickly at the chocolate mousse recipe, something occurred to me: how do you deal with vegan ingredients being more expensive than non-vegan ones? For instance, vegan chocolate is way pricier around here than regular chocolate. Maple syrup is VERY expensive (is imitation syrup vegan?).

Replies from: Allison_Smith
comment by Allison_Smith · 2013-07-25T16:19:49.228Z · LW(p) · GW(p)

I tend to figure that price increase on individual ingredients is compensated for by the fact that avoiding animal products encourages me to buy food in an earlier state of processing, which tends to be less expensive. Also, some aspects of a vegetarian or vegan diet are less expensive than the alternative; for instance, protein from dried beans is often cheaper than protein from meat. I have never found groceries a problematically large portion of my budget.

I think imitation syrup is usually high fructose corn syrup with colors and flavors added, so in most cases it is probably vegan. I'm not sure it would taste good in this recipe, but you could experiment.

comment by thomblake · 2013-07-24T17:40:01.434Z · LW(p) · GW(p)

The last category you mention is basically "eggs used as an emulsifier" - so other emulsifiers should also work.

comment by Alicorn · 2013-07-24T22:59:13.079Z · LW(p) · GW(p)

I do eat eggs and dairy - and lots of 'em - but I have a really good vegan chocolate cake recipe which I will paste below. Churros are also vegan and delicious, and they're not really hard to make if you know how to deep-fry. Direct substitution for dairy ingredients is mostly disappointing, although coconut products can do some neat things and coconut oil often substitutes straight across with butter.

1 1/2 c flour
1 tsp baking soda
1 c sugar
1/4 c cocoa or carob powder
1/2 tsp salt
1 tablespoon white vinegar
1 tsp vanilla
1/3 c canola oil
1 c water

Preheat oven to 350º. Mix the dry ingredients in an 8" square pan. Add the wet ingredients and stir well, making sure the edges and corners of the pan are not omitted. When the batter is smooth and incorporated, bake for 30 minutes or until a toothpick inserted in the center comes out clean.

comment by MileyCyrus · 2013-07-24T20:05:22.559Z · LW(p) · GW(p)

What vegetarian things can I eat that won't leave me hungry an hour later?

Replies from: MTGandP, Alicorn, TabAtkins, Jabberslythe
comment by MTGandP · 2013-07-24T23:11:14.792Z · LW(p) · GW(p)

I don't find that this is ever a problem for me. YMMV, but I'd suggest eating calorie-dense foods such as nuts, beans, grains, and fatty foods.

This LiveStrong article has a sample meal plan:

A meal plan that provides over 3,000 calories begins with 1 cup of cooked quinoa topped with 1/4 cup raisins, 1 oz. toasted almonds, 1 tbsp. honey and 1 cup hemp milk. For lunch, have 2 cups of whole wheat pasta tossed with 1 tbsp. olive oil, 1 cup white beans and sautéed kale. Dinner might include 6 oz. of firm tofu stir fried with broccoli, soy sauce, 1 oz. cashews and served over 1 cup brown rice. Snack on a peanut butter and jelly sandwich, made with whole grain bread, 2 tbsp. peanut butter and all-fruit spread; ½ cup of granola with soy milk and a smoothie made by blending a frozen banana with mangos, flaxseed meal, almond butter and coconut water.

comment by Alicorn · 2013-07-24T22:52:34.335Z · LW(p) · GW(p)

I don't have this problem with most any food so I'm not sure what exactly might cause it, but if you find that you have this problem with vegetarian food and not with meat, I'd try heavy stuff like cheese omelets, preferred unmeats with nice sauces on them, maybe bean stew.

comment by TabAtkins · 2013-07-27T15:32:17.392Z · LW(p) · GW(p)

If you're having issues with your hunger response, it's almost certainly because you've simply eliminated meat from the meal, without replacing it with something nutritionally equal. Your hunger response is mediated by a number of food chemicals, which you've likely never had to notice before because meat provides the appropriate ones automatically.

Solving it is easy - just eat protein (nuts, beans, etc.) and fat (nuts, oil, peanut butter, etc.). That'll hit you with the right stuff to replace what you were getting from meat, and keep your stomach's brain happy because it's receiving the right chemicals.

People too often think vegetarianism is just a light salad at every meal. >_<

comment by Jabberslythe · 2013-07-26T20:15:38.083Z · LW(p) · GW(p)

It could be that the vegetarian stuff you are eating doesn't have much protein in it, or that the protein source doesn't have all the amino acids. There is certainly vegetarian stuff that does have these things; it just takes more knowledge and meal design than for meat diets.

Protein powder can also be helpful for vegetarians (and everyone). I recommend pea protein powder.

comment by novalis · 2013-07-23T22:10:51.279Z · LW(p) · GW(p)

This essay's thesis is that we should eat less meat, but its evidence is only that factory-farmed meat is a problem.

Most (but not all) of the meat I eat is not factory-farmed. The coop where I buy my meat says (pdf) that it buys only "humanely and sustainably raised" meat and poultry ... from animals that are free to range on chemical-free pastures, raised on a grass-based diet with quality grain used only as necessary, never given hormones and produced and processed by small-scale farmers." (For eggs, the coop does offer less-humane options, but I only buy the most-humane ones).

I might stop eating most of the factory-farmed meat that I eat. It would simply mean never eating out at non-frou-frou places. The exception would be dealing with non-local family (for local family, I could simply bring meat from the coop to share).

That said, it's hard to know when a restaurant is serving humanely raised meat. It seems like it would be nice to have a site where I could type in a restaurant's name, and find out who their suppliers are and what standards they adhere to. For the vast majority of restaurants, the answer would be that they just don't care. But, at least in NYC, it's common for foodie sorts of restaurants to list their suppliers. My favorite restaurant, Momofuku, for instance, sometimes specifically lists that some dish's meat is from e.g. Niman Ranch. Niman Ranch claims to raise their animals humanely. Do they really? And such a site would increase the pressure on restaurants to choose humane suppliers.

Replies from: Xodarap, peter_hurford
comment by Xodarap · 2013-07-23T22:57:34.233Z · LW(p) · GW(p)

Niman Ranch claims to raise their animals humanely. Do they really?

The shareholders of Niman Ranch voted to reduce their standards to increase profits. As a result, Bill Niman (who originally founded the company) now refuses to eat their products. Wikipedia has more.

comment by Peter Wildeford (peter_hurford) · 2013-07-23T22:28:20.581Z · LW(p) · GW(p)

This essay's thesis is that we should eat less meat, but its evidence is only that factory-farmed meat is a problem.

I only think factory-farmed meat is the problem. I use "eat less meat" as a shorthand, since nearly all meat is factory-farmed meat.

~

The coop where I buy my meat says (pdf) that it buys only "humanely and sustainably raised" meat and poultry

I definitely agree it's better to buy "humanely raised" meat and poultry than not "humanely raised" meat/poultry. And perhaps you have found a trustworthy source.

But be careful of why I put "humanely raised" in quotes -- many such operations are not actually humane. Cage-free is much better than not cage-free, but conditions are still pretty bad. Free-range is better than not free-range, but just legally requires the animal be allowed to stay outside. There are no legal restrictions on the quality of the outside section, how long they can stay outside, or crowding. Vegan Outreach has more information.

~

I might stop eating most of the factory-farmed meat that I eat. It would simply mean never eating out at non-frou-frou places. The exception would be dealing with non-local family (for local family, I could simply bring meat from the coop to share).

That sounds like an excellent idea!

Replies from: novalis, Larks, PeerGynt, magfrump, Larks
comment by novalis · 2013-07-23T23:04:32.694Z · LW(p) · GW(p)

I was going to ask what you thought about http://www.certifiedhumane.com/ but it is completely fucking useless: "The Animal Care Standards for Chickens Used in Broiler Production do not require that chickens have access to range." So nevermind.

So instead I'll ask why a meaningful set of standards doesn't exist. http://www.globalanimalpartnership.org/ Step 5, maybe? Their web site sucks, because it doesn't give me a searchable list of products, but maybe they just need some help.

Anyway, this seems like it would be a way more effective thing for EAA to do than just about anything else -- I bet lots more people would be willing to pay more for meat, than would be convinceable to eat less meat directly.

Replies from: MTGandP
comment by MTGandP · 2013-07-24T00:07:19.920Z · LW(p) · GW(p)

Anyway, this seems like it would be a way more effective thing for EAA to do than just about anything else -- I bet lots more people would be willing to pay more for meat, than would be convinceable to eat less meat directly.

That sounds like it could be a good idea. One immediate problem I see with this is that most consumers wouldn't be able to distinguish EAA's label from the dozens of nearly-meaningless labels such as "Free Range", "Cage-Free", etc.

Replies from: novalis
comment by novalis · 2013-07-24T00:37:07.653Z · LW(p) · GW(p)

It would take a serious marketing campaign. But Givewell seems to be increasingly popular -- they would probably promote a well-designed program.

comment by Larks · 2013-07-24T09:59:22.307Z · LW(p) · GW(p)

How much of that is US-specific? According to defra:

Stocking rate in the house is as follows: ... Chickens = 13 birds but not more than 27.5 kg live weight per m²;

"the birds have had during at least half their lifetime continuous daytime access to open-air runs, comprising an area mainly covered by vegetation, of not less than:

1m² per chicken or guinea fowl (in the case of guinea fowls, open-air runs may be replaced by a perchery having a floor space of at least that of the house and a height of at least 2m, with perches of at least 10 cm length available per bird in total (house and perchery)).

2m² per duck

4m² per turkey or goose

source

comment by PeerGynt · 2013-07-24T00:05:03.186Z · LW(p) · GW(p)

I only think factory-farmed meat is the problem. I use "eat less meat" as a shorthand, since nearly all meat is factory-farmed meat.

Factory-farmed meat converts photosynthetic energy (grass) to food much more efficiently than free-range farming. Factory farming requires fewer inputs in terms of arable land and water, and emits less CO2. If everyone in the world ate non-factory-farmed meat, we would have to cut down the Amazon many times over, thereby drastically reducing Earth's capacity to convert CO2 back to carbohydrates.

When you decide whether your meat should be factory-farmed or not, there are consequences on two scales that are negatively correlated: animal welfare and global warming. Which of these scales you give most weight to will depend on your prior for anthropogenic global warming, on your beliefs about the consequences of global warming, and on the priority you give animals in your aggregation scheme over individuals with moral standing.

Replies from: Douglas_Knight, peter_hurford
comment by Douglas_Knight · 2013-07-24T14:39:33.202Z · LW(p) · GW(p)

Modern farming techniques are designed to minimize labor, especially managerial labor, not energy.

Factory-farmed meat converts photosynthetic energy (grass) to food much more efficiently than free-range farming.

Factory-farmed animals don't eat grass. This is a really important detail.

comment by Peter Wildeford (peter_hurford) · 2013-07-24T06:10:12.982Z · LW(p) · GW(p)

I hadn't considered that. Do you have any sources for your claims?

Personally, I don't eat meat of any type, so this wouldn't be a problem for my diet.

Replies from: PeerGynt
comment by PeerGynt · 2013-07-24T15:37:15.857Z · LW(p) · GW(p)

A source is "Allison, Richard. “Organic chicken production criticised for leaving a larger carbon footprint.” Poultry World. 1 Mar. 2007". This article is behind a paywall. I am pasting a table from the article:

AVERAGE ENVIRONMENTAL IMPACT FROM POULTRY PRODUCTION (% DIFFERENCE TO CONVENTIONAL)

                                   Organic    Free range
Energy use                          +33%       +25%
Global warming potential (CO2)      +46%       +20%
Eutrophication potential            +75%       +28%
Acidification potential             +52%       +33%
Pesticide use (dose/ha)             -92%       +12%

Note: This article is in a trade publication and could be biased. It is based on an original report which I could not locate, and which apparently has sparse data. Obviously, more research is needed.

My prior beliefs are not the result of scientific studies, but follow from the following observations:

(1) To reduce global warming, we need to maximize the number of calories produced per unit of CO2 emission

(2) The most effective way we can alter that ratio is by reducing the amount of biochemical energy used to power the biochemical processes of farm animals over the course of their lives. This is primarily a function of the duration of their lives.

(3) Factory farming achieves a shorter duration by having the animals grow more quickly.

(4) Another way we can reduce the net amount of CO2 produced per unit of food is by reducing the amount of land used, thus allowing less deforestation.

(5) Factory farming achieves this by using less land

(6) I cannot see any other mechanisms that differ between factory farming and organic farming which would have a major net effect on the carbon cycle.

comment by magfrump · 2013-07-26T21:02:04.244Z · LW(p) · GW(p)

How much value would this conversion have relative to vegetarianism?

For example, I recently changed to only buying grass-fed beef (in part for health/taste reasons); how much humane value would you think that has relative to replacing my beef with whatever else?

What about replacing eggs with cage free or free range eggs versus a vegan replacement?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-07-26T21:12:55.467Z · LW(p) · GW(p)

The value seems obviously positive, but it's very unclear what the exact value is. On a pure suffering-per-kg-of-meat basis, whatever you're doing with chicken welfare is going to dominate. I expect cage free / free range to be moderately better than still eating regular eggs, but not eating eggs to be perhaps like 3x better, relatively.

comment by Larks · 2013-07-24T09:54:23.695Z · LW(p) · GW(p)

There are no legal restrictions ... how long they can stay outside

That's not quite what the source (beware, unpleasant images) says:

No other requirements - such as ... the amount of time spent outdoors ... are specified

An animal that is allowed to go outside every day, but always chooses not to, satisfies the latter but not the former. Which is actually the case? Moreover, these are very morally distinct! People who are forced to stay in a cell are prisoners; people who choose to stay in a cell are recluses.

comment by Lumifer · 2013-07-24T17:42:47.451Z · LW(p) · GW(p)

Thus, by being vegetarian, you are saving 26 land animals a year

I don't quite understand in which sense the word "save" is being used here.

It seems to me that an equivalent statement would be "After a short period of adjustment, you being a vegetarian would result in 26 land animals not existing any more (as in, not being born)".

In the ultimate case of everyone becoming a full vegetarian, domestic animals raised for meat would become endangered species. I don't think that counts as "saving".

Replies from: Swimmer963, Lukas_Gloor, peter_hurford, MugaSofer
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-24T20:20:29.370Z · LW(p) · GW(p)

I agree with you on the technicality -- it's a weird use of the word "save". Philosophically I agree with the original poster. As an individual who can suffer, I would prefer to not exist (edit: not have existed in the first place) than to live my life in a factory farm.

Replies from: Lumifer
comment by Lumifer · 2013-07-24T20:45:37.573Z · LW(p) · GW(p)

As an individual who can suffer, I would prefer to not exist than to live my life in a factory farm.

Are you willing to make that choice for others?

If you see a creature living in a factory farm and have an opportunity to save it from the rest of its existence, will you kill it?

Replies from: Swimmer963, DanielLC
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-24T20:54:05.955Z · LW(p) · GW(p)

will you kill it?

Whoa. I didn't say that if I were living in a factory farm, I would prefer to be killed. I might, and I might seek suicide, but that's a hard choice, because the will-to-live-above-all-else exists and is quite strong (for good evolutionary reasons). Also, approaching death is scary = suffering. So no, I wouldn't make that choice for another person, if I couldn't communicate with them and ask. If I could ask them, I'm not sure.

(This is a situation I've imagined myself in, i.e. if I have a patient someday who is able to convince me that they have made a rational decision that they want to commit assisted suicide. I can't model myself well enough to know what I'd do in that situation either.)

An individual that doesn't exist in the first place, e.g. because of better birth control or because fewer animals are farmed for food, doesn't exist to have to make a choice; at least that's how I see it. I could conceive of people thinking these are philosophically the same situation, but I strongly think that they aren't.

Replies from: Lumifer
comment by Lumifer · 2013-07-24T21:06:46.809Z · LW(p) · GW(p)

I didn't say that if I was living in factory farm, I would prefer to be killed.

To quote you to you, "I would prefer to not exist than to live my life in a factory farm."

That's a pretty unambiguous statement. Maybe you want to modify it?

EDIT: Ah, I see you modified it. But that's not really a choice: the past is fixed. It's only an expression of a wish that the past were different. And, of course, if it were realized there would be no you to make the choice...

Replies from: Nisan, Swimmer963
comment by Nisan · 2013-07-26T02:45:20.601Z · LW(p) · GW(p)

An agent can have a preference to never have existed, operationalized as a tendency to act in such a way that agents that act that way are less likely to come into existence; e.g., if agent A creates agent B because A believes B will do X, and if B does not want to have existed, then B could refrain from doing X for that reason.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-24T21:10:04.290Z · LW(p) · GW(p)

I went back and edited it. I personally thought it was ambiguous, tending in the direction of 'not exist' = 'never have existed in the first place', as opposed to 'stop existing'. Illusion of transparency, etc.

comment by DanielLC · 2013-07-26T04:03:51.946Z · LW(p) · GW(p)

Are you willing to make that choice for others?

As opposed to what? I can't not make a choice. I can either buy meat, and choose for them to live a painful existence, or not buy meat, and choose for them not to. It's not as if I can offer them the opportunity to go back in time and kill their own grandfathers and make the choice for themselves.

Replies from: Lumifer
comment by Lumifer · 2013-07-26T04:10:22.783Z · LW(p) · GW(p)

A simple example of making a choice for others is making meat consumption illegal.

However, this particular question was based on Swimmer963's comment before editing, which I understood as preferring suicide to living in a factory farm. If so, making a choice for the other implies killing the other (animal) so that it does not continue to suffer on the farm.

Replies from: DanielLC
comment by DanielLC · 2013-07-26T05:31:12.738Z · LW(p) · GW(p)

I'm fine with euthanasia. I don't think failing to eat meat causes it, though.

comment by Lukas_Gloor · 2013-07-24T17:52:16.239Z · LW(p) · GW(p)

I think Peter is concerned about individual animals and not about the abstract/semantic fence we draw around some of them, labelling it their "species". But you're right to point out that the word "save" is used in a very unusual way. If we're talking about factory-farmed animals, abstaining from consumption prevents the existence of individual beings that live short and miserable lives with slaughter at the end. Whether we call this "saving" or not, I regard it as something I want to be done more often in the world.

Replies from: Lumifer
comment by Lumifer · 2013-07-24T18:02:59.141Z · LW(p) · GW(p)

...concerned about individual animals

Becoming a vegetarian does absolutely nothing for already-born individual animals. All the impact is solely through potentially reducing future demand.

prevents the existence of individual beings that live short and miserable lives with slaughter at the end.

Well, this is a longstanding philosophical question with a lot of debate about it. Effectively it's a question about the value of not-existence.

Off the top of my head let me point out some outstanding issues here.

First, apply this to humans. Take a fetus with a genetic condition which guarantees that the child, if born, will live a short and miserable life. But he'll live, for some time. Does that justify an abortion?

Second, what are your criteria for "short and miserable"? From certain points of view the lives of most humans here on Earth are "short and miserable".

Third, if you think that some kind of life is worse than non-existing, then the implication is that the creature leading such a life will suicide as soon as he/she/it is able to. That's a pretty high bar for "worse than non-existing".

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-24T18:36:04.993Z · LW(p) · GW(p)

Right, I meant individual (potential) animals existing in the future.

I don't think suicide is a good indicator here because we could imagine an evil scientist designing a mind that will constantly experience the worst feeling ever but that still wants to continue to go on living. Is it wrong to not prevent such a mind from being turned on? I strongly think so. Also note that evolution might function a bit like the evil scientist in the hypothetical, because evolution is all about optimizing gene-copying success and not about the well-being of individuals.

I think most people would agree that some forms of existence are worse than not being born, especially if they imagine the torture as vividly as they can. (I think someone linked to footage from factory farms somewhere in this comment section.) An interesting question is whether there is a duty to procreate, assuming that beings will live lives that overall go well. I don't think factory farms qualify for that, even if we set the bar pretty low, but if they did, the evaluation would come down to population ethics, where things get messy because it has been proved formally that no possible solution fulfills some seemingly very conservative adequacy conditions.

Replies from: Lumifer
comment by Lumifer · 2013-07-24T18:57:44.167Z · LW(p) · GW(p)

I think most people would agree that some forms of existence are worse than not being born

They are not likely to agree on which ones, though.

This also brings to mind an image of a very well-meaning fellow sneaking into kitchens of slave compounds (in ancient Greece, or XVIII-century US, or elsewhere -- take your pick) and adding to the food a drug which makes everyone who consumes it permanently sterile.

if they imagine the torture as vividly as they can

:-)

Replies from: MugaSofer
comment by MugaSofer · 2013-07-29T07:34:05.702Z · LW(p) · GW(p)

They are not likely to agree on which ones, though.

And from this you conclude ... ?

This also brings to mind an image of a very well-meaning fellow sneaking into kitchens of slave compounds (in ancient Greece, or XVIII-century US, or elsewhere -- take your pick) and adding to the food a drug which makes everyone who consumes it permanently sterile.

To be clear, you would condemn this, would you not?

if they imagine the torture as vividly as they can

:-)

Not as bad as they can imagine; the correct level of torture, but imagined as vividly as possible.

comment by Peter Wildeford (peter_hurford) · 2013-07-24T21:16:49.444Z · LW(p) · GW(p)

"Save" as in "saved" from a life of suffering.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T00:10:16.073Z · LW(p) · GW(p)

Oh dear. You know what this sounds like? It sounds like the idea of going around killing babies because babies are pure and innocent and go straight to heaven, while if allowed to grow up there's a chance they'll end up in hell FOR ETERNITY. So it's much better to save them by killing them while they're still innocent babies.

I think there were a few actual people who did this and claimed this line of reasoning as their justification.

Replies from: SaidAchmiz, MTGandP, hylleddin, Desrtopa
comment by Said Achmiz (SaidAchmiz) · 2013-07-25T02:26:00.078Z · LW(p) · GW(p)

But the reason we find such reasoning (about the babies) repugnant is that we don't actually believe that heaven and hell are real. Even the majority of people who profess to hold these beliefs actually don't. So what we see is just babies being killed.

But what if heaven and hell were real things? What if it were the case that some or most people go to hell and suffer eternally, and we knew this, for a completely reliable fact, and also knew that murdered babies go to heaven? Don't you think our view of things might be different?

Replies from: Desrtopa, Lumifer
comment by Desrtopa · 2013-07-25T18:03:53.086Z · LW(p) · GW(p)

I think it's less that they don't believe heaven and hell are real, and more that heaven and hell are such far concepts that they don't generally intrude on regular believers' reasoning.

Most people believe in starving children, but they generally don't incorporate that knowledge into their day to day behavior even in situations where it would be relevant.

comment by Lumifer · 2013-07-25T02:32:59.209Z · LW(p) · GW(p)

But what if heaven and hell were real things? What if it were the case that some or most people go to hell and suffer eternally, and we knew this, for a completely reliable fact, and also knew that murdered babies go to heaven? Don't you think our view of things might be different?

It so happened (I'm running on memory and too lazy to google these things up) that people who did this lived in a society where Catholicism was the dominant religion and more or less universally believed. Notably, that didn't stop anyone from arresting these people and prosecuting them for murder.

Of course Catholicism is very far from utilitarianism.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-25T02:43:02.841Z · LW(p) · GW(p)

Yes; the people doing the arresting and the prosecuting are unlikely to be literal believers. The afterlife is just not a concept for which we get any support in our experience; and in general, most people do not take their religions sufficiently seriously to integrate the implications of their religions' supernatural teachings into their belief systems.

There's a reason why one relatively-common path to nonbelief is that of the intellectual who attempts to take their religion seriously and finds it horrifying and incoherent.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T02:45:50.556Z · LW(p) · GW(p)

the people doing the arresting and the prosecuting are unlikely to be literal believers.

I don't think so. In particular, I can't see why a literally believing Catholic would not arrest and prosecute these people for murder.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-25T02:52:56.122Z · LW(p) · GW(p)

You're right, that is a good point. I often forget that most Christians (and indeed most people) are not consequentialists.

I guess I should have said:

What if we, here at lesswrong, or elsewhere where consequentialists hang out, knew that heaven and hell were real and etc. etc.? THEN would our views be different?

Replies from: Lumifer
comment by Lumifer · 2013-07-25T03:02:29.460Z · LW(p) · GW(p)

There is a reason why most religions are not consequentialist and usually include strong prohibitions on suicide.

A society of consequentialists who fully believe in Christian heaven and hell would probably start by killing all (innocent) children and then commit suicide at the moment which maximizes their chances of getting into heaven. Thus after a single generation such a society would cease to exist.

comment by MTGandP · 2013-07-25T03:06:25.300Z · LW(p) · GW(p)

If Christianity were true, wouldn't this be the most ethical thing to do? It's only wrong because Christianity is not true.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T03:12:47.733Z · LW(p) · GW(p)

If Christianity were true, wouldn't this be the most ethical thing to do?

Careful :-) If Christianity were true this would NOT be an ethical thing to do since Christian morality is not utility-maximizing.

Replies from: MTGandP
comment by MTGandP · 2013-07-25T03:20:21.419Z · LW(p) · GW(p)

Hmm, fair point. If everything about Christianity is true then you're right. I suppose I was presuming that the Christian facts are true, but not the Christian values.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-25T03:33:36.248Z · LW(p) · GW(p)

I think that this assumption is warranted because I don't know what it would even mean for Christian values to be "true". If by morality being true you mean that those who disobey will be punished, then yes, but that doesn't mean that disobeying is wrong according to your personal utility function. And as far as moral realism goes, I don't see how a world where it holds would be different from a world where it doesn't. Even if there is some moral law written into the fabric of the universe, it would still be a further question whether you decide to care about it. So I agree with your original comment: if the heaven-and-hell version of Christianity were true, killing babies would be a very altruistic thing to do.

Replies from: MTGandP, Lumifer
comment by MTGandP · 2013-07-25T04:29:11.323Z · LW(p) · GW(p)

Thank you for saying this! I thought something along those lines but I couldn't think of how to put it.

comment by Lumifer · 2013-07-25T03:41:53.502Z · LW(p) · GW(p)

I don't know what it would even mean for Christian values to be "true".

That's pretty simple. If God -- specifically the Christian God -- truly exists and the general teachings of Christianity adequately represent Him, then Christian morality is basically like physics: it's the natural law because that's how the world has been constructed and that's how it works. Your personal utility function is irrelevant in the same sense in which maybe you really want to fly, yet if you jump off a cliff gravity will still do its job.

That's quite standard theology -- since LW frequently discusses religious matters (without usually recognizing them as such :-D) I recommend at least some familiarity with it...

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-25T03:50:06.228Z · LW(p) · GW(p)

My claim is that this idea of morality being "like natural law" is conceptual nonsense. Does it mean that I'd be incapable of violating it? No. Does it mean that I'm being irrational if I don't follow it? Well, one could define rationality that way, but what is the point; I don't care if I'm losing according to the "law of the universe" just like I don't care that I'm losing in terms of reproductive success.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T04:03:23.155Z · LW(p) · GW(p)

My claim is that this idea of morality being "like natural law" is conceptual nonsense.

I don't think so. "Like natural law" here means "will lead to certain consequences regardless of whether you believe so".

If Christianity were true (we'll ignore a bunch of issues and self-contradictions here) then, for example, dying after committing a mortal sin and without proper repentance will lead you to Hell. It doesn't matter whether you think it wasn't a big deal -- you still end up in Hell. Having free will means you can violate Christian morality, but it's similar to jumping off a cliff -- you will just make a messy splat. Whether you care or not is irrelevant.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-25T04:09:34.750Z · LW(p) · GW(p)

That's not what I was disputing. Of course you'd end up in hell if Christianity is true, but if your personal utility function is utilitarianism, you'd sacrifice yourself for the greater good.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T04:18:16.672Z · LW(p) · GW(p)

you'd sacrifice yourself for the greater good

No, because if Christianity were true then God defines what is the "greater good", not you. Your belief that what you're going for is "greater good" would be mistaken. Because of free will you can choose between good and evil but you can't define what is good and what is evil.

Replies from: nshepperd, Lukas_Gloor
comment by nshepperd · 2013-07-25T17:04:52.620Z · LW(p) · GW(p)

Speakers use their actual language.

To elaborate: Even if counterfactually God were to legislate that all must use the holy dictionary that defines the symbol "greater good" as divine punishment, or face smiting, this would not make the sacrifice mentioned by ice9 not for the greater good. Because what we are talking about when we say "greater good" here, in the real world, is not divine punishment. Specifically, when ice9 says "you'd sacrifice yourself for the greater good" they mean you'd sacrifice yourself to save people from a god's evil vendetta.

Below you mentioned that you can't define gravity to repulse things instead of attract. This is a good analogy for what you are supposing God to be doing. In fact God is free to define "gravity" (a symbol, a string of 7 letters) in a new language of His own. But claiming that God can define gravity itself is logical nonsense (gravity not being a symbol of any language).

The same way, it is logical nonsense to suppose that God can define the greater good to be something bad. He can invent a new language where "the greater good" (a string) refers to something bad, but that would simply be a useless language that no-one here speaks, and irrelevant to any matter of actual morality.

(I won't dispute that "god can redefine the greater good" might well be standard theology, but as theology it is logical nonsense.)

Replies from: Lumifer
comment by Lumifer · 2013-07-25T17:14:10.053Z · LW(p) · GW(p)

But claiming that God can define gravity itself is logical nonsense

You're quite mistaken about that. In the Christian world God not only can define gravity itself -- He did define gravity itself. Remember that whole Creator bit..?

Replies from: nshepperd
comment by nshepperd · 2013-07-25T17:30:58.493Z · LW(p) · GW(p)

Do not attempt to confuse yourself with wordplay.

If I build a house with a triangular roof, does the fact that I could have built one with a square roof instead mean I can define triangles as squares if I want to?

The fact remains that gravity is a force, not a word or a symbol. "Gravity" can be defined as something; gravity cannot, because defining things is intrinsically a linguistic activity.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T17:39:44.370Z · LW(p) · GW(p)

"Gravity" can be defined as something; gravity cannot, because defining things is intrinsically a linguistic activity.

Do not attempt to confuse yourself with wordplay.

Let's replace the word "define" with the word "create". God created gravity, the force itself. In the process of creating it he defined it to be what it is. This is all before language -- the same way by building a house with a triangular roof I defined that roof as triangular regardless of what you'll call it later.

Again, the major difference between the Christian world and the atheist world is not in what someone calls things -- it's in what things are and are not.

Specific morality is a built-in feature of reality in the Christian world, similar to how gravity is a built-in feature of reality in the physical world. The names that your mind assigns do not change this.

Replies from: nshepperd
comment by nshepperd · 2013-07-25T18:24:35.436Z · LW(p) · GW(p)

By building a house with any kind of roof you have done nothing more and nothing less than give that house a particular kind of roof. The actual squareness or triangularness of the particular kind of roof is an immutable mathematical fact. Not even God can imbue a three-sided shape with squareness.

Nor can you specify squareness to be a "built in feature of reality" in God's world, and true of triangles in that world. Squareness is simply that predicate that is true of exactly all equal-length four-sided shapes. Immutably. Mathematically.

What is morality? It is a predicate that is true of all and only those things that help others, avoid harm, promote happiness, etc etc etc.¹ You can no more imbue an evil deed with morality than you can imbue a triangle with squareness.

¹ Source: this is how the term "morality" is generally actually used by people. Nonsensical self-referential things people tend to suppose morality to be, such as "whatever everyone agrees 'morality' means", quickly fall apart on examination. More specifically, this is how I use the term "morality" (i.e. how I am using it above), and almost certainly is how ice9 uses the term.

[More linguistic stuff redacted so as not to distract everyone]

Replies from: Lumifer
comment by Lumifer · 2013-07-25T18:37:27.986Z · LW(p) · GW(p)

Squareness is simply that predicate that is true of exactly all equal-length four-sided shapes. Immutably. Mathematically.

Um. You're not really good at geometry, are you? :-D

What is morality? It is a predicate that is true of all and only those things that help others, avoid harm, promote happiness, etc etc etc

Huh? Not at all. Consult wikipedia for starters.

Source: this is how the term "morality" is generally actually used by people.

I am sorry, can I see your credentials for confidently making naked assertions about how people actually use the term "morality"? Or at least some evidence?

Replies from: nshepperd
comment by nshepperd · 2013-07-25T18:54:21.458Z · LW(p) · GW(p)

Um. You're not really good at geometry, are you? :-D

I assume it goes without saying that I'm talking about shapes in a flat euclidean plane because listing every random corner case is a waste of everyone's time. (EDIT: yeah... fuckin' parallelograms. Sneaky bastards.)

...

The evidence for this particular description of morality includes such things as the fact that people confidently call some things good and some things bad, "even if $(RANDOM_COUNTERFACTUAL_CONDITION)", and thought experiments like the Gandhi murder pill and, well, there's too much to describe in one comment.

But that's not really important, and you're not going to believe me anyway. More generally, regardless of the specific form of the morality predicate, God can't make one mathematical object be something else. He can only modify physical circumstances. For example, if morality were "murder is good, except at midday" he could make it always be midday by messing with the sun or something, which would affect when murder was good.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T19:04:59.857Z · LW(p) · GW(p)

I'm talking about shapes in a flat euclidean plane

Yes, you are bad at geometry.

And, of course, God is not constrained by your favorite dimensionality of space or by your preferences for Euclid over, say, Riemann.

God can't make one mathematical object be something else. He can only modify physical circustances.

Why is that? You seem to be very certain about the limitations of God. You also seem to imply that morality is a mathematical object. That doesn't look obvious to me.

Replies from: nshepperd
comment by nshepperd · 2013-07-25T19:22:56.656Z · LW(p) · GW(p)

And, of course, God is not constrained by your favorite dimensionality of space or by your preferences for Euclid over, say, Riemann.

No, but my topic of discussion is. God can do whatever he likes, it doesn't change the facts of Euclidean geometry. Or Riemannian geometry for that matter. Or both of them together. Or any other type of geometry, or number theory or whatever.

You also seem to imply that morality is a mathematical object.

The same is true of every other abstract concept that divides thingspace into things-that-are and things-that-aren't. Except there are good reasons to think that morality in particular divides thingspace in a way that doesn't care about little XML tags attached to physical objects and actions (which are the sort of thing that God could mess with, being omnipotent regarding the physical world).

You seem to be very certain about limitations of God.

Perhaps you think that "God can override logic" isn't logical nonsense, or you prefer not to use logic. Either approach seems rather pointless as far as getting useful results is concerned.

comment by Lukas_Gloor · 2013-07-25T04:21:32.271Z · LW(p) · GW(p)

Taboo moral terms like good/bad/evil; can you still explain to me how a world where a God-given morality exists is different from a world where it doesn't? My belief that I'm doing it for the greater good would not be mistaken, because I'd define "the greater good" as the amount of suffering prevented (assuming that this is my terminal value), and I literally don't care whether that definition corresponds with whatever semantic tricks God wants to play.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T04:41:01.418Z · LW(p) · GW(p)

can you still explain to me how a world where a God given morality exists is different from a world where it doesn't?

See a couple of steps above: "Like natural law" here means "will lead to certain consequences regardless of whether you believe so".

because I'd define "the greater good"

You can't. Under Christianity you do not have the power to define "the greater good". In physical parallels this would be similar to "I define gravity to repulse objects instead of attract and I literally don't care whether that definition corresponds with whatever tricks nature wants to play".

Replies from: Jiro
comment by Jiro · 2013-07-25T06:46:56.839Z · LW(p) · GW(p)

Why can't you define "the greater good"?

If Christianity is true, then you can't define away things like "this action puts me in Hell", but I wouldn't call that being unable to define the greater good; I'd say that in that situation I am still defining the greater good but Hell is now decoupled from it.

It would be like saying "I define gravity to repulse objects" and then adding "Of course, this means that I am now using some name other than 'gravity' for the force that makes things fall". It's not at all clear that this is wrong. At most, it's just not very useful, because if I look around for things that satisfy my new definition of gravity, I can't find them. But that objection doesn't seem to apply to the "greater good" case -- if I define "greater good" to mean something other than "doesn't get me sent to Hell", I can in fact find things that meet my definition, and I have a reason to want to talk about them as a category.

To give a concrete example: Imagine that forcibly converting Jews gets you sent to heaven and refusing to do so gets you sent to Hell. Why can I not say that someone who refuses to forcibly convert Jews is acting for the greater good, but some people who act for the greater good get sent to Hell? That seems like an equally sensible way of describing it, rather than "forcibly converting Jews is for the greater good".

Replies from: Lumifer
comment by Lumifer · 2013-07-25T14:27:23.378Z · LW(p) · GW(p)

Why can't you define "the greater good"?

Because of the difference between the map and the territory.

We're talking about a counterfactual universe where Christianity is true. This is a different universe from the one we live in, different in many subtle and profound ways. It's not just the same old world that happens to have an angry old guy sitting up there in the clouds, chucking souls alternately into a fire pit and into a line for harps.

One of the differences is that in the Christian world morality exists not in your mind, but in the world. It is objective, not subjective. It is built in into the fabric of reality. It is part of the territory.

You can redefine the greater good no more than you can redefine the value of pi or the Planck length.

Now maps, sure. You can draw whatever maps you like and tag things with whatever labels you want. You can make a map of the desert, call it a mountain, and start constructing a ship in the middle of it. That's all fine -- but all you're doing is scribbling on a map and the territory is not changed by that.

That, by the way, is why Christians occasionally classify atheism as a mental disorder. From their point of view, claiming to define the greater good yourself is tantamount to claiming to define the gravitational constant yourself -- clearly a crazy thing to do. You're constructing false maps.

Your ability to name things does not change what things are.

Replies from: Jiro
comment by Jiro · 2013-07-25T14:39:27.533Z · LW(p) · GW(p)

That's like suggesting a hypothetical world where diamonds are red and made of corundum, while rubies are a form of carbon.

We use terms such as "diamonds", "rubies", and "greater good" because we are trying to convey some concept. They're defined that way. In this hypothetical Christian world, "greater good" no longer means the same thing as that concept. If so, how is it meaningful to even call it greater good? It clearly is nothing like what I would otherwise think of as greater good.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T14:53:47.510Z · LW(p) · GW(p)

In this hypothetical Christian world, "greater good" no longer means the same thing as that concept.

To you. Why do you privilege your concept over the Christian concept? I bet more people believe in objective morality than in subjective morality.

Replies from: Jiro
comment by Jiro · 2013-07-25T15:14:24.841Z · LW(p) · GW(p)

The point is that I use the label because I want to express the concept. If something doesn't match the concept, I'm not going to use the label for it. I'm "privileging" my concept because I'm the one doing the communicating and I'm not going to deliberately communicate something other than what I want to communicate.

Answer your same question with the above definitions of diamonds and rubies. Are you really "privileging your concept" if you insist that because clear gemstones made from carbon are not what you mean by "ruby", you're not going to call them that? "Greater good" in this hypothetical Christian world is as far from what I mean by "greater good" as rubies are from "clear gemstone made of carbon".

Replies from: Lumifer
comment by Lumifer · 2013-07-25T15:51:17.653Z · LW(p) · GW(p)

I'm "privileging" my concept because I'm the one doing the communicating

A communication involves two parties. As I said, you can define things any way you like; that neither affects what they are nor helps your attempts to communicate.

The metaphor of diamonds and rubies works against you because the standard, default presumption on the part of most people in the real world is that morality is objective, not subjective. Most people would agree that you can't define your own morality. So when you come and say "I can define the greater good to be anything I like", you are the minority who says that corundum stones which everyone calls rubies should not be called so -- you personally define rubies to mean "the gleam of red in my eye" and so there!

In any case your disagreement with the Christians is deeper than just terminology. You insist that the gems are just an illusion and you can make them be anything you want in your mind's eye. They say that the gems are real and whatever you're imagining is your own problem and does not affect the real gems in the real world.

Replies from: Jiro
comment by Jiro · 2013-07-25T16:21:19.155Z · LW(p) · GW(p)

the standard, default presumption on the part of most people in the real world is that morality is objective, not subjective.

There's a wide gap between "I can define it to mean anything I like" and "I can define it within a certain range". Given the hypothetical where forcibly converting Jews is for the greater good, most people in the real world would say "in that hypothetical, 'greater good' is so far from what we ordinarily mean by 'greater good' that there's no point in even calling it that". People in the real world give lip service to morality being objective but wouldn't carry that to its conclusion.

Replies from: Lumifer, Eugine_Nier
comment by Lumifer · 2013-07-25T16:27:02.289Z · LW(p) · GW(p)

...most people in the real world would say ... People in the real world give lip service to morality being objective but wouldn't carry that to its conclusion.

Please provide some evidence for these assertions. I happen to think they are false. I think you're projecting your personal bubble onto the entire world.

comment by Eugine_Nier · 2013-07-29T08:25:38.867Z · LW(p) · GW(p)

Given the hypothetical where forcibly converting Jews is for the greater good, most people in the real world would say "in that hypothetical, 'greater good' is so far from what we ordinarily mean by 'greater good' that there's no point in even calling it that".

Except if you claim to be a utilitarian, you're not allowed to say that.

comment by hylleddin · 2013-07-25T04:54:01.384Z · LW(p) · GW(p)

It's much more like choosing not to have kids when you're in a situation where those kids' lives will be horrible.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T04:59:51.548Z · LW(p) · GW(p)

So every time you take the pill (or put on a condom) you go "I just SAVED another child!" ..?

comment by Desrtopa · 2013-07-25T03:40:47.362Z · LW(p) · GW(p)

I've never heard of anyone doing this, although it's within the realm of possibility, but this was the topic of one of my earliest comments on Less Wrong.

comment by MugaSofer · 2013-07-29T07:26:14.981Z · LW(p) · GW(p)

Well, "not existing anymore" sounds more like they existed and you got rid of them (i.e. mercy killing) rather than prevented them being created.

I am honestly unsure if it's worth retaining intended-for-factory-farming breeds; I would imagine, in any case, that tame "farm animals" would remain extant in zoos, though.

comment by Said Achmiz (SaidAchmiz) · 2013-07-24T05:00:04.762Z · LW(p) · GW(p)

I appreciate you making this post, peter_hurford (though I admit I skipped over the parts about vegetarianism's effectiveness and easiness, as those are not the parts of the argument I am interested in). However, I'm afraid that (as far as my objection to your view goes), your argument entirely begs the question.

You open with seemingly the following logic:

(1) We care about suffering.
(2) Animals can suffer.
(3) Animals do suffer.
(4) We can prevent animal suffering.
(5) By (1) and (4), we should prevent animal suffering.

But such a formulation leaves out some important qualifications. The actual logic behind your view is like so:

(1) We care about suffering, regardless of who or what is doing the suffering.
(2) Animals can suffer.
(3) Animals do suffer.
(4) We can prevent animal suffering.
(5) By (1) and (4), we should prevent animal suffering.

My objection was precisely to (1). Why should we care about suffering regardless of who or what is suffering? I care about the suffering of humans, or other beings of sufficient (i.e. approximately-human) intelligence to be self-aware. You seem to think I should care about "suffering"[1] more broadly. You take this broader caring as an assumption, but it's actually exactly what I'd like you to convince me of; otherwise, as far as I am concerned, your entire argument collapses.

[1] Though I easily grant that e.g. cows can experience pain, I am not entirely convinced that it's sensible to refer to their mental states and ours by the same word, "suffering". I think this terminological conflation, too, begs the question. But that is a side issue.

Replies from: peter_hurford, None
comment by Peter Wildeford (peter_hurford) · 2013-07-24T06:14:27.427Z · LW(p) · GW(p)

Though I easily grant that e.g. cows can experience pain, I am not entirely convinced that it's sensible to refer to their mental states and ours by the same word, "suffering". I think this terminological conflation, too, begs the question. But that is a side issue.

Why? I actually think this is an important consideration. Is "suffering" by definition something only humans can do? If so, isn't this arbitrarily restricting the definition? If not, do you doubt something empirical about nonhuman animal minds?

~

My objection was precisely to (1). Why should we care about suffering regardless of who or what is suffering? I care about the suffering of humans, or other beings of sufficient (i.e. approximately-human) intelligence to be self-aware. You seem to think I should care about "suffering"[1] more broadly.

You've characterized my argument correctly. It seems to me that most people already care about the suffering of nonhuman animals without quite realizing it, which is why, on an intuitive level, they resist kicking kittens and puppies. But I acknowledge that some people aren't like this.

I don't think there's a good track record for the success of moral arguments. As a moral anti-realist, I must admit that there's nothing irrational per se about restricting your moral sphere to humans. I guess my only counterargument would be that it seems weird and arbitrary.

What would you say to someone who thinks we should only care about the suffering of white humans of European descent? Would you be fine with that?

Replies from: Vaniver, SaidAchmiz
comment by Vaniver · 2013-07-24T23:25:50.124Z · LW(p) · GW(p)

As a moral anti-realist, I must admit that there's nothing irrational per se about restricting your moral sphere to humans. I guess my only counterargument would be that it seems weird and arbitrary.

Suppose morality is a 'mutual sympathy pact'; then it seems neither weird nor arbitrary to decide how sympathetic to be to others by their ability to be sympathetic towards you. Suppose instead that morality is a 'demonstration of compassion'; then the reverse effect holds -- sympathizing with the suffering of those unable to defend themselves (and thus unable to defend you) demonstrates more compassion than the previous approach, which requires direct returns. (There are, of course, indirect returns to this approach.)

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-07-25T06:13:34.743Z · LW(p) · GW(p)

I'm confused as to what those considerations are supposed to demonstrate.

Replies from: Vaniver
comment by Vaniver · 2013-07-25T10:09:30.645Z · LW(p) · GW(p)

Basically, I don't think much of your counterargument because it's unimaginative. If you ask the question of what morality is good for, you find a significant number of plausible answers, and different moralities satisfy those values to different degrees. If you can't identify what practical values are encouraged by holding a particular moral principle, what argument do you have for that moral principle besides that you currently hold it?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-07-25T15:53:24.631Z · LW(p) · GW(p)

I don't think moral principles are validated with reference to practical self-interested considerations.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T16:04:52.843Z · LW(p) · GW(p)

What do you think moral principles are validated by?

Or, to ask a more general question, what they could possibly be validated by?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-07-25T16:11:13.228Z · LW(p) · GW(p)

Broadly, I think moral principles exist as logical standards by which actions can be measured. It's a fact whether a particular action is endorsed by utilitarianism or deontology, etc. Therefore moral facts exist in the same realm as any other sort of fact.

More specifically, I think the actual set of moral principles someone lives by are a personal choice that is subject to a lot of factors. Some of it might be self-interest, but even if it is, it's usually indirect, not overt.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T16:23:48.575Z · LW(p) · GW(p)

I think moral principles exist as logical standards by which actions can be measured.

OK. But standards are not facts. They are metrics in the same way that a unit of length, say, meter, is not a fact but a metric.

How do you validate the choice of meters (and not, say, yards) to measure?

The usual answer is "fitness for a purpose", but how does this work for morality?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-07-25T17:41:52.267Z · LW(p) · GW(p)

But standards are not facts. They are metrics in the same way that a unit of length, say, meter, is not a fact but a metric.

True. But whether something meets a standard is a fact. While a meter is a standard, it's an objective fact that my height is approximately 1.85 meters.

~

How do you validate the choice of meters (and not, say, yards) to measure?

Social consensus. Also, a meter is much easier to use than a yard.

~

The usual answer is "fitness for a purpose", but how does this work for morality?

Standards could be evaluated on further desiderata, like internal consistency and robustness in the face of thought experiments.

Social consensus and ease of use could also be factors.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T17:48:57.227Z · LW(p) · GW(p)

But whether something meets a standard is a fact.

I agree. You can state as a fact whether some action meets some standard of morality. That does nothing to validate a standard of morality, however.

internal consistency ... robustness in the face of thought experiments ... [s]ocial consensus ... ease of use

Oh, boy. Social consensus, ease of use, really?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-07-25T19:58:22.652Z · LW(p) · GW(p)

That does nothing to validate a standard of morality, however.

I'm not sure a standard of morality could ever be validated in the way you might like.

What do you think validates a standard of morality?

~

Oh, boy. Social consensus, ease of use, really?

That's not a very helpful retort.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T20:06:05.594Z · LW(p) · GW(p)

What do you think validates a standard of morality?

Nothing, pretty much. I think standards of morality cannot be validated.

That's not a very helpful retort.

I don't know if you think your position is defensible or it was just a throwaway line. It's rather trivial to construct a bunch of moralities which will pass your validation criteria and look pretty awful at the same time.

It seems to me things like social consensus and ease of use are factors in determining whether a morality is popular, but I don't see how they can validate moral values.

Replies from: Vaniver, peter_hurford
comment by Vaniver · 2013-07-25T20:53:08.109Z · LW(p) · GW(p)

Nothing, pretty much. I think standards of morality cannot be validated.

In a handful of discussions now, you've commented "X doesn't do Y," and then later followed up with "nothing can do Y," which strikes me as logically rude compared to saying "X doesn't do Y, which I see as a special case of nothing doing Y." For example, in this comment, asking the question "what does it mean for a moral principle to be validated?" seems like the best way to clarify peter_hurford's position.

I do think that standards of morality can be 'validated,' but what I mean by that is that standards of morality have practical effects if implemented, and one approach to metaethics is to choose a moral system by the desirability of its practical effects. I understood peter_hurford's response here to be "I don't think practical effects are the reason to follow any morality."

This comment makes great sense inside of a morality, because moralities often operate by setting value systems. If one decides to adopt a value system which requires vegetarianism in order to signal that they are compassionate, that suggests their actual value system is the one which rewards signalling compassion. To use jargon, moralities want to be terminal goals, but in this metaethical system they are instrumental goals.

I don't think this comment makes sense outside of a morality (i.e. I have a low opinion of the implied metaethics). If one is deciding whether to adopt morality A or morality B, knowing that A thinks B is immoral and B thinks A is immoral doesn't help much (this is the content of the claim that a moral sphere restricted to humans is weird and arbitrary.) Knowing that morality A will lead to a certain kind of life and morality B will lead to a different kind of life seems more useful (although there's still the question of how to choose between multiple kinds of lives!).

This leads to the position that even if you have the Absolutely Correct Morality handed to you by God, so long as that morality is furthered by more adherents it would be useful to think outside of that morality because standard persuasion advice is to emphasize the benefits the other party would receive from following your suggestion, rather than emphasizing the benefits you would receive if the other party follows your suggestions ("I get a referral bonus from the Almighty for every soul I save" is very different from "you'll much prefer being in Heaven over being in Hell"). Instead of showing how your conclusion follows from your premises, it's more effective to show how your conclusion is implied by their premises.

(I should point out that you can sort of see this happening by the use of "weird and arbitrary" as they don't make sense as a logical claim but do make sense as a social claim. "All the cool kids are vegetarian these days" is an actual and strong reason to become vegetarian.)

Replies from: Lumifer
comment by Lumifer · 2013-07-26T04:04:00.505Z · LW(p) · GW(p)

...which strikes me as logically rude

Well, I didn't mean to be rude but I'll watch myself a bit more carefully for such tendencies. Talking to people over the 'net leads one to pick up some unfortunate habits :-)

For example, in this comment...

That one actually was a bona fide question. I didn't think morality could be validated, but on the other hand I didn't spend too much time thinking about the issue. So -- maybe I was missing something, and this was a question with the meaning of "well, how could one go about it?" Maybe there was a way which didn't occur to me.

one approach to metaethics is to choose a moral system by the desirability of its practical effects.

I am not a big fan of such an approach because I think that in this respect ethics is like philosophy -- any attempts at meta very quickly become just another ethics or just another philosophy. And choosing on the basis of consequences is the same thing as expecting a system of ethics to be consistent (since you evaluate the desirability of consequences on the basis of some moral values). In other words I don't think ethics can be usefully tiered -- it's a flat system.

Oh, and I think that moralities do not set value systems. Moralities are value systems. And they are terminal goals (or criteria, or metrics, or standards); they cannot be instrumental (again, because it's a flat system).

"All the cool kids are vegetarian these days" is an actual and strong reason to become vegetarian.

I very strongly disagree with this. From the descriptive side individual morality of course is influenced by social pressure. From the normative side, however, I don't believe it should be.

Replies from: Vaniver
comment by Vaniver · 2013-07-26T07:36:24.843Z · LW(p) · GW(p)

I am not a big fan of such an approach because I think that in this respect ethics is like philosophy -- any attempts at meta very quickly become just another ethics or just another philosophy.

Agreed that a given metaethical approach will cash out as a particular ethics in a particular situation. The reason I think it's useful to go to metaethics is because you can then see the linkage between the situation and the prescription, which is useful for both insight and correcting flaws in an ethical system. I also think that while infinite regress problems are theoretically possible, for most humans there is a meaningful cliff suggesting it's not worth it to go from meta-meta-ethics to meta-meta-meta-ethics, because to me ethics looks like a set of behaviors and responses, metaethics looks like psychology and economics, and meta-meta-ethics looks like biology.

I very strongly disagree with this. From the descriptive side individual morality of course is influenced by social pressure. From the normative side, however, I don't believe it should be.

It seems to me that there are a lot of obvious ways for morality derived without any sort of social help to go wrong, but we may be operating under different conceptions of 'pressure.'

Replies from: Lumifer
comment by Lumifer · 2013-07-26T15:16:21.186Z · LW(p) · GW(p)

because you can then see the linkage between the situation and the prescription

Can you give me an example where metaethics is explicitly useful for that? I don't see why in flat/collapsed ethics this should be a problem.

to me ethics looks like a set of behaviors and responses, metaethics looks like psychology and economics, and meta-meta-ethics looks like biology.

Ah. Interesting. To me ethics is the practical application (that is, actions) of morality, which is a system of values. Morality is normative. Psychology and economics for me are descriptive (with an important side note that they describe not only what is, but also boundaries for what is possible/likely). Biology provides powerful external forces and boundaries which certainly shape and affect morality, but they are external -- you have to accept them as a given.

there are a lot of obvious ways for morality derived without any sort of social help to go wrong

Of course, but so what? I suspect this issue will turn on the attitude towards the primacy of the social vs. the primacy of the individual.

Replies from: Vaniver
comment by Vaniver · 2013-07-26T17:53:05.367Z · LW(p) · GW(p)

Can you give me an example where metaethics is explicitly useful for that? I don't see why in flat/collapsed ethics this should be a problem.

Sure, but first I should try to be a little clearer: by 'situation' here I mean the incentives on the agent, not any particular dilemma. That is, I would cluster ethics as rules that eat scenarios and output actions, and meta-ethics as rules that eat agent-world pairs and output ethics. As a side note, I think requiring this sort of functional upgrade when you move up a meta level makes the transition much more meaningful and makes infinite regress much less likely to happen practically.

I should also comment that I've been using ethics and morality interchangeably in this series of comments, even though I think it's useful for the terms to be different along the lines you describe (of differentiating between value systems and action systems), mostly because I want to describe the system of picking value systems as meta-ethics instead of meta-morality.

It also seems worthwhile to remember that for most people, stated justifications follow decisions rather than decisions following stated justifications. This matches up with making decisions in near mode and justifying those decisions in far mode, which in the language I'm using here would look like far mode as ethics and near mode as meta-ethics.

An example would be vegetarianism. Vegetarianism in modern urban America, with a well-developed understanding of nutrition, is about as healthy as also eating animal products (possibly healthier, possibly less healthy, probably dependent on individual biology). Vegetarianism in undeveloped or rural areas is generally associated with malnutrition (often at subclinical levels, but that still has an effect on health and longevity). A metaethical system which recommends vegetarianism in America, where it's cheap, and recommends against it in undeveloped areas, where it's expensive, seems easy to construct; an ethical system which measures the weal gained and the woe inflicted on animals by eating meat, and gets the balancing parameter just right to make the same recommendations, seems difficult to construct.

(The operative phrase of that last sentence being the 'balancing parameter'- if the stated justifications are driving the decisions, they need to be doing so quantitatively, and the parameters need to be inputs to the model, not outputs. It's easy to say "this is the rule I want, find a parameter to implement that rule," but difficult to say "this is the right parameter to use, and that results in this rule.")

I suspect this issue will turn on the attitude towards the primacy of social vs the primacy of individual.

Even if you give individuals primacy, specialization of labor is still a powerful force for efficiency. (For social influence to be an obvious net negative, I think you would need individual neurological diversity on a level far higher than we currently have, even though we do see some negative impacts at our current level of neurological diversity.)

Replies from: Lumifer
comment by Lumifer · 2013-07-29T18:12:52.205Z · LW(p) · GW(p)

I would cluster ethics as rules that eat scenarios and output actions, and meta-ethics as rules that eat agent-world pairs and output ethics.

Ah. That makes a lot of sense.

for most people, stated justifications follow decisions rather than decisions following stated justifications.

True, but the key word here is "stated".

making decisions in near mode and justifying those decisions in far mode, which in the language I'm using here would look like far mode as ethics and near mode as meta-ethics.

That doesn't look right to me. For most people (those who justify post-factum) the majority of their ethics is submerged, below their consciousness level. That's why "stated" is a very important qualifier. People necessarily make decisions based on their "real" ethics but bringing the real reasons to the surface might not be psychologically acceptable to them, so post-factum justifications come into play.

I don't think people making decisions in near mode apply rules that "eat agent-world pairs and output ethics". I think that for many people factors like "convenience", "lookin' good", and "let's discount far future to zero" play a considerably larger role in their real ethics than they are willing to admit or even realize.

A metaethical system which recommends vegetarianism in America where it's cheap and recommends against it in undeveloped areas where it's expensive seems easy to construct; an ethical system which measures the weal gained and the woe inflicted on animals by eating meat and gets the balancing parameter just right to make the same recommendations seems difficult to construct.

I don't see how this is so. Your meta-ethical system will still need to get that balancing parameter "just right" unless you start with the end result being known. Just because you divide the path from moral axioms to actions into several stages you don't get to avoid sections of that path, you still need to walk it all.

Oh, and I don't believe modern America has a "well-developed understanding of nutrition", though it's a separate discussion altogether.

Even if you give individuals primacy, specialization of labor is still a powerful force for efficiency.

I don't understand. What does specialization of labor have to do with morality?

And perhaps I should clarify my reaction. When I saw "All the cool kids are vegetarian these days" called "an actual and strong reason" to adopt this morality -- well, my first thought was "All the cool kids root out hidden Jews / string up uppity Negroes / find and kill Tutsi / denounce the educated agents of imperialism / etc." That must be an actual and strong reason to adopt this set of morals as well, right?

I don't know how to figure out whether social influence is a net positive given that in practice social influence is always there and you can't find a control group. My point is that accepting morality because many other people seem to follow it is a very dubious heuristic for me.

Replies from: Vaniver
comment by Vaniver · 2013-07-29T20:55:12.351Z · LW(p) · GW(p)

That doesn't look right to me.

Agreed that it's a stretch; "hidden ethics" and "stated ethics" is a much more natural divide for the two. I do think that "convenience" and "lookin' good" depend on the agent-world pair, but I think the adaptation is opaque and slow (i.e. you learn it when you're young, over a long period) rather than explicit and fast.

I don't see how this is so.

I was unclear there as well; I'm assuming that the "right" result is the one that maximizes the health and social standing of the implementer. Targeting that directly is easy; targeting it indirectly by using animal welfare is hard.

Oh, and I don't believe modern America has a "well-developed understanding of nutrition", though it's a separate discussion altogether.

I was unclear; I meant that vegetarianism is safer for individuals with a well-developed understanding, not that urban America as a whole has a well-developed understanding.

I don't understand. What does specialization of labor have to do with morality?

Many moral questions are hard to figure out, especially when they rely on second- or third-order effects. Think of the parable of the broken window, or of journalistic, clerical, or medical ethics, which promise non-intervention or secrecy. There is strong value in the communication of moral claims, which I'm not sure how to distinguish from social pressure (and think social pressure may be a necessary part of communicating those claims).

Replies from: Lumifer
comment by Lumifer · 2013-07-29T21:30:47.447Z · LW(p) · GW(p)

There is strong value in the communication of moral claims

It seems to me the issues of trust and credibility are dominant here. People get moral claims thrown at them constantly from different directions, many of them are incompatible or sometimes even direct opposites of each other. One needs some system of sorting them out, of evaluating them and deciding whether to accept them or not. Popularity is, of course, one such system but it has its problems, especially when moral claims come from those with power. There are obvious incentives in spreading moral memes advantageous to you.

I guess I see the social communication of moral claims to be strongly manipulated by those who stand to gain from it (which basically means those with power -- political, commercial, religious, etc.) and so suspect.

comment by Peter Wildeford (peter_hurford) · 2013-07-25T20:37:59.412Z · LW(p) · GW(p)

Nothing, pretty much. I think standards of morality cannot be validated.

I think we agree there, then.

It seems to me things like social consensus and ease of use are factors in determining whether a morality is popular, but I don't see how they can validate moral values.

I was thinking of a different kind of "validation".

comment by Said Achmiz (SaidAchmiz) · 2013-07-24T06:42:11.596Z · LW(p) · GW(p)

Why? I actually think this is an important consideration. Is "suffering" by definition something only humans can do? If so, isn't this arbitrarily restricting the definition? If not, do you doubt something empirical about nonhuman animal minds?

I try not to argue by definition, so it's the latter: I have empirical concerns. See this post, point 4 (but also 3 and 5), for a near-perfect summary of my concerns.

That said, my overall objection to your view does not hinge on this point.

As a moral anti-realist, I must admit that there's nothing irrational per se about restricting your moral sphere to humans. I guess my only counterargument would be that it seems weird and arbitrary.

Well, firstly, I have to point out that I am not restricting my moral sphere to humans, per se. (Of known existing creatures, dolphins may qualify for membership; of imaginable creatures, aliens and AIs might.) In any case, the circle I draw seems quite non-arbitrary, even obvious, to me; but I suppose this only speaks to the non-universality of moral intuitions.

What would you say to someone who thinks we should only care about the suffering of white humans of European descent? Would you be fine with that?

That would indeed seem weird and arbitrary. One objection I might raise to such a person is that it's non-trivial, in many cases, to discern someone's "whiteness", not to mention one's exact ancestry. "European" is not a sharp boundary where humans are concerned, and a great many factors confound such categorization. Most of my other objections would be aimed at drawing out the moral intuitions behind this person's judgments about what sorts of beings are objects of morality (do they think "superficial" characteristics matter as much as functional ones? what is their response to various thought experiments such as brain transplant scenarios? etc.). It seems to me that there are both empirical facts and analytic arguments that would shift this person's position closer to my own; a logically contradictory, empirically incoherent, or reflectively inconsistent moral position is generally bound to be less convincing.

(Of course, I might answer entirely differently. I might say: no, I would not be fine with that, because my own ancestry may or may not be classified as "European" or "white", depending on who's doing the classifying. So I would, quite naturally, argue against a moral circle drawn thus. Moral anti-realism notwithstanding, I might convince some people (and in fact that seems to be, in part, how the American civil rights movement, and similar social movements across the world, have succeeded: by means of people who were previously outside the moral circle arguing for their own inclusion). Cows, of course, cannot attempt to persuade us that we should include them in our moral considerations. I do not take this to be an irrelevant fact.)

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-07-24T08:39:46.390Z · LW(p) · GW(p)

Me: What would you say to someone who thinks we should only care about the suffering of white humans of European descent? Would you be fine with that?

You: That would indeed seem weird and arbitrary. One objection I might raise to such a person is that it's non-trivial, in many cases, to discern someone's "whiteness", not to mention one's exact ancestry. "European" is not a sharp boundary where humans are concerned...

I think that fights the hypothetical a bit much. Imagine something a bit sharper, like citizenship. Why not restrict our moral sphere to US citizens? Or take Derek Parfit's within-a-mile altruism, where you only have concern for people within a mile of you. Weird, I agree. But irrational? Hard to demonstrate.

~

I try not to argue by definition, so it's the latter: I have empirical concerns. See this post, point 4 (but also 3 and 5), for a near-perfect summary of my concerns.

So do you think nonhuman animals may not suffer? I agree that's a possibility, but it's not likely. What do you think of the body of evidence provided in this post?

I don't think there is a tidy resolution to this problem. We'll have to take our best guess, and that involves thinking nonhuman animals suffer. We'd probably even want to err on the safe side, which would increase our consideration toward nonhuman animals. It would also be consistent with an Occam's razor approach.

~

It seems to me that there are both empirical facts and analytic arguments that would shift this person's position closer to my own; a logically contradictory, empirically incoherent, or reflectively inconsistent moral position is generally bound to be less convincing.

What would you suggest?

Replies from: pragmatist, SaidAchmiz
comment by pragmatist · 2013-07-24T09:17:32.855Z · LW(p) · GW(p)

The moral sphere needn't work like a threshold, where one should extend equal concern to everyone within the sphere and no concern at all to anyone outside it. My moral beliefs are not cosmopolitan -- I think it is morally right to care more for my family than for absolute strangers. In fact, I think it is a huge failing of standard utilitarianism that it doesn't deliver this verdict (without having to rely on post-hoc contortions about long-term utility benefits). I also think it is morally acceptable to care more for people cognitively similar to me than for people cognitively distant (people with radically different interests/beliefs/cultural backgrounds).

This doesn't mean that I don't have any moral concern at all for the cognitively distant. I still think they're owed the usual suite of liberal rights, and that I have obligations of assistance to them, etc. It's just that I would save the life of one of my friends over the lives of, say, three random Japanese people, and I consider this the right thing to do.

I follow a similar heuristic when I move across species. I think we owe the great apes more moral consideration than we owe, say, dolphins. I don't eat any mammals but I eat chicken.

The idea of a completely cosmopolitan ethic just seems bizarre to me. I can see why one would be motivated to adopt it if the only alternative were caring about some subset of people/sentient beings and not caring at all about anyone else. Then there would be something arbitrary about where one draws the line. But this is not the most plausible alternative. One could have a sphere of moral concern that doesn't just stop suddenly but instead attenuates with distance.

Replies from: Lukas_Gloor, Lukas_Gloor
comment by Lukas_Gloor · 2013-07-24T15:31:08.374Z · LW(p) · GW(p)

The morality you suggest is what Derek Parfit calls collectively self-defeating. This means that if everyone were to follow it perfectly, there could be empirical situations where your actual goals, namely the well-being of those closest to you, are achieved less well than they would be if everyone followed a different moral view. So there could be situations in which people have more influence on the well-being of the families of strangers, and if they'd all favor their own relatives, everyone would end up worse off, despite everyone acting perfectly moral. Personally I want a world where everyone acts perfectly moral to be as close to Paradise as is empirically possible, but whether this is something you are concerned about is a different question (that depends on what question you're seeking to answer by coming up with a coherent moral view).
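
As a toy illustration of that structure (the numbers are made up; only their ordering matters), imagine two parents who each care only about their own child, but who each happen to have more leverage over the other parent's child:

```python
# Two parents, each caring only about their own child's well-being.
# By stipulation, each has more influence over the *other* parent's child.
OWN_BENEFIT = 1     # benefit a parent can give their own child
OTHER_BENEFIT = 3   # benefit a parent can give the other parent's child

def outcome(a_helps_own: bool, b_helps_own: bool):
    """Return (well-being of A's child, well-being of B's child)."""
    child_a = (OWN_BENEFIT if a_helps_own else 0) + (0 if b_helps_own else OTHER_BENEFIT)
    child_b = (OWN_BENEFIT if b_helps_own else 0) + (0 if a_helps_own else OTHER_BENEFIT)
    return child_a, child_b

print(outcome(True, True))    # both follow "favor your own":   (1, 1)
print(outcome(False, False))  # both follow the impartial rule: (3, 3)
```

Everyone follows the partial morality perfectly, and yet everyone's actual goal -- their own child's well-being -- ends up worse served than under the impartial rule.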

Replies from: Jiro, Eugine_Nier, SaidAchmiz, pragmatist
comment by Jiro · 2013-07-24T15:45:24.404Z · LW(p) · GW(p)

By this reasoning everyone should give all their money and resources to charity (except to the extent that they need some of their resources to keep their job and make money).

Replies from: jkaufman, Lukas_Gloor
comment by jefftk (jkaufman) · 2013-07-25T14:30:13.355Z · LW(p) · GW(p)

That's not much of a reductio ad absurdum. It would be much better if people did that, or at least moved a lot in that direction.

Replies from: Jiro
comment by Jiro · 2013-07-27T04:26:35.980Z · LW(p) · GW(p)

People are motivated to do things that make money because the money benefits themselves and their loved ones. Many such things are also beneficial to everyone, either directly (inventors, for instance, or people who manufacture useful goods), or indirectly (someone who is just willing to work hard because working hard benefits themselves, thus producing more and improving the economy). In a world where everyone gave their money to random strangers and kept them at an equal level of wealth, nobody would be able to make any money (since 1) any money they made would be accompanied by a reduction in the money other people gave them, and 2) they would feel (by hypothesis) obligated to give away the proceeds anyway). This would mean that money as a motivation would no longer exist, and we would lose everything that we gain when money is a motivation. That would be bad.

Even if you modified the rule to "I should give money to people so as to arrange an equal level of wealth except where necessary to provide motivation", in deciding exactly who gets your money you'd essentially have a planned economy done piecemeal by billions of individual decisions. Unlike a normal planned economy, it wouldn't be imposed from the top, but it would have the same problem as a normal planned economy in that there's really nobody competent to plan such a thing. The result would be disaster. So overall it would be a better world if people kept the money they made even if someone else could use it more than they could.

Furthermore, the state where everyone acts this way is unstable. Even if your family would be better off if everyone acted that altruistically, your family would be worse off if half the world acted that way and you and they were part of that half.

comment by Lukas_Gloor · 2013-07-24T16:01:58.086Z · LW(p) · GW(p)

Yes. At least as long as there are problems in the world. What's wrong with that?

Everyone, including nonhumans, would have their interests/welfare-function fulfilled as well as possible. If I had to determine the utility function of moral agents before being placed into the world in any position at random, I would choose some form of utilitarianism from a selfish point of view because it maximizes my expected well-being. If doing the "morally right" thing doesn't make the world a better place for the sentient beings in the world, I don't see a reason to call it "right". Also note that this is not an all-or-nothing issue; it seems unfruitful to single out only those actions that produce the perfect outcome, or the perfect outcome in expectation. Every improvement in the right direction counts, because every improvement leads to someone else being better off.

comment by Eugine_Nier · 2013-07-25T04:05:40.011Z · LW(p) · GW(p)

That's a game theory/decision theory problem, not a problem with the utility function.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-25T04:14:21.907Z · LW(p) · GW(p)

If all the agents in the situation acted according to utilitarianism, everyone would be better off. To the extent that everyone acting according to common sense morality predictably fails to bring about the best of all possible worlds in this situation, and to the extent that one cares about this fact, this constitutes an argument against common sense morality.

Of course, if decision theory or game theory could make those agents cooperate successfully (so they don't do predictably worse than other moralities anymore) in all logically possible situations, then the objection disappears. I see no reason to assume this, though.

comment by Said Achmiz (SaidAchmiz) · 2013-07-25T02:30:41.831Z · LW(p) · GW(p)

This seems nonsensical; a utility function does not prescribe actions. If I care about my family most, but acting in a certain way will cause them to be worse off, then I won't act that way. In other words, if everyone acting perfectly moral causes everyone to end up worse off, then by definition, at least some people were not acting perfectly moral.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-25T02:53:30.716Z · LW(p) · GW(p)

The problem is not with your actions, but with the actions of all the others (who are following the same general kind of utility function, but because your utility function is agent-relative, they use different variables, i.e. they care primarily about their own family and friends as opposed to yours). However, I was in fact wondering whether this problem disappears if we make the agents timeless (or whatever does the job), so they would cooperate with each other to avoid the suboptimal outcome. This seems fair enough since acting "perfectly moral" seems to imply the best decision theory.

Does this solve the problem? I think not; we could tweak the thought experiment further to account for it: we could imagine that, due to empirical circumstances, such cooperation is prohibited. Let's assume that the agents lack the knowledge that the other agents are timeless. Is this an unfair addendum to the scenario? I don't see why: given the empirical situation the agents find themselves in (which seems perfectly logically possible), the moral algorithm they collectively follow may still lead to results that are suboptimal for everyone concerned.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-25T02:58:51.015Z · LW(p) · GW(p)

You don't follow a utility function. Utility functions don't prescribe actions.

... are you suggesting that we solve prisoner's dilemmas and similar problems by modifying our utility function?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-25T03:08:28.741Z · LW(p) · GW(p)

OK, bad choice of words.

No, but you need some decision theory to go with your utility function, and I was considering the possibility that Parfit merely pointed out a flaw of CDT and not a flaw of common sense morality. However, given that we can still think of situations where common sense morality (no matter the decision theory) executed by everyone does predictably worse for everyone concerned than some other theory, Parfit's objection still stands.

(Incidentally, I suspect that there could be situations where modifying your utility function is a way to solve a prisoner's dilemma, but that wasn't what I meant here.)

comment by pragmatist · 2013-07-30T09:19:09.396Z · LW(p) · GW(p)

It seems implausible to me that there is any ethical decision procedure that human beings (rather than idealized perfectly informed and perfectly rational super-beings) could follow that wouldn't be collectively self-defeating in this sense. Do you (or Parfit) have an example of one that isn't?

Anyway, I don't see this as a huge problem. First, I'm pretty sure I'm never going to live in a world (or even a close approximation to one) where everyone adheres to my moral beliefs perfectly. So I don't see why the state of such a world should be relevant to my moral beliefs. Second, my moral beliefs are ultimately beliefs about which consequences -- which states of the world -- are best, not beliefs about which actions are best. If there was good evidence that acting in a certain manner (in the aggregate) wasn't effective at producing morally better states of affairs, then I wouldn't advocate acting in that manner.

But I am not convinced that following a cosmopolitan decision procedure (or advocating that others follow one) would empirically be an effective means to achieving my decidedly non-cosmopolitan moral ends. Perhaps if everyone in the world mimicked my moral behavior (or did what I told them) it would be, but alas, that is not the case.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-30T12:53:10.818Z · LW(p) · GW(p)

Utilitarianism is not collectively self-defeating, but then there'd be no room for non-cosmopolitan moral ends.

(rather than idealized perfectly informed and perfectly rational super-beings)

This part shouldn't make a difference. If humans are too irrational to directly follow utilitarianism (U), then U implies they should come up with easier/less dangerous rules of thumb that will, on average, produce the most utility. This is termed "indirectly individually self-defeating": having a theory that implies it would be best to follow some other theory. Parfit concludes, and I agree with him here, that this is not a reason to reject U. U doesn't imply that one ought to actively implement utilitarianism; it only wants you to bring about the best consequences regardless of how this happens.

Replies from: pragmatist
comment by pragmatist · 2013-07-30T14:36:41.387Z · LW(p) · GW(p)

If humans are too irrational to directly follow utilitarianism (U), then U implies they should come up with easier/less dangerous rules of thumb that will, on average, produce the most utility.

This is a pretty dubious move. Why think there will be easy to follow rules that will maximize aggregate utility? And even if such rules exist, how would we go about discovering them, given that the reason we need them in the first place is due to our inability to fully predict the consequences of our actions and their attached utilities?

Do you just mean that we should pick easy to follow rules that tend to produce more utility than other sets of easy to follow rules (as far as we can figure out), but not necessarily ones that maximize utility relative to all possible patterns of behavior? In that case, I don't see why your utilitarianism isn't collectively self-defeating according to the definition you gave. A world in which everyone acts according to such rules will not be a world that is as close to the utilitarian Paradise as empirically possible. After all, it seems entirely empirically possible for people to accurately recognize particular situations where actions contrary to the rules produce higher utility.

comment by Lukas_Gloor · 2013-07-24T17:04:31.169Z · LW(p) · GW(p)

Also note that the view you outlined is often concerned with the question of helping others. When it comes to not harming others, many people would agree with the declaration of human rights that inflicting suffering is equally bad regardless of one's geographical or emotional proximity to the victims. Personal vegetarianism is an instance of not harming.

Replies from: pragmatist
comment by pragmatist · 2013-07-30T09:31:08.687Z · LW(p) · GW(p)

I disagree with cosmopolitanism when it comes to "not harming" as well. I think needlessly inflicting suffering on human beings is always really bad, but it is worse if, say, you do it to your own children rather than to a random stranger's.

comment by Said Achmiz (SaidAchmiz) · 2013-07-24T13:13:48.012Z · LW(p) · GW(p)

I basically agree with pragmatist's response, with the caveat only that I think many (most?) people's moral spheres have too steep a gradient between "family, for whom I would happily murder any ten strangers" and "strangers, who can go take a flying leap for all I care". My own gradient is not nearly that steep, but the idea of a gradient rather than a sharp border is sound. (Of course, since it's still the case that I would kill N chickens to save my grandmother, where N can be any number, it seems that chickens fall nowhere at all on this gradient.)

So do you think nonhuman animals may not suffer? I agree that's a possibility, but it's not likely. What do you think of the body of evidence provided in this post?

Well, you can phrase this as "nonhuman animals don't suffer", or as "nonhuman animal suffering is morally uninteresting", as you see fit; I'm not here to dispute definitions, I assure you. As for the evidence, to be honest, I don't see that you've provided any. What specifically do you think offers up evidence against points 3 through 5 of RobbBB's post?

[Thinking that nonhuman animals suffer] would also be consistent with an Occam's razor approach.

I don't think so; or at least this is not obviously the case.

What [empirical facts and analytic arguments] would you suggest?

Well, just the stuff about boundaries and hypotheticals and such that you referred to as "fighting the hypothetical". Is there something specific you're looking for, here?

Replies from: MTGandP
comment by MTGandP · 2013-07-24T23:23:42.506Z · LW(p) · GW(p)

As for the evidence, to be honest, I don't see that you've provided any.

The essay cited the Cambridge Declaration on Consciousness, as well as a couple of other pieces of evidence.

Here is another (more informal) piece that I find compelling.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T00:39:12.711Z · LW(p) · GW(p)

The essay cited the Cambridge Declaration on Consciousness, as well as a couple of other pieces of evidence.

That's not evidence, that's a declaration of opinion.

In particular, reading things like "Evidence of near human-like levels of consciousness has been most dramatically observed in African grey parrots" (emphasis mine) makes me highly sceptical of that opinion.

Replies from: MTGandP, Nornagest
comment by MTGandP · 2013-07-25T03:15:36.288Z · LW(p) · GW(p)

It's not scientific evidence, but it is rational evidence. In Bayesian terms, a consensus statement of experts in the field is probably much stronger evidence than, say, a single peer-reviewed study. Expert consensus statements are less likely to be wrong than almost any other form of evidence where I don't have the necessary expertise to independently evaluate claims.
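
As a rough sketch of that Bayesian point (the likelihoods below are entirely made up, and the real disagreement is over what they should be):

```python
# Toy Bayesian update: how much a consensus statement should move a prior.
prior = 0.5  # prior probability that the animals in question can suffer

# Probability that a panel of neuroscientists would issue such a declaration...
p_decl_if_true = 0.6    # ...if the claim is true
p_decl_if_false = 0.1   # ...if the claim is false (bias, politics, etc.)

posterior = (p_decl_if_true * prior) / (
    p_decl_if_true * prior + p_decl_if_false * (1 - prior)
)
print(round(posterior, 3))  # 0.857: a sizeable update, though not certainty
```

If you think the panel would have issued the declaration regardless of the underlying facts, the two likelihoods converge and the update shrinks toward zero -- which is where the disagreement below actually lies.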

Replies from: Lumifer
comment by Lumifer · 2013-07-25T03:19:47.962Z · LW(p) · GW(p)

It's not scientific evidence, but it is rational evidence.

Not if I believe that this particular panel of experts is highly biased and is using this declaration instrumentally to further their undeclared goal.

In Bayesian terms, a consensus statement of experts in the field is probably much stronger evidence than, say, a single peer-reviewed study.

That may or may not be true, but doesn't seem to be particularly relevant here. The question is what constitutes "near human-like levels of consciousness". If you point to an African grey as your example, I'll laugh and walk away. Maybe, if I were particularly polite, I'd ask in what sense you're using the word "near" here.

Replies from: MTGandP
comment by MTGandP · 2013-07-25T03:22:56.579Z · LW(p) · GW(p)

The question is what constitutes "near human-like levels of consciousness". If you point to an African grey as your example, I'll laugh and walk away.

If I were in your place, I'd be skeptical of my own intuitions regarding the level of consciousness of African grey parrots. Reality sometimes is unintuitive, and I'd be more inclined to trust the expert consensus than my own intuition. Five hundred years ago, I probably would have laughed at someone who said we would travel to the moon one day.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T03:28:07.756Z · LW(p) · GW(p)

Reality sometimes is unintuitive, and I'd be more inclined to trust the expert consensus.

I trust reality a great deal more than I trust the expert consensus. As has been pointed out, science advances one funeral at a time.

If you want to convince me, show me evidence from reality, not hearsay from a bunch of people I have no reason to trust.

Replies from: MTGandP, Kaj_Sotala
comment by MTGandP · 2013-07-25T04:27:55.516Z · LW(p) · GW(p)

This is evidence from reality. In reality, a bunch of neuroscientists organized by a highly respectable university all agree that many non-human animals are approximately as conscious as humans. This is very strong Bayesian evidence in favor of this proposition being true.

What form of evidence would you find more convincing than this?

Replies from: Lumifer
comment by Lumifer · 2013-07-25T04:45:00.452Z · LW(p) · GW(p)

This is evidence from reality.

No, I don't think so.

all agree that many non-human animals are approximately as conscious as humans

That's not a statement of fact. That's just their preferred definition for the expression "approximately as conscious as humans". I can define slugs to be "approximately as conscious as humans" and point out that compared with rocks, they are.

Replies from: MTGandP
comment by MTGandP · 2013-07-25T04:57:24.216Z · LW(p) · GW(p)

This is evidence from reality.

No, I don't think so.

I have no way to respond to this.

That's just their preferred definition for the expression "approximately as conscious as humans". I can define slugs to be "approximately as conscious as humans" and point out that compared with rocks, they are.

That interpretation of the quoted expression strikes me as implausible, especially in the context of the other statements made in the declaration; for example: "Mammalian and avian emotional networks and cognitive microcircuitries appear to be far more homologous than previously thought." This indicates that humans' and birds' consciousnesses are more similar than most people intuitively believe.

Again, I ask: What form of evidence would you find more convincing than the Cambridge Declaration on Consciousness?

Replies from: Lumifer
comment by Lumifer · 2013-07-25T05:17:15.134Z · LW(p) · GW(p)

What form of evidence would you find more convincing than the Cambridge Declaration on Consciousness?

Evidence of what?

It seems that you want to ask a question "Are human and non-human minds similar?" That question is essentially about the meaning of the word "similar" in this context -- a definition of "similar" would be the answer.

There are no facts involved, it's all a question of terminology, of what "approximately as conscious as humans" means.

Sure, you can plausibly define some metric (or several of them) of similarity-to-human-mind and arrange various living creatures on that scale. But that scale is continuous and, unless you have a specific purpose in mind, thresholds are arbitrary. I don't know why defining only a few mammals and birds as having a mind similar-to-human is more valid than defining everything up to a slug as having a mind similar-to-human.

Replies from: MTGandP
comment by MTGandP · 2013-07-25T05:28:52.557Z · LW(p) · GW(p)

Evidence of what?

I originally posted the Cambridge Declaration on Consciousness because Peter asked you, "What do you think of the body of evidence provided in this post [that nonhuman animals suffer]?" You said he hadn't provided any, and I offered the Cambridge Declaration as evidence. The question is, in response to your original reply to Peter, what would you consider to be meaningful evidence that non-human animals suffer in a morally relevant way?

Replies from: Lumifer
comment by Lumifer · 2013-07-25T05:36:36.242Z · LW(p) · GW(p)

what would you consider to be meaningful evidence that non-human animals suffer in a morally relevant way?

I freely admit that animals can and do feel pain. "Suffer" is a complicated word and it's possible to debate whether it can properly be applied only to humans or not only. However for simplicity's sake I'll stipulate that animals can suffer.

Now, a "morally relevant way" is a much more iffy proposition. It depends on your morality which is not a matter of facts or evidence. In some moral systems animal suffering would be "morally relevant", in others it would not be. No evidence would be capable of changing that.

comment by Kaj_Sotala · 2013-07-28T16:51:13.944Z · LW(p) · GW(p)

As has been pointed out, science advances one funeral at a time.

Generally untrue.

comment by Nornagest · 2013-07-25T00:45:49.278Z · LW(p) · GW(p)

African grays are pretty smart. I'm not sure I'd go so far as to call them near-human, but from what I've read there's a case for putting them on par with cetaceans or nonhuman primates.

The real trouble is that the research into this sort of thing is fiendishly subjective and surprisingly sparse. Even a detailed ordering of relative animal intelligence involves a lot of decisions about which researchers to trust, and comparison with humans is worse.

comment by [deleted] · 2013-09-03T11:08:29.444Z · LW(p) · GW(p)

How does self-awareness make you care?

comment by Qiaochu_Yuan · 2013-07-24T20:12:30.094Z · LW(p) · GW(p)

Thank you for writing this. For future reference, I am much more convinced by arguments that animals suffer in a way that is similar to how humans suffer (e.g. in a way that, if I saw it, would activate the same neurons in my head that activate when I see a human suffer) than by arguments that animals suffer in some more abstract sense, and I expect that I'm not alone in this preference.

Replies from: peter_hurford, DanielLC, None
comment by Peter Wildeford (peter_hurford) · 2013-07-24T21:19:37.445Z · LW(p) · GW(p)

I'm not clear as to what would count as evidence toward satisfying your preference. Do you need fMRI scans of animals? Those probably exist.

Nonhuman animals react in very analogous ways to analogous painful stimuli.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-24T21:21:19.750Z · LW(p) · GW(p)

Something like that would help. I would also say "videos of animals suffering," but I anticipate already reacting negatively to those in a way that is similar to how I would react negatively to videos of humans suffering, so that's probably unnecessary.

Replies from: peter_hurford
comment by DanielLC · 2013-07-26T03:59:19.550Z · LW(p) · GW(p)

If you were unable to feel a specific unpleasant emotion, would you care about other people feeling it?

comment by [deleted] · 2013-09-04T15:58:32.932Z · LW(p) · GW(p)

Why does it matter / why do you care about human suffering?

comment by lsparrish · 2013-07-24T06:30:02.488Z · LW(p) · GW(p)

I've been eating less meat lately for a reason that has nothing directly to do with animal suffering. Rather I have been experimenting with a lifestyle of nutrient powders, aka DIY Soylent, to substitute for meals. The recipe I settled on happened to be vegan, since it uses soy powder as the main protein source. One could substitute whey (the main Soylent is whey based) or meat based protein, however I am thus far happy with the taste and effects of the soy version.

Anyway, I don't know how widely this practice is likely to spread. It makes remarkable sense to me, and people like me, but perhaps not to the majority. I am attracted to novelty to an above-average degree, and not particularly attached to eating (as long as I can be full/satisfied). The idea that humans can live (not just live, but thrive) on a bit of powder, oil, and water, is somehow fascinating and thrilling -- more so than the idea of surviving on lettuce and veggie burgers, which sounds like more of a boring halfway solution. The reports of less sleep / more energy / better cognition (which seem true to my experience so far) also caught my attention -- perhaps for the same reasons that transhumanism seems like a good idea.

So maybe when advertising veganism to transhumanists specifically, Soylent / quantified self / powder based diet is a good pitch. Market it as "cyborg food" or something. Yes, animals suffering is bad, we get that... But if we focus on animals suffering, what happens? Lab research gets subjected to a bunch of new regs that slow things down, while the food factories in their arrogance keep cranking away and making us look like idiots. The economics are strongly in favor of the meat industry continuing for as long as people remain attached to their meat products, ensuring that they are the last to go despite doing more harm and less good than labs. And as a transhumanist, I really want the labs to succeed -- at least on the important life extension related stuff.

Replies from: AlexanderD
comment by AlexanderD · 2013-07-25T00:48:40.206Z · LW(p) · GW(p)

This seems rather a separate issue, especially since you admit that your choice of "cyborg food" only happened to be vegan. You're an accidental vegan. Next week, you might discover that a powder made from lamb faces had more bio-available iron, and that'd be the end of that.

Unrelated: The Accidental Vegan also sounds like the most boring movie imaginable.

Replies from: lsparrish
comment by lsparrish · 2013-07-25T04:42:57.472Z · LW(p) · GW(p)

Yes, it is very different, if we are talking about core motivations. The main point of powdered cyborg food is to be cool for transhumanist hacker types (by virtue of being convenient, useful, inexpensive, cognition-boosting, and liberating oneself from the conventional norms and hassles of food dependency), and not to save helpless suffering animals.

However, just because cyborg food is not motivated by animal rights does not mean that it does not serve the interests of animal rights. The main competitors in the cyborg food market are soy and whey, neither of which is flesh based (although some whey contains rennet from calf stomach). Contrast to conventional food, where veggie burgers play second fiddle to aggressively marketed and addicting meat products.

Whether the whey based version is harmful for animal rights is debatable, given its status as a waste product from cheesemaking. Purchasing whey does support the dairy industry, but since we are talking about replacing meals that contain cheese, it could actually reduce demand for cheese and thus reduce milk production overall. Under current market conditions, casein (cheese protein) is more expensive than whey protein, despite representing 80% of the protein content of cow's milk.

If it were to become a primary food product (as opposed to a niche bodybuilding product), particularly if there was a reduced demand for cheese, I expect that whey protein would become more expensive, and thus would probably be disfavored as a base for cyborg food on grounds of cost. Thus it is probably not straightforwardly analogous to the chicken wing example in Peter's post.

comment by aelephant · 2013-07-26T11:47:41.392Z · LW(p) · GW(p)

If dogs & cats were raised specifically to be eaten & not involved socially in our lives as if they were members of the family, I don't think I'd care about them any more than I care about chickens or cows.

This article seems to assume that I oppose all suffering everywhere, which I'm not sure is true. Getting caught stealing causes suffering to the thief and I don't think there's anything wrong with that. I care about chickens & cows significantly less than I care about thieves because thieves are at least human.

Replies from: army1987, MTGandP, None, Jabberslythe, DxE
comment by A1987dM (army1987) · 2013-07-26T12:18:14.780Z · LW(p) · GW(p)

Indeed, few westerners appear to be that bothered that it is customary to eat dog meat in China.

comment by MTGandP · 2013-07-26T16:11:32.584Z · LW(p) · GW(p)

Why don't you care about non-humans? If other animals suffer in roughly the same way as humans, why should it matter at all what species they belong to?

Getting caught stealing causes suffering to the thief and I don't think there's anything wrong with that.

In this case I think that's justified because catching a thief leads to less suffering overall than failing to catch the thief.

Replies from: Vaniver
comment by Vaniver · 2013-07-26T17:58:20.834Z · LW(p) · GW(p)

In this case I think that's justified because catching a thief leads to less suffering overall than failing to catch the thief.

Not everyone has harm (avoidance) as their primary moral value; many people would voluntarily accept harm to have more purity, autonomy, or economic efficiency, to give three examples.

Replies from: aelephant, Jabberslythe, SaidAchmiz, army1987
comment by aelephant · 2013-07-27T01:36:38.390Z · LW(p) · GW(p)

If a moral theory accepted and acted upon by all moral people led to an average decrease in suffering, I'd take that as a sign that it was doing something right. For example, if no one initiated violence against anyone else (except in self defense), I have a hard time imagining how that could create more net suffering though it certainly would create more suffering for the subset of the population who previously used violence to get what they wanted.

comment by Jabberslythe · 2013-07-26T19:42:27.291Z · LW(p) · GW(p)

I don't think that very many people would accept extreme harm to have these things, though. I used to think that I valued some non-experiential things very strongly, but I don't think that I was taking seriously how strong my preference not to be tortured is. And for most people I don't think there are peak levels of those three things that could outweigh torture.

comment by Said Achmiz (SaidAchmiz) · 2013-07-26T18:36:55.904Z · LW(p) · GW(p)

While I definitely value autonomy (and, to a lesser extent, some sorts of purity), and would trade away some pleasure or happiness to get those things, a theory of harm could include autonomy, purity, etc., by counting lack of satisfaction of preferences for those things as harm.

Replies from: Vaniver
comment by Vaniver · 2013-07-26T20:22:22.655Z · LW(p) · GW(p)

I mean harm as one of the moral foundations. It seems like a five-factor model of morality fits human intuitions better than contorting everything into feeding into one morality and calling it 'harm' or 'weal' or something else.

comment by A1987dM (army1987) · 2013-07-27T09:46:14.184Z · LW(p) · GW(p)

many people would voluntarily accept harm to have more ... economic efficiency

That's usually the result of confusion.

Replies from: Vaniver
comment by Vaniver · 2013-07-27T14:26:00.826Z · LW(p) · GW(p)

That's usually the result of confusion.

That story strikes me as accepting harm to have more economic activity. I was thinking more of trading off physical or emotional health for wealth-generating abilities or opportunities, or institutions which don't invest in care and thus come off as soulless.

comment by [deleted] · 2013-09-04T16:35:03.802Z · LW(p) · GW(p)

are at least human

How does this make you care?

Replies from: aelephant
comment by aelephant · 2013-09-04T23:31:38.310Z · LW(p) · GW(p)

To me morality is an agreement that people can come to with one another. Since animals can't come to agreements with one another, what happens between animals is amoral. It isn't immoral when a bird kills a worm or a cat kills a rat and it doesn't make me feel bad either. Humans could make agreements between themselves about how they want to treat other animals, but humans can't make agreements with other animals. For this reason, I consider all interactions with animals to be outside the realm of morality, although there are certain behaviors that disgust me & that are probably indicative of mental illness & a sign that someone is probably a danger to others (eg torturing kittens).

Replies from: Salemicus, None
comment by Salemicus · 2013-09-04T23:40:36.212Z · LW(p) · GW(p)

What about the way we treat others with whom we can't come to agreements? Is that a matter of morality? For example, consider young children. I suspect most people regard cruelty to a young child as a particular moral horror, precisely because the child cannot argue back or defend itself. Indeed, I would argue that our moral obligations are strongest to groups such as children.

Replies from: aelephant
comment by aelephant · 2013-09-06T00:10:06.838Z · LW(p) · GW(p)

To be completely honest, I agree with you but find it hard to come up with a good argument for why that should be. One way I've thought about it in the past is that the parents or caretakers of a child are sort of like stewards of a property that will be inherited one day. If I'm going to inherit a mansion from my grandfather on my 18th birthday, my parents can't arbitrarily decide to burn it down when I'm 17 & 364 days old. Harming children (physically or emotionally) damages the person they will be when they are an adult in a similar way.

Replies from: Solitaire, Jiro
comment by Solitaire · 2014-01-06T12:49:43.971Z · LW(p) · GW(p)

What about a mentally disabled person, or other groups of humans who will never be capable of consciously entering into a 'moral agreement' with society? Should they also be considered 'outside the realm of morality'? What makes them different from an animal, other than anthropocentrism?

Replies from: aelephant
comment by aelephant · 2014-01-10T02:44:52.308Z · LW(p) · GW(p)

Yes, I consider them outside the realm of morality. If a mentally disabled person committed murder, for example, he or she could not be held morally liable for their actions -- instead the parent or guardian has the moral & legal responsibility for making sure that he or she doesn't steal, kill, etc.

Replies from: Mestroyer
comment by Mestroyer · 2014-01-10T03:44:27.454Z · LW(p) · GW(p)

So are you saying it should only be considered "wrong" to torture mentally disabled people because of agreements made between non-mentally-disabled people, and if non-mentally-disabled people made a different agreement, then it would be okay?

Say the only beings in existence are you and a mentally disabled person. Are you bound by any morality in how you treat them?

comment by Jiro · 2013-09-06T22:18:00.890Z · LW(p) · GW(p)

By this reasoning, if the child is 5 years old but the world is going to be hit by an asteroid tomorrow, unavoidably killing everyone, it would be okay to be cruel to the child.

To save the original idea, I'd suggest modifying it to distinguish between having impaired ability to come to agreements and not having the ability to come to agreements. Children are generally in the former category, at least if they can speak and reason. This extends to more than just children; you shouldn't take advantage of someone who's stupid, but you can "take advantage" of the fact that a stick of broccoli doesn't understand what it means to be eaten and can't run away anyway.

Replies from: aelephant
comment by aelephant · 2013-09-06T23:05:56.896Z · LW(p) · GW(p)

Right. Like I said, I find it hard to come up with a good argument. I don't like arguments that extend things into the future, because everything has to get all probabilistic. Is it possible to prove that any particular child is going to grow into an adult? Nope.

Replies from: Watercressed
comment by Watercressed · 2013-09-07T02:17:01.229Z · LW(p) · GW(p)

But if we're 99.9% confident that a child is going to die (say, they have a very terminal disease), is being cruel to the child 99.99% less bad?

Replies from: wedrifid
comment by wedrifid · 2013-09-07T08:36:14.473Z · LW(p) · GW(p)

But if we're 99.9% confident that a child is going to die (say, they have a very terminal disease), is being cruel to the child 99.99% less bad?

No.

(If this is making some clever rhetorical point then perhaps consider a quotation? Right now it is just a rather easy question.)

comment by [deleted] · 2013-09-05T01:01:30.846Z · LW(p) · GW(p)

Thanks for the answer, I think I formulated my original question incorrectly: why do you care about human suffering?

Replies from: aelephant
comment by aelephant · 2013-09-06T00:11:20.826Z · LW(p) · GW(p)

Don't know. I imagine any answer I could produce would be a rationalization.

comment by Jabberslythe · 2013-07-26T23:22:22.504Z · LW(p) · GW(p)

If you found that you cared much more about your present self than your future self, you might reflect on that and decide that because those two things are broadly similar you would want to change your mind about this case. Even if those selves are not counted as such by your sentiments right now.

This article is trying to get you to undertake similar reflections about pets and humans vs. other animals.

comment by DxE · 2013-07-26T18:45:35.810Z · LW(p) · GW(p)

This post is a demonstration of what social justice activists (along with scholars such as Steven Pinker and Richard Dawkins) describe as species bigotry.

I assume you similarly would have stood with white supremacists in the 1950s and 60s, as they sought to crush the hopes, dreams, and dignity of non-whites, because even if said white supremacists were violent and abusive, well, at least they were white?

comment by blacktrance · 2014-01-06T15:46:43.797Z · LW(p) · GW(p)

I care about animal suffering in the same sense I care about dust specks not getting into my eye - if I'd be otherwise indifferent, I'd rather not have it, but it's very easily outweighed, in this case by the taste of animals. To the extent that factory farming causes meat to be cheaper, I welcome it. Why should I be a vegetarian?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2014-01-06T16:45:56.140Z · LW(p) · GW(p)

I wrote that "if one cares about suffering, one should also care about nonhuman animals, since (1) they are capable of suffering, (2) they do suffer quite a lot, and (3) we can prevent their suffering."

Presumably you either disagree with one of my three empirical claims (which means we can have a good discussion) or you don't care about suffering generally (perhaps you only care about human or sapient suffering alone) and there's not much we can discuss. I, or someone else, could attempt to throw some thought experiments at you, I suppose, but I don't expect they'll do much.

Replies from: blacktrance
comment by blacktrance · 2014-01-07T00:36:08.218Z · LW(p) · GW(p)

if one cares about suffering, one should also care about nonhuman animals

This assumes that if I care about suffering, my utility function places some negative weight on suffering much in the same way it places a positive weight on me eating food I like, but this need not be the case. If I care about suffering, it means I want less of it, but it doesn't mean that I'm willing to give up much to reduce the amount. Ceteris paribus, I want less suffering in the world, but that doesn't mean I care enough about it to not eat delicious hamburgers, or even to pay more for a burger. I care about not getting dust specks in my eye too, but if I got one dust speck in my eye per month, and I could get rid of it by never eating burgers, I'd keep eating burgers. It doesn't mean that I don't care, though.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2014-01-07T01:34:33.154Z · LW(p) · GW(p)

That's technically true, yeah. It means you don't care very much (or care very very much about eating burgers)...

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-12T05:02:37.871Z · LW(p) · GW(p)

Or it means that the formalism of a utility function does not fully describe your preferences.

That is, asking "how much do you care about X", and getting some real number as the answer to that question for any value of X, will not describe the preferences and choices of the agent in question. (This is one way to interpret my previously offered "chickens vs. grandmother" conundrum.)

A more apt formalism might be some sort of multi-tier system, perhaps. I haven't settled on an answer, myself.

comment by Xodarap · 2013-07-23T22:55:04.331Z · LW(p) · GW(p)

Thanks for posting this Peter. I've found it hard to find an action with a higher benefit/cost ratio than ordering a bean burrito instead of a chicken one, and I'm interested to see what others have to say on the subject.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-24T11:40:54.118Z · LW(p) · GW(p)

I agree only if the bean burrito is cheaper and if the saved money goes to the best cause.

comment by Said Achmiz (SaidAchmiz) · 2013-07-24T04:41:30.370Z · LW(p) · GW(p)

In fact, 137 million chickens suffer to death each year before they can even make it to slaughter -- number of animals killed for fur, in shelters and in laboratories combined!

Is this sentence missing words? Should be "more than the number of ...", I assume?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-07-24T06:05:09.224Z · LW(p) · GW(p)

Yes! Fixed, thanks!

comment by James_Miller · 2013-07-23T22:15:08.182Z · LW(p) · GW(p)

Don't farmers kill huge numbers of animals when they grow food because of tractors running over animals or from chemicals designed to kill animals that would otherwise eat the grown food? I suspect that most of the marginal animal suffering arising from my eating steak comes from the production of the food used to feed the cow.

Replies from: Xodarap, peter_hurford, None, MTGandP, DanielLC
comment by Xodarap · 2013-07-23T22:46:26.053Z · LW(p) · GW(p)

This is a great point, and was advanced by Steven Davis, who claimed that eating grass-fed cows would cause fewer deaths than being vegetarian. Matheny however found an error in his calculation - you can see the paper for full details, but the short version is that veg diets cause much less harm.

comment by Peter Wildeford (peter_hurford) · 2013-07-23T22:31:25.193Z · LW(p) · GW(p)

I suspect that most of the marginal animal suffering arising from my eating steak comes from the production of the food used to feed the cow.

I don't know for sure, but that would be pretty surprising. I know that tractors often kill animals, but I don't think it's in sufficiently high quantity to dominate.

Regardless, as you mention, eating meat requires even more grain than eating grain directly, so the vegetarian/vegan diet still results in fewer animal deaths.

comment by [deleted] · 2013-07-23T22:28:54.311Z · LW(p) · GW(p)

Yes, food has to be grown to feed cows, quite a lot of food. Apparently it takes about 10 times as much land to make a certain number of calories in the form of animals compared to other foods. So if you're worried about the wild animals being killed when you eat then that's an argument for not eating animals and animal products.

comment by MTGandP · 2013-07-23T23:31:26.736Z · LW(p) · GW(p)

In addition to what others have said, I think it's important to distinguish between killing and suffering. The real problem here isn't the killing of animals; it's the suffering they undergo before they're killed. I don't know what it's like for those animals that die as a result of agriculture, but suppose they generally have painful deaths that last for several hours. On the other hand, animals on factory farms suffer greatly for a much longer period of time.

comment by DanielLC · 2013-07-26T04:12:53.064Z · LW(p) · GW(p)

There is one thing people seem to miss when they mention this. Some kinds of farming are more harmful than others. Can someone please tell me what foods are relatively safe?

comment by Document · 2013-08-08T18:23:35.023Z · LW(p) · GW(p)

Your karma is -146 now. Weren't you going to declare victory and leave at -100?

comment by Qiaochu_Yuan · 2013-07-25T03:32:09.749Z · LW(p) · GW(p)

Starting from the perspective that the best way to cause a behavior change is to convince System 1 of something rather than System 2, one strategy for convincing people that they should care about animal suffering is to provide more opportunities for them to interact with non-pet animals in a substantial way (so not at a zoo). I have essentially no direct experience with animals other than cats, dogs, and the like.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-25T05:11:26.230Z · LW(p) · GW(p)

Well, in the days when most people farmed their own meat, they had much less compunction about killing their livestock.

Replies from: MTGandP
comment by MTGandP · 2013-07-25T22:01:59.044Z · LW(p) · GW(p)

At the same time, they treated their animals a lot better than factory farms do.

comment by Grant · 2013-07-24T20:06:40.454Z · LW(p) · GW(p)

Idea: if you're very interested in promoting veganism or vegetarianism, help make it taste better, or invest in or donate to those who are helping make it taste better. As my other much-downvoted comment showed, I am very skeptical that appeals to altruism will have nearly as much of an effect as appeals to self-interest, especially outside of this community. I believe most people eat meat because it just tastes better than their alternatives.

Grown crops are far more efficient to produce than livestock, so there are plenty of other good reasons to transition away from the use of livestock in agriculture. If steak were made to "grow on trees", why pay all that extra for the real thing? If you lower the cost of vegetarianism by improving taste, more people will adopt it. If they don't adopt it they'll still be more likely to forgo meats for vegetarian dishes if those dishes taste better.

In the case of low-quality meats (e.g. McDonalds) the taste bar isn't even set very high.

When I first decided to be a vegetarian, I simply switched from tasty hamburgers to tasty veggieburgers and there was no problem at all.

I think your sample size might have led you astray here. My personal experience is exactly the opposite. That said, I looked for studies of meat vs. faux meat taste and didn't find anything. I wonder if a love of meat over alternatives is innate or learned, and if there exist vegetarian recipes which really do taste as good as the real thing.

Replies from: Xodarap, Nornagest, None, Viliam_Bur, Solitaire
comment by Xodarap · 2013-07-24T22:46:07.105Z · LW(p) · GW(p)

My personal experience is exactly the opposite

It varies a lot by brand. The food columnist for the New York Times couldn't tell that Beyond Meat wasn't chicken, for example.

Replies from: Grant
comment by Grant · 2013-07-24T23:09:49.316Z · LW(p) · GW(p)

Good article, thanks. The author does say the taste was quite different from chicken; you just can't tell when it's in a burrito, as the chicken is mostly used for texture. The producer's website is here.

Another idea, with potentially better returns than the above: invest in faux-meat producers. There appear to be plenty of them.

Replies from: Xodarap
comment by Xodarap · 2013-07-25T00:29:38.905Z · LW(p) · GW(p)

I agree that this is potentially a high-impact avenue. New Harvest is a charity which sponsors meat substitutes, both plant-based and tissue-engineered, if you are interested.

Replies from: None
comment by [deleted] · 2013-07-25T09:22:42.430Z · LW(p) · GW(p)

You seem to be missing a link? Perhaps you meant to link to the group New Harvest.

Replies from: Xodarap
comment by Xodarap · 2013-07-26T00:37:06.020Z · LW(p) · GW(p)

Thanks ruari. I had forgotten the http, which apparently makes the link invisible.

comment by Nornagest · 2013-07-24T23:10:38.688Z · LW(p) · GW(p)

I think your sample size might have led you astray here. My personal experience is exactly the opposite. That said, I looked for studies of meat vs. faux meat taste and didn't find anything.

Well, as a meat-eater I've got to admit that meat substitutes have come a long way in the last few years. A couple days ago I ended up eating vegan burgers which would have passed muster as mediocre cow, and vegetarian sausage tends to be fairly acceptable as well. I can't say the same for anything made from chunks too big to stir-fry, though, and I've never eaten any vegetarian products passing as rare meat, which I tend to prefer.

comment by [deleted] · 2013-07-25T09:24:50.649Z · LW(p) · GW(p)

Perhaps, but some preliminary findings show that online ads may be very effective (Peter posted about this on LW recently). Hopefully more research into effective outreach will be done in the future.

comment by Viliam_Bur · 2013-07-26T11:34:11.443Z · LW(p) · GW(p)

I believe most people eat meat because it just tastes better than their alternatives.

Data point: I do.

In the case of low-quality meats (e.g. McDonalds) the taste bar isn't even set very high.

This is probably low-status, but I do prefer the taste of meat even in the junk foods to most of the alternatives. In my experience, most of the alternatives are significantly improved by adding some meat to them.

I wonder if ... there exist vegetarian recipes which really do taste as good as the real thing.

Most likely, no. Otherwise we would already see them sold everywhere. Unless they were invented yesterday, or are extra expensive, or something like that.

comment by Solitaire · 2014-01-06T13:53:23.945Z · LW(p) · GW(p)

When I envision a hypothetical future in which humans don't consume meat, I don't imagine everyone getting their protein from some kind of tank-grown super-tasty 'I Can't Believe It's Not McDonalds!' meat substitute. The meat-heavy diet of Western societies has no basis in evolutionary terms and I don't see why we should seek to perpetuate this relatively modern obsession and dietary imbalance. Contrary to what many meat eaters think, a vegan diet can be incredibly varied and tasty once you get used to cooking with a wider variety of herbs, spices and ingredients which aren't currently mainstream in Western cuisine. I personally find things like smoked tofu, coconut oil and milk, and nuts like pistachios and cashews to be every bit as tasty as any meat product. The consumption of large quantities of red meat and animal-derived fats is cultural, not essential, and in terms of nutrition not even especially desirable. The massive over-consumption of bovine dairy products is particularly nonsensical when more efficient, more nutritious alternatives exist.

comment by Said Achmiz (SaidAchmiz) · 2013-07-26T18:49:00.562Z · LW(p) · GW(p)

By the way, for everyone who's interested in convincing people that animals suffer and that animal suffering is morally relevant, I recommend reading (and quoting to people) Stanislaw Lem's short story "The Seventh Sally, or How Trurl's Own Perfection Led to No Good". I found it to be possibly the most emotionally and logically salient argument for the "suffering matters, no matter what sort" position I've ever read. Here's the most relevant passage (Klapaucius's reply to Trurl arguing that his creations are mere simulacra, and so are not capable of real suffering):

"Come now, don't pretend not to understand what I'm saying, I know you're not that stupid! A phonograph record won't run errands for you, won't beg for mercy or fall on its knees! You say there's no way of knowing whether Excelcius's subjects groan, when beaten, purely because of the electrons hopping about inside—like wheels grinding out the mimicry of a voice—or whether they really groan, that is, because they honestly experience the pain? A pretty distinction, this! No, Trurl, a sufferer is not one who hands you his suffering, that you may touch it, weigh it, bite it like a coin; a sufferer is one who behaves like a sufferer! Prove to me here and now, once and for all, that they do not feel, that they do not think, that they do not in any way exist as being conscious of their enclosure between the two abysses of oblivion—the abyss before birth and the abyss that follows death—prove this to me, Trurl, and I'll leave you be! Prove that you only imitated suffering, and did not create it!"

"You know perfectly well that's impossible," answered Trurl quietly, "Even before I took my instruments in hand, when the box was still empty, I had to anticipate the possibility of precisely such a proof—in order to rule it out. ...

The entire story is well worth a read for anyone interested in this debate.

Replies from: None, Moss_Piglet
comment by [deleted] · 2013-09-04T16:30:54.113Z · LW(p) · GW(p)

Still, this doesn't answer, why should I care?

comment by Moss_Piglet · 2013-09-04T17:27:37.066Z · LW(p) · GW(p)

I think part of the problem here is that there is still an unsupported assumption, a pretty big one, at the core of the argument which it seems like people aren't seriously addressing. Why is it exactly we should be going around trying to prevent suffering in the first place?

Obviously most of us already care about suffering, at least under certain circumstances, because of the human drive of empathy. And if you or the OP were to say "I am upset by the suffering of these animals because I empathize with them, and as such here is a solution I would endorse..." then that would be fine; I can't see any flaw in that argument at all. Of course it's not terribly convincing, which is a bit of an issue of efficiency if you want to get other people on board with your plan, but waving the flag of morality seems like a Dark Side sort of solution; it puts a pretty big target on someone's back when they have to essentially swear team allegiance before they're allowed to engage with the argument critically. An ethical argument is not exempt from having to have a solid foundation; assumptions should be acknowledged and named where reasonably possible if the objective is to present a strong argument.

This is especially problematic because this argument implicitly calls for restrictions on the behavior of people who don't agree with its assumptions. People using very similar arguments have already severely restricted access to lab animals for medical / biological experimentation, so it's hardly unreasonable to see that these sorts of arguments have potential real-world political traction. If someone is trying to control my behavior, I certainly expect an explanation better than 'the alternative would upset me'!

I get that, in the long run, empaths win and the sphere of things-we-care-about keeps expanding. But since this is a blog about rationality, maybe we could at least name empathy as the motivator for these sorts of posts rather than dressing it all up in morality?

Replies from: Moss_Piglet
comment by Moss_Piglet · 2013-09-04T17:28:34.057Z · LW(p) · GW(p)

I misread the comment above mine; please ignore this comment as it is off-topic.

comment by Jonathan_Graehl · 2013-07-25T02:10:38.214Z · LW(p) · GW(p)

Ignoring economic/environmental cost, how many chickens would you create and breed into factory-farming suffering, in exchange for one additional QALY? That is, you wouldn't make the trade unless it took fewer than this number of farmed chickens.

[pollid:544]

(Answers may be very small (less than 1) if you value avoiding chicken suffering more than healthy human life-years, or even negative if you'd give up human lives to create more suffering chickens.)

(If you think factory-farmed chickens have lives worth creating, please don't answer the poll, as your answer of infinity will throw off the average - you can vote "yes" or "indifferent" to the poll below this instead; this poll is mostly for people who answer "no" to it)

(I don't claim that chickens can actually be traded for human QALY - I still haven't gotten the ritual exactly working yet).

Replies from: Lukas_Gloor, SaidAchmiz, Jonathan_Graehl, Jonathan_Graehl, None
comment by Lukas_Gloor · 2013-07-25T02:34:28.490Z · LW(p) · GW(p)

Are we assuming an average lifespan for a factory farmed chicken? That would be about 1.5 months. And do you perhaps mean numbers 0<x<1 rather than negative numbers?

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2013-07-25T02:36:47.431Z · LW(p) · GW(p)

No, I meant that someone who answers -0.001 would prefer removing 1000 human QALY in order to prevent the creation of a single factory farmed chicken. Though I wouldn't expect any such answers. It's a question about a trade.

Replies from: MTGandP
comment by MTGandP · 2013-07-25T03:01:33.061Z · LW(p) · GW(p)

But shouldn't such a person answer 0.001, not -0.001?

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2013-07-25T03:34:11.570Z · LW(p) · GW(p)

Yes, I realized this as I was making a sandwich and came back to say so :) I'll leave my mistake unedited as a warning to others. -.0001 means what I said but with "prevent the creation" being "create". The sign changes the sign of one of the items in the exchange.

comment by Said Achmiz (SaidAchmiz) · 2013-07-25T02:32:47.531Z · LW(p) · GW(p)

Your poll does not seem to accept ∞, "infinity", or any variant thereof. (Note: my answer is not motivated by thinking that the chickens have lives worth creating.)

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2013-07-25T02:35:14.837Z · LW(p) · GW(p)

Yeah, if your answer would be infinity, just answer "yes" to the other poll. I noticed this too :)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-25T02:44:51.883Z · LW(p) · GW(p)

But wait; my answer to the other poll is not "yes". I mean... what? Either I am confused or you are.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2013-07-26T04:59:14.218Z · LW(p) · GW(p)

Ok - I didn't see your "Note" at first. I'm not sure what you mean. Presumably your answer would be indifferent or yes, though. Otherwise, could you explain?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-26T05:37:06.839Z · LW(p) · GW(p)

It's simple: I am willing to create as many factory-farmed chickens as you like for a QALY. A million? Sure. 3^^^3? Sure. I just don't care about the chickens; they are not a factor in the calculation; I am getting a free QALY. So my answer to the first question is "infinity".

My answer to the second question is "indifferent", although depending on how you construe "suffering", it could also be "No".

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2013-07-27T04:32:07.411Z · LW(p) · GW(p)

I have genuine uncertainty as to the nature of farmed chicken suffering - enough that I'd say it's bad to create your average meat-farmed chicken - otherwise I'd be right there with you at 10^20 or something similarly ridiculous.

The suggestion to genetically engineer suffering-knockout chickens seems a good one (though I'd have some residual uncertainty even then).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-27T04:48:23.110Z · LW(p) · GW(p)

Sure, fair enough, I was just saying that the polls don't have any way for me to represent my position.

comment by Jonathan_Graehl · 2013-07-25T02:16:15.660Z · LW(p) · GW(p)

Do you think factory-farmed-chicken-lives are worth living? That is, if you could create infinitely many of them at no material cost would you do so? Please don't consider the economic value of chickens; suppose this marginal chicken has no practical use whatsoever. Further, it's not an option to create them and then transport them to chicken-rescue pleasure-domes.

[pollid:545]

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2013-07-25T02:17:53.368Z · LW(p) · GW(p)

Sorry about the grammatical ambiguity. "No" means you'd rather the chicken never existed, not that you'd rather the universe never existed. I just mean roughly that you prefer the chicken not exist.

comment by Jonathan_Graehl · 2013-07-26T05:06:16.880Z · LW(p) · GW(p)

The median answer - 100 factory chickens (so 150 chicken-suffering-years) per 1 human QALY - impresses me.

Quite a few people take animal suffering pretty seriously. It must feel odd to have society's rules so far removed from that - like serious abortion-is-murder believers.

Replies from: Vaniver, None
comment by Vaniver · 2013-07-26T20:45:39.736Z · LW(p) · GW(p)

Like Hedonic_Treader points out, I think you have the longevity wrong, which may make the question somewhat difficult to answer. If 8 chicken lifespans represents one year, then saying "I think factory farming one chicken balances out one human life" represents an answer of 8, not an answer of 1.

I don't think that has a huge impact on the analysis, though, because the breakdown at present looks like this (and I would expect that, at most, this would impact the Less than 1 group):

Less than 1: two 0.4s and a 0.5. Low: two 2s and a 20. Medium: two 100s and a 1,600. High: two answers of a million, one of 10 trillion, and one of a quadrillion.

About half think that chicken lives and human lives are roughly comparable; about a quarter think human lives are more valuable; about a quarter think human lives are much more valuable (of the 13 who have responded to this poll, far fewer than responded to the other poll).
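
To make the conversion concrete, here is a quick sketch, assuming the ~1.5-month broiler lifespan figure given elsewhere in this thread; the example position is hypothetical, not anyone's actual poll answer:

```python
# Converting between "chickens created per QALY" and chicken-suffering-years.
# Assumption from this thread: a factory-farmed broiler lives about 1.5 months.
CHICKEN_LIFESPAN_YEARS = 1.5 / 12  # ~0.125 years

def suffering_years(n_chickens: float) -> float:
    """Chicken-suffering-years implied by creating n factory-farmed chickens."""
    return n_chickens * CHICKEN_LIFESPAN_YEARS

print(suffering_years(100))          # 12.5 (not 150, which assumed a 1.5-year lifespan)
print(1.0 / CHICKEN_LIFESPAN_YEARS)  # 8.0 chickens per chicken-suffering-year
```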

comment by [deleted] · 2013-07-26T05:40:15.558Z · LW(p) · GW(p)

100 factory chickens (so 150 chicken-suffering-years)

How does 100 factory chickens add up to 150 chicken-suffering-years? Did you mean months?

Replies from: RomeoStevens
comment by RomeoStevens · 2013-07-26T06:07:22.055Z · LW(p) · GW(p)

they live 1.5 years each?

Replies from: Desrtopa
comment by Desrtopa · 2013-07-26T06:16:06.708Z · LW(p) · GW(p)

Chickens factory farmed for meat don't live anywhere near that long. 1.5 months per chicken destined for broiler meat is about the right figure.

Replies from: Jonathan_Graehl, RomeoStevens
comment by Jonathan_Graehl · 2013-07-27T04:27:45.425Z · LW(p) · GW(p)

Yes, I took 1.5yr from another comment, which I guess might be for egg layers or the natural lifespan. I really should have specified lifespan in the poll.

comment by RomeoStevens · 2013-07-26T08:20:50.827Z · LW(p) · GW(p)

they go from baby to full grown that fast? I had no idea.

Replies from: None
comment by [deleted] · 2013-07-26T12:08:42.683Z · LW(p) · GW(p)

They are often given substances to make them grow fast and big; this often leads to problems like their legs breaking.

Replies from: Jabberslythe
comment by Jabberslythe · 2013-07-27T00:05:56.606Z · LW(p) · GW(p)

They are also bred to mature faster and this can lead to similar problems I think. Manipulating the lighting to affect their circadian rhythm also helps make them mature faster.

comment by [deleted] · 2013-07-26T04:26:06.916Z · LW(p) · GW(p)

One additional QALY for whom? A human stranger? A human friend? Me?

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2013-07-26T04:57:14.206Z · LW(p) · GW(p)

I was thinking of the average human. So 1 part you, 20 parts friend, 10 parts family, 50 parts colleague, 6 billion parts stranger. Of course it shouldn't matter, since I said economic constraints don't apply. Assume everyone gets a QALY and 6 billion times your answer in chickens are farmed.

comment by BlueSun · 2013-07-23T22:08:13.110Z · LW(p) · GW(p)

A question I have is how to evaluate the morality of the two options:

  • A) Make it so that an animal is born, then later cause it considerable suffering
  • B) Change the conditions so that the animal never exists

If everyone went vegetarian, the animal population would likely be greatly diminished and it isn't obvious to me that I'd choose option B over option A if I were on the menu. Are there some standard objections to the idea that option A is better than option B?

One quick objection might be that it proves too much. If John Beatmykids told me he wouldn't have kids unless he was permitted to beat them, I wouldn't give him a pass to beat any future children. Another objection might be that there's always a choice C, but here I don't see another option as realistic.

Replies from: Xodarap, None, peter_hurford
comment by Xodarap · 2013-07-23T22:49:46.013Z · LW(p) · GW(p)

This is a great argument, and is known as the "Logic of the Larder" (for reasons I have never comprehended). This paper goes into more detail than you probably care about; the main point is that your guess:

the animal population would likely be greatly diminished

Isn't generally true, because wild animals have a much greater density than farm animals.

comment by [deleted] · 2013-07-23T22:32:30.508Z · LW(p) · GW(p)

Have you read much about the lives of farm animals? In general, once people do, I think they agree that these are lives that are not worth living. There's plenty of footage on the web too.

Replies from: davidpearce, drnickbone
comment by davidpearce · 2013-07-24T10:18:13.527Z · LW(p) · GW(p)

Indeed so. Factory-farmed nonhuman animals are debeaked, tail-docked, castrated (etc) to prevent them from mutilating themselves and each other. Self-mutilating behaviour in particular suggests an extraordinarily severe level of chronic distress. Compare how desperate human beings must be before we self-mutilate. A meat-eater can (correctly) respond that the behavioural and neuroscientific evidence that factory-farmed animals suffer a lot is merely suggestive, not conclusive. But we're not trying to defeat philosophical scepticism, just act on the best available evidence. Humans who persuade ourselves that factory-farmed animals are happy are simply kidding ourselves - we're trying to rationalise the ethically indefensible.

Replies from: drnickbone
comment by drnickbone · 2013-07-24T12:12:32.757Z · LW(p) · GW(p)

This seems to address one of my points raised here.

Self-mutilation is certainly a proxy for very low or negative quality of life, even if directly suicidal behaviour is not available (because the animal can't form a concept of suicide as a way out). If the docking, castrating etc. is to prevent mutilation of other nearby animals, that's a bit different of course.

I'm very wary of deeming any life to be of negative quality unless there is very compelling evidence that the life-form itself feels the same way.

Also, see my other comment: what happens if a few changes to farming practice can make the quality of life positive, even if just barely so? Does the objection to meat really go away?

Replies from: davidpearce
comment by davidpearce · 2013-07-24T13:43:03.443Z · LW(p) · GW(p)

drnickbone, the argument that meat-eating can be ethically justified if conditions of factory-farmed animals are improved so their lives are "barely" worth living is problematic. As it stands, the argument justifies human cannibalism. Breeding human babies for the pot is potentially ethically justified because the infants in question wouldn't otherwise exist - although they are factory-farmed, runs this thought-experiment, their lives are at least "barely" worth living because they don't self-mutilate or show the grosser signs of psychological trauma. No, I'm sure you don't buy this argument - but then we shouldn't buy it for nonhuman animals either.

Replies from: Jiro, drnickbone, Jiro
comment by Jiro · 2013-07-24T16:05:38.137Z · LW(p) · GW(p)

For evolutionary reasons, humans have instinctive reactions to both human infants and cannibalism that are unrelated to whether a course of action is really ethical, so claiming that something is bad because it justifies eating infants is often a cheat.

And if we actually started eating infants, the existence of those instincts would mean that it would be done mostly by people who lack those instincts because of brain malfunction. This would in practice lead to people with brain malfunctions controlling the project, which would quickly extend it to unethical areas regardless of whether the original version is ethical.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-24T16:19:43.549Z · LW(p) · GW(p)

So you're saying that farming human infants can be ethically justified in contrived circumstances? I agree. But most people wouldn't, which suggests that this might not be their true rejection. A continued practice of farming animals would send the wrong message to people who aren't consequentialists: it would send the message that treating beings differently simply because of their species-membership is normal/okay, which could have very bad consequences for nonhumans in the long run.

Replies from: Jiro
comment by Jiro · 2013-07-24T17:48:57.235Z · LW(p) · GW(p)

But most people wouldn't, which suggests that this might not be their true rejection.

If most people don't agree with farming human infants, but they do agree with other things for which similar arguments can be made, that does not imply that their reasons for accepting the other things aren't genuine. It may instead imply that their reasons for rejecting the farming of human infants aren't genuine. Given the existence of powerful human instincts related to both infants and cannibalism, I find the latter explanation to be more likely than the former.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-24T18:08:03.375Z · LW(p) · GW(p)

The question is not whether they themselves would farm the infants, but whether they see an ethical objection to doing it in hypothetical circumstances where those indirect reasons you mention are removed. Imagine we'd ask people the following question:

Suppose we discover an isolated island where the islanders farm infants and where the whole society is completely fine with it. You have a magic button that could remodel the society in a way that stops that practice. All else would remain equal, i.e. no one on that island would become better or worse off because of the button. Would you push it, or are you indifferent? And if you would push it, how much money would you be willing to pay for pushing it? Furthermore, we specify that after pushing or not pushing the button, you will forget about the island.

My guess is that people would pay money for this, which suggests that it's not just their emotional dispositions that are responsible for their judgment but rather the underlying moral principles they are following.

Replies from: Jiro
comment by Jiro · 2013-07-24T19:30:51.149Z · LW(p) · GW(p)

Given the strong instinctual aversion to doing such things, anyone who is willing to do them probably has a brain malfunction. (Note that "believing they are ethical" is not the same as "willing to do them".) Most people would consider an island whose inhabitants have brain malfunctions to be something to be stopped. And you can't postulate a society of people who don't have brain malfunctions and yet like to do such things unless they are not human.

Furthermore, most people asked such a thing will be unable to separate their instinctual reactions from ethical judgments. I suspect most people, if told of an island where people eat shit, would be willing to make some non-zero expenditure to stop it. That doesn't mean they're basing their judgment on moral principles.

comment by drnickbone · 2013-07-24T15:59:06.224Z · LW(p) · GW(p)

Hmm, I can't see any obvious utilitarian approach under which a cannibal society would be justified.

First, it would have to be a non-human society, or a society where humans had been substantially modified to remove their revulsion at eating other humans.

Second, under total utilitarian logic, it looks like there could be more people sustained on a bare subsistence diet (all of them with lives barely worth living) than could be sustained by breeding one bunch of humans to be consumed by other humans. So total utilitarians should reject the cannibal society: ironically, it may not be repugnant enough for the Repugnant Conclusion to hold! Under the same "repugnant" logic, total utilitarians would abolish meat eating and eradicate wild animals, whenever that led to an increase in the human population.

Average utilitarians would also reject the cannibal society, since they could improve the welfare of an average human by just not breeding the cannibal victims. It's less clear to me what average utilitarians should do about farm animals and wildlife. This depends on whether these animals are included in the average at equal weight with humans, or whether there are different weighting factors. If equal weighting, then eradicating all non-human animal life would increase the average welfare of what's left. This is another sort of repugnant conclusion of course.

However, none of these is the strongest reason for rejecting a cannibal scenario. The strongest reason appears to be the Kantian one: it's wrong to treat human beings as means to an end. Whereas there seems to be no similar Kantian injunction against treating animals as means to an end.

It's interesting that there is this asymmetry, which does initially look like outright speciesism. However, the crucial asymmetry is probably between agents who can be expected to be bound by a shared set of moral rules (including the rule of not using each other) and other beings who are not and cannot be bound by the same rules. If there were non-human animals, with whom we could agree to share a moral code, then the code could say it is wrong to use them as means to an end as well.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-01-06T15:07:17.376Z · LW(p) · GW(p)

First, it would have to be a non-human society, or a society where humans had been substantially modified to remove their revulsion at eating other humans.

Humans have no general aversion to eating other humans, just as they have no general aversion to killing other humans. There were enough societies that routinely ate killed enemies. Humans do have an aversion to killing and eating anyone or anything they feel empathy for, but whom you feel empathy for is strongly socialized. Sure, empathy with children is strong, but children of enemies were also often killed.

Don't commit the 'typical society fallacy' of projecting and generalizing your (society's) values. Our society exists because it is more stable and competitive than tribal societies, which played a less efficient competitive game. This means that having humane values is a winning strategy for a society. But it is not 'right'. It is just ethical.

Otherwise I basically agree with the utilitarian reasoning. Note that utilitarianism isn't necessarily the only possible approach.

Replies from: Lumifer
comment by Lumifer · 2014-01-06T21:53:49.367Z · LW(p) · GW(p)

Humans have no general aversion to eating other humans.

That is not true because cannibalism is rare. Moreover, many cases of cannibalism are ritual and symbolic.

they have no general aversion to killing other humans.

That is not true either. Being psychopathic is not a human norm. Clearly, humans can and do kill other humans when they feel the need for it, but "aversion" is a very weak word. I have no problem saying that humans do have a general aversion to killing other humans and that they manage to overcome that aversion rather easily.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-01-06T22:39:01.338Z · LW(p) · GW(p)

That is not true because cannibalism is rare.

Agreed. Rare it is. But then you agree that it does occur voluntarily in normal healthy adult humans.

Moreover, many cases of cannibalism are ritual and symbolic.

That only qualifies it but doesn't exclude it. On the contrary, it means that it can be sufficiently 'normal' to have become part of tradition and customs.

Being psychopathic is not a human norm.

That has nothing to do with psychosis. It just means that killing other people can be quite normal for human tribes. I recommend having a look at the Yanomamö:

http://www.artofmanliness.com/2013/06/10/the-yanomamo-and-the-origins-of-male-honor/

ADDED QUOTES:

45% of these tribesmen had slain at least one other man.

Warfare has been the most important single force shaping the evolution of political society in our species.

But women have always been the most valuable single resource that men fight for and defend.

Replies from: Lumifer
comment by Lumifer · 2014-01-07T01:49:06.920Z · LW(p) · GW(p)

it does occur voluntarily in normal healthy adult humans

The range of behavior that historically has occurred "voluntarily in normal healthy adult humans" is very very wide.

That has nothing to do with psychosis.

Not psychosis but psychopathy.

killing other people can be quite nomal for human tribes.

Yes, sometimes. However I stand by my assertion in the parent post.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-01-07T06:59:38.112Z · LW(p) · GW(p)

The range of behavior that historically has occurred "voluntary in normal healthy adult humans" is very very wide.

Indeed. That is exactly the point.

Not psychosis but psychopathy.

That's what I meant. My fingers just typed something different.

I stand by my assertion in the parent post.

Which one exactly?

Humans have no general aversion to eating other humans.

The point is that killing/eating other humans/animals is nothing special. It is part of human behavior insofar as it is not an outlier or random/accidental (mis)behavior but sits within the normal action continuum, well integrated with suitable affects that moderate it. That is the reason why it can be socially moderated/ritualized/tabooed.

And this doesn't say anything about large-society-ethics. But large-society-ethics has to consider this part of human wiring/complex utility function.

I also stand by my assertion. Anyway, I don't think that we really disagree about the facts. I just suspect that you're inferring that I derive relevantly different ethics from it.

Replies from: Lumifer
comment by Lumifer · 2014-01-07T15:18:46.504Z · LW(p) · GW(p)

That is exactly the point.

But that's not what you said. You said that humans "have no general aversion to eating other humans".

Humans, for example, do have an aversion to death, pain, and hunger -- and yet suicides, self-flagellation, and fasts are a recurring motif in human history.

Maybe it's a language issue. "Have an aversion to X" does not mean "will never ever do X". It means "would prefer not to do X, but will do it if necessary".

Which one exactly?

This: "... that humans do have a general aversion to killing other humans and that they manage to overcome that aversion rather easily."

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-01-07T21:54:12.245Z · LW(p) · GW(p)

Maybe it's a language issue.

Looks a bit so. I meant it a bit more like repugnance or atrocity. Rereading the dialog, it is also not clear whether the stress is on "general" or "aversion". Nonetheless I'd think that your "would prefer not to do X, but will do it if necessary" is still too strong, given the example of the Yanomamö. At least it was not strong enough to allow cooperation among any of the villages within 'recorded history'. How about "would prefer not to do X to an enemy, if the risk is too high" or "would prefer not to do X to an outsider if indifferent"? Though even that may be too weak. I think there is not really an aversion; instead, killing is countered primarily by empathy (which is a strong emotion easily activated by living beings) and by risk (physical and social).

comment by Jiro · 2014-01-06T22:30:57.480Z · LW(p) · GW(p)

Most people would object to breeding brainless human babies for the pot, even though by definition brainless human babies are not people, cannot feel or suffer, and can be treated as objects because they are objects.

This is not because breeding brainless human babies would be wrong. It's because our species has an instinctive aversion to cannibalism and an instinctive tendency to treat anything with baby-like physical features as people (which also accounts for the many anti-abortion arguments that depend on the physical attributes of the fetus).

comment by drnickbone · 2013-07-24T12:02:04.877Z · LW(p) · GW(p)

I'm interested in whether that is the real objection, though.

Presumably it is possible to design a more humane farming system such that the quality of farmed animal life is > 0 (i.e. these are genuinely lives worth living). Presumably it is also possible to legislate to enforce such a system on meat producers.

And it may well be easier to do that than to persuade everyone to give up eating meat, or to persuade them voluntarily to eat humane meat (at a higher price). So that on consequentialist grounds, campaigning for "humane farms" legislation is a better strategy.

But that's not what the original poster is advocating. I'm not sure why.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-24T13:12:25.849Z · LW(p) · GW(p)

I'm not sure whether that would be feasible. The current rate of meat consumption in affluent countries is already straining global resources, and projections suggest that meat consumption is on the rise. Increasing animal welfare while keeping production constant (or even scaling it up) will be even more inefficient and will require even more resources. So this only seems feasible if you reduce the overall rate of consumption, and how would you do that more effectively than by promoting vegetarianism or something similar?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-25T03:07:58.523Z · LW(p) · GW(p)

The current rate of meat consumption in affluent countries is already straining the global amount of resources,

If this were true, I'd expect it to be reflected in the price of meat.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-25T03:12:19.150Z · LW(p) · GW(p)

Unless governments subsidize the hell out of it.

comment by Peter Wildeford (peter_hurford) · 2013-07-23T22:23:43.658Z · LW(p) · GW(p)

Are there some standard objections to the idea that option A is better than option B?

The reason to prefer option B over option A is the standard considerations of "suffering is bad". On most consequentialist considerations, a life of entirely suffering is not worth living. Would you want to exist if the only thing that would happen to you is torture and then death?

Your example with John Beatmykids is a good one.

~

Another objection might be that there's always a choice C, but here I don't see another option as realistic.

Choice C might be to raise animals that are engineered to not feel pain.

comment by Timothy Telleen-Lawton (erratim) · 2015-01-06T17:08:58.571Z · LW(p) · GW(p)

It's important to note that supply and demand aren't perfectly linear. If you reduce your demand for meat, the suppliers will react by lowering the price of meat a little bit, making it so more people can buy it. Since chickens dominate the meat market, we'll adjust by the supply elasticity of chickens, which is 0.22 and the demand elasticity of chickens, which is -0.52, and calculate the change in supply, which is 0.3. Taking this multiplier, it's more accurate to say you're saving 7.8 land animals a year or more. Though, there are a lot of complex considerations in calculating elasticity, so we should take this figure to have some uncertainty.

I think the calculations would be simpler and more accurate if we assumed that long-term supply is in fact flat, so that eating one fewer animal causes ~one fewer to be produced in the long term. A more complete argument here.

If true, this would strengthen your overall point and make people even more empowered to reduce suffering!
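
For readers who want the arithmetic behind the quoted elasticity adjustment, here is a minimal sketch. The elasticities come from the quote; the ~26-animals-per-year baseline is inferred from the quoted result (7.8 / 0.3) rather than stated in this excerpt, and the formula used - supply elasticity over the sum of the elasticities' magnitudes - is a standard approximation, not necessarily the exact method the original post used.

```python
# Sketch of the elasticity adjustment from the quoted passage.
# Elasticities are from the quote; the ~26 animals/year baseline is inferred
# from the quoted result (7.8 / 0.3), not stated in this excerpt.

def production_change_fraction(supply_elasticity: float, demand_elasticity: float) -> float:
    """Fraction of a one-unit drop in demand that becomes a drop in production."""
    return supply_elasticity / (supply_elasticity + abs(demand_elasticity))

multiplier = production_change_fraction(0.22, -0.52)       # ~0.297, quoted as 0.3
baseline_animals_per_year = 26                              # inferred baseline
animals_spared = round(multiplier, 1) * baseline_animals_per_year

print(round(multiplier, 2))      # 0.3
print(round(animals_spared, 1))  # 7.8 -- matches the quoted figure
```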

comment by shminux · 2013-07-29T21:10:35.602Z · LW(p) · GW(p)

Not sure if this has been linked before. Some quotes:

A week tomorrow, at an exclusive west London venue, the most expensive beefburger in history will be nervously cooked and served before an invited audience. Costing somewhere in the region of £250,000, the 5oz burger will be composed of synthetic meat, grown in a laboratory from the stem cells of a slaughtered cow.

One assessment, published in 2011 by scientists from Oxford University, estimated that cultured meat uses far less energy than most other forms, apart from chicken, and some 45 per cent less energy than beef, the most environmentally destructive meat.

They also found that synthetic meat needs 99 per cent less land than livestock, between 82 and 96 per cent less water, and produces between 78 and 95 per cent less greenhouse gas. In terms of relative environmental damage, there was no contest.

People for the Ethical Treatment of Animals (Peta), which runs a scheme offering a prize of $1m (£660,000) for the first person or organisation to produce artificial chicken meat, said that cultured meat would be ethically acceptable if it meant less slaughtering.

"We do support lab-grown meat if it means fewer animals are eaten. Anything that reduces the suffering of animals would be welcome," said Ben Williamson, a Peta spokesman.

comment by TabAtkins · 2013-07-27T15:42:32.867Z · LW(p) · GW(p)

As a mostly-vegetarian person myself, I find this article's primary moral point very unconvincing.

Yes, factory farms are terrible, and we should make them illegal. But not all meat is raised on factory farms. Chickens and cattle who are raised ethically (which can still produce decent yields, though obviously less than factory farms) have lower levels of stress hormones than comparable wild animals. We can't measure happiness directly in these low-light animals, but stress hormones are a very good analogue for an enjoyable life, and we know that high levels are directly linked to poorer health outcomes (and thus likely suffering).

It's simply not that hard to raise food animals in a way that makes them better off than wild animals, and so unless you're strongly in the "reform nature" transhumanist strain, ethical animal farming is at least somewhat of a positive over not farming at all.

(I'm personally vegetarian for ecological reasons, and abstain from eating some animals due to moral compunction against eating things likely to be sentient.)

Replies from: peter_hurford, MTGandP, MugaSofer, Jabberslythe
comment by Peter Wildeford (peter_hurford) · 2013-07-27T18:25:02.603Z · LW(p) · GW(p)

But not all meat is raised on factory farms.

This is correct. But the vast, vast majority of meat the typical consumer is likely to run into is raised on factory farms, so it's essentially true to equate meat with factory farmed meat.

comment by MTGandP · 2013-07-27T17:32:50.055Z · LW(p) · GW(p)

It's simply not that hard to raise food animals in a way that makes them better off than wild animals

And yet it's extraordinarily difficult to actually find meat from animals that were raised truly humanely. See this comment.

Also, I think the standard one should apply is whether an animal has a good life, not whether it has a life better than it would have in the wild. If you have a life that is very much not worth living, it would still be better not to exist than to move up to having a life that is only moderately not worth living.

Ninjaedit: Actually, I think I misunderstood your point about farm animals having lives better than wild animals. Are you saying that it's worth it to have non-factory-farmed farm animals when their lives are better than those of comparable wild animals, because they displace the existence of those wild animals?

comment by MugaSofer · 2013-07-29T07:00:14.591Z · LW(p) · GW(p)

Well, there's even more debate over the criteria for "this entity's death is sad" than "this entity's suffering is sad". Since, as other posters have noted, the massively overwhelming majority of meat is factory-farmed, this point still seems pretty important while being much easier to show.

comment by Jabberslythe · 2013-07-28T19:42:49.249Z · LW(p) · GW(p)

Chickens and cattle who are raised ethically (which can still produce decent yields, though obviously less than factory farms) have lower levels of stress hormones than comparable wild animals.

Do you happen to have a source for this? Not that I particularly doubt this, but it would be useful information.

comment by Jiro · 2013-07-26T18:52:15.246Z · LW(p) · GW(p)

To those who say that vegetarianism is too hard, I’d like to simply challenge you to just try it for a few days.

People who say that vegetarianism is too hard generally don't mean that being too hard is the only reason they won't do it.

Replies from: MugaSofer, peter_hurford, DxE
comment by MugaSofer · 2013-07-29T07:03:03.639Z · LW(p) · GW(p)

Well, I can't see the true meaning in their hearts or whatever, but I have definitely had people admit that vegetarianism is morally obligatory, only to claim they are completely incapable of doing anything about this because it just tastes so good. (I was raised vegetarian, so I of course simply don't know what I'm missing.)

comment by Peter Wildeford (peter_hurford) · 2013-07-26T20:41:43.872Z · LW(p) · GW(p)

I have seen people pretty startled by the ease of vegetarianism once they take it on as a challenge, however.

comment by DxE · 2013-07-26T18:57:13.060Z · LW(p) · GW(p)

Agreed. The proper translation of "too hard" is usually "I don't care."

Replies from: ygert
comment by ygert · 2013-07-26T22:48:53.691Z · LW(p) · GW(p)

That is to say, "the difficulty is higher than the amount I care."

comment by lavalamp · 2013-07-25T00:58:21.814Z · LW(p) · GW(p)

This appears to be an argument for buying ethically raised meats instead of factory farmed meats, not an argument for never eating meat.

Replies from: MTGandP
comment by MTGandP · 2013-07-25T03:04:54.943Z · LW(p) · GW(p)

Here is a comment that addresses this point.

comment by pianoforte611 · 2013-07-24T14:06:57.081Z · LW(p) · GW(p)

Consider the two groups of animals.

Group A consists of factory farmed animals which suffer a total of X units of pain in their lives. Group B consists of animals in the wild that also suffer a total of X units of pain in their lives*

We could try to reduce suffering by preventing Group A's existence (your suggestion), or we could try to reduce suffering by preventing Group B's existence. Ignoring convenience, why should we choose your option?

*I used the groups so as to address the fact that the individual animals may suffer different amounts.

Replies from: Lukas_Gloor, peter_hurford, Solitaire, MTGandP
comment by Lukas_Gloor · 2013-07-24T14:37:05.763Z · LW(p) · GW(p)

Why not choose both as long as this doesn't lead to unwanted side-effects? It gets interesting when the two are mutually exclusive. If it turns out that eating more meat reduces the number of wild animals that are suffering, then that would imo be the best argument against vegetarianism. It is hard to estimate what the effects of global warming will be on wild animal populations, though. And even if the argument goes through, I think the biggest benefit from raising the issue of vegetarianism comes from promoting concern for the interests/suffering of nonhumans. To the extent that current memes determine the trajectory of the far future, this would dominate over the direct impact of personal consumption.

Replies from: pianoforte611
comment by pianoforte611 · 2013-07-24T15:31:26.971Z · LW(p) · GW(p)

Why not choose both as long as this doesn't lead to unwanted side-effects?

Exactly my question. Why the concern over group A and almost no concern over group B?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-24T17:36:59.956Z · LW(p) · GW(p)

Lots of people care about the suffering of wild animals. The facebook group "reducing wild animal suffering" currently has 500+ members and many are part of the rationalist community.

Replies from: pianoforte611
comment by pianoforte611 · 2013-07-24T18:04:27.530Z · LW(p) · GW(p)

Thank you, this is news to me. The page is fairly non-descript though, do you know what sorts of measures they are taking to reduce animal suffering in the wild? Most of what I saw was actually only addressing human caused animal suffering.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-24T18:23:01.592Z · LW(p) · GW(p)

The general consensus is that at this stage, it's most important to raise awareness about wild animal suffering so future generations are likely to do something about the issue. This is done by spreading anti-speciesism and by countering the view that whatever is natural is somehow good or that nature "has a plan". It seems especially important to try to change the paradigm in ecology and conservation biology in order to focus more attention on the largest source of suffering on the planet. Some altruists also focus on this issue because of concerns about space colonisation: for instance, future humans might want to colonise the universe with Darwinian life or run ancestor simulations, which would be very bad from an anti-speciesist point of view.

Some imagined long-term solutions for the problem of wild animal suffering range from a welfare state for elephants to reprogramming predators to reducing biomass, but right now people are mainly trying to raise awareness for more intuitive interventions such as vaccinating wild animals against diseases (which is already done in some cases for the benefit of humans), not reintroducing predators to regions for human aesthetic reasons, and helping individual animals in distress as opposed to obeying the common anti-interventionist policies in wildlife parks.

Replies from: pianoforte611, davidpearce, Jonathan_Graehl
comment by pianoforte611 · 2013-07-25T02:57:05.983Z · LW(p) · GW(p)

Upvoted for specificity. I appreciate that the people in this movement are taking altruistic vegetarianism to its logical conclusion.

comment by davidpearce · 2013-07-27T11:26:08.061Z · LW(p) · GW(p)

Obamacare for elephants probably doesn't rank highly in the priorities of most lesswrongers. But from an anthropocentric perspective, isn't an analogous scenario for human beings - i.e. to stay free living but not "wild" - the most utopian outcome if the MIRI conception of an Intelligence Explosion comes to pass?

comment by Jonathan_Graehl · 2013-07-25T02:33:34.031Z · LW(p) · GW(p)

Be wary of Facebook groups whose consensus is "it's most important to promote awareness at this stage".

That said, I like the group/concept. It's interesting to ponder, and a welcome counterpart to "reduce farmed animal suffering".

Replies from: MTGandP
comment by MTGandP · 2013-07-25T03:12:24.448Z · LW(p) · GW(p)

Be wary of Facebook groups whose consensus is "it's most important to promote awareness at this stage".

I was just thinking about how I agree with you, but I realized that I don't know why. What's wrong with promoting awareness? Even though I find it intuitively unappealing, I think the reason why it's usually ineffective is because most interventions are ineffective. I don't see any other reason. Sometimes (e.g. when fundraising), promoting awareness is extremely effective.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2013-07-25T03:36:56.992Z · LW(p) · GW(p)

I don't know about you, but my explanation for being leery is: what Facebook groups do I expect to encounter? Answer: those that devote a large amount of effort to promoting themselves. (I also expect to encounter Facebook groups that are popular/worthy, but note that the anthropic reason I gave first applies no matter whether the group is actually good). Be skeptical of things that come to your attention through Facebook - at least beware privileging the hypothesis.

I agree that awareness promotion can be good, but another instinct tells me that Facebookers love to conclude that the best thing they can do is share/like/etc. - it's like finding the cheapest way possible to feel like a good person.

Replies from: MTGandP
comment by MTGandP · 2013-07-25T04:36:19.932Z · LW(p) · GW(p)

I agree that awareness promotion can be good, but another instinct tells me that Facebookers love to conclude that the best thing they can do is share/like/etc. - it's like finding the cheapest way possible to feel like a good person.

Yes, the "share/like/etc" phenomenon. I do think there's a big difference between "share this video because this will somehow help those child soldiers in some indefinite way" and "get more people to care about this issue, but also we have no idea how to actually fix it so we can't really recommend anything beyond that." Many supporters of reducing wild-animal suffering want to actually solve the problem, but it looks like the best way to do that is to bring the problem to the attention of more people who will potentially be able to help solve it.

It's a very different situation from, say, malaria, where we already know that donating to AMF is among the best things to do. But now that I think about it, a video promoting AMF that got popular on Facebook would probably elicit a lot of new donations.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2013-07-25T07:43:31.674Z · LW(p) · GW(p)

Sure, and if the purpose of a group is to reduce animal suffering and voluntary changes in individual consumption patterns are the most effective route, then the likes/shares are presumably accompanied by those people using fewer farmed animal products.

comment by Peter Wildeford (peter_hurford) · 2013-07-24T15:01:31.686Z · LW(p) · GW(p)

The reason is Group A seems more feasible to change at the moment. Though I am deeply interested in considerations of wild animal suffering as well. I don't see why you need to focus on one or the other.

Also, Group A at least has a clear action to take -- eating less meat. Group B does not have a clear action.

Replies from: pianoforte611
comment by pianoforte611 · 2013-07-24T15:28:45.966Z · LW(p) · GW(p)

I specified ignoring convenience. Is the lack of a clear action for Group B your true rejection? Would you actually try to minimize suffering in wild animals if you knew how to?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-07-24T21:14:26.765Z · LW(p) · GW(p)

I would definitely try to minimize suffering in wild animals if I knew how to. Would you?

And why would you ignore convenience?

Replies from: pianoforte611
comment by pianoforte611 · 2013-07-24T21:26:08.814Z · LW(p) · GW(p)

I'm interested in the intrinsic value of reducing suffering, which is why I posed the question. I wanted to know if you thought that the suffering of animals raised by humans is worse than the suffering of wild animals, all else being equal.

If you truly do care about the suffering of wild animals then I appreciate your consistency. I am not particularly bothered by fish getting eaten by sharks or zebras getting eaten by lions. I'm curious though, if you had sufficient resources, would you attempt to convert carnivorous animals to herbivores as well?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-07-24T22:04:24.259Z · LW(p) · GW(p)

I'm curious though, if you had sufficient resources, would you attempt to convert carnivorous animals to herbivores as well?

Yes. Predation seems quite painful. Wouldn't you agree?

Replies from: pianoforte611
comment by pianoforte611 · 2013-07-25T03:00:10.765Z · LW(p) · GW(p)

I think it is non-obvious that reducing predation is a worthwhile use of resources. I do appreciate your consistency in applying your altruistic principles though.

comment by Solitaire · 2014-01-05T23:03:58.482Z · LW(p) · GW(p)

Perhaps the ultimate rational position for the continued survival of humans and the reduction of suffering would be to have no animals at all and turn all available land mass over to trees for oxygen and the growing of crops for some kind of sustainably producible, nutritionally perfect food (perhaps a further developed version of the Soylent reference above), but pure rationality aside, don't we also value something that can't so easily be quantified about wild animals and the wild environment? I for one take great pleasure from the diversity of life exhibited on our planet. I would feel pretty depressed if I knew the future survival of life was predicated on such cold, unappealing utility alone.

Replies from: hyporational
comment by hyporational · 2014-01-06T09:41:11.237Z · LW(p) · GW(p)

don't we also value something that can't so easily be quantified about wild animals and the wild environment?

This is an interestingly common position (that I share) considering how little time people spend in nature. What exactly is it that I value, some vague idea about wildlife that can't be had without diverse wildlife existing somewhere out there? I like to watch nature documentaries, but I'm not sure what exactly I value in them.

Replies from: Solitaire
comment by Solitaire · 2014-01-06T12:24:22.947Z · LW(p) · GW(p)

I do agree with you that many people have a romanticised idea of the natural world that probably has little to do with the reality; they appreciate the polished, TV-friendly aesthetics of nature documentaries without actually spending much time beyond their urban boundaries. I come at it from the perspective of someone who grew up in the countryside and loves the feeling of being in wild places far more than in a city, so I suppose I see it differently. Personally, I find busy cities really bring me down and leave me yearning for space and greenery.

comment by MTGandP · 2013-07-24T23:03:53.756Z · LW(p) · GW(p)

This doesn't directly address your question, but I think it's relevant nonetheless. Here is an excellent article in The New York Times about reducing predation.

comment by [deleted] · 2013-08-01T08:45:53.720Z · LW(p) · GW(p)

I tried it in my youth, but because I was a picky eater and didn't really plan it well, I became anemic and the doctor told my parents to give me more meat.

I'm interested in starting it up again in the future, when I learn a bit more about cooking and everything.

The weird thing is that I already avoid almost all meats most of the time (I pretty much eat only fish), so I don't know how going the extra mile and cutting them out completely could have much of an effect...

Replies from: peter_hurford, MTGandP, Drayin, Mestroyer
comment by Peter Wildeford (peter_hurford) · 2013-08-01T13:23:40.775Z · LW(p) · GW(p)

I'm passing along advice I heard from a friend. I cannot vouch for its accuracy or my friend's expertise. Follow at your own risk:

If anaemia is the main stumbling block, all the major vegan protein sources are also high in iron: lentils, chickpeas, beans. Avoid spinach, since there seems to be a good chance it hinders absorption. Do, however, get vast amounts of Vitamin C, which facilitates absorption: eat an orange a day, squeeze fresh lemon juice into as many dishes as possible, and eat plenty of broccoli, which has respectable vitamin C and iron content.

comment by MTGandP · 2013-08-01T14:59:53.530Z · LW(p) · GW(p)

There are lots of resources on the Internet about veg health. Vegan Health is an informative website that's run by nutritionists who specialize in vegan diets. Here is their article on iron.

comment by Drayin · 2013-08-01T14:19:53.928Z · LW(p) · GW(p)

When I first went veg I became anemic. Now I take an iron pill daily, and that seems to fix the problem completely. I also eat a cereal that is high in iron (and any sort of vegan meat substitute is often fortified with iron as well).

comment by Mestroyer · 2013-08-01T11:11:35.885Z · LW(p) · GW(p)

Fish are smaller than most of the alternative animals, so eating fish means far more individuals killed per unit of meat. That oft-neglected individuals-to-meat ratio is larger than any reasonable ratio between the subjective probabilities that the animals in question are sentient.

There's also this.
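
A minimal back-of-the-envelope sketch of that ratio argument, in Python. Every number below is a made-up placeholder (the comment gives no figures); the point is only to show how the comparison works under stated assumptions:

```python
# Hypothetical illustration of the individuals-per-kg argument.
# All numbers are made-up placeholders, not data from the comment above.

animals = {
    # name: (kg of edible meat per individual, assumed probability of sentience)
    "cow": (200.0, 0.95),
    "chicken": (1.5, 0.90),
    "fish": (0.5, 0.60),
}

for name, (kg_per_individual, p_sentient) in animals.items():
    individuals_per_kg = 1.0 / kg_per_individual
    # Expected number of sentient individuals killed per kg of meat eaten.
    expected = individuals_per_kg * p_sentient
    print(f"{name}: {expected:.2f} expected sentient individuals per kg of meat")
```

Under these illustrative assumptions, even if fish are assigned a much lower probability of sentience than cows, the individuals-per-kg term dominates, which is the point being made.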

comment by [deleted] · 2013-07-24T01:25:47.443Z · LW(p) · GW(p)

You know those Chick-fil-A advertisements with the cows beseeching you to eat more chicken? The ironic thing is, if you eat more chickens, there will actually be more chickens in the world, and if you eat fewer cows, there will be fewer cows in the world. It's just supply and demand. The survival of cows and chickens is controlled by the farmers, who are profit-oriented. If it stops being profitable to raise cows for the slaughter, then cows won't be raised at all.

Or consider this: suppose everyone in the world right now switches to vegetarianism. All the cows and chickens on the farms will die. The farmers will have no incentive to feed them. They'll kill the cows for their leather and the chickens for their...I have no idea, and any animals they don't kill will be left to starve, with all their independent survival ability bred and raised out of them.

I would be willing to become vegetarian were it not for my belief that the only way to keep cows alive is to eat them. How do you speak to that concern?

Is it better never to have been born than to be born, raised in cruel conditions, and then slaughtered? The answer is not obvious to me.

Replies from: MTGandP, Lukas_Gloor, Xodarap
comment by MTGandP · 2013-07-24T01:31:53.917Z · LW(p) · GW(p)

There are some other comments in this thread to that effect. In short, it's worth keeping animals alive if their lives are worth living. In the case of animals on factory farms, their lives are so horrible that they're probably not worth living. To find more comments on this thread, ctrl-F "not worth living".

comment by Lukas_Gloor · 2013-07-24T03:24:30.117Z · LW(p) · GW(p)

If a being constantly wants to get out of its current state, i.e. if it lives in constant agony, how could that be preferable to non-existence? Maybe if there was an overriding will to live (installed perhaps by an evil AI programmer or by evolution) then one could attempt to make a case for this, but wouldn't such an unfortunate state of affairs still be bad for the being in question? When you talk about "cruel conditions", are you trying to imagine them vividly? Have you watched footage from factory farms? I'm just curious because I'm genuinely puzzled by how much people's intuitions can differ.

Should we all start eating mice/rats instead of cows if this increases the amount of animal sentience by several orders of magnitude?

Replies from: None
comment by [deleted] · 2013-07-24T08:42:40.436Z · LW(p) · GW(p)

I see now that the question of whether it is better never to be born than to be raised cruelly is a distraction and misleading. What I'm really trying to get at is, what happens to the animals after most people become vegetarians? The most obvious answer is that they all die, both because people will kill them to squeeze any remaining profits out of them that they can, and because people will stop trying to keep them alive. Even if humans keep a few cows and chickens in a zoo somewhere, it still looks like most of the species will die. How do advocates of vegetarianism address the problem of what you do with the animals after everybody becomes vegetarian? This question is what keeps me from becoming vegetarian.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-24T11:54:37.024Z · LW(p) · GW(p)

It depends on one's reasons for vegetarianism. Personally I'm vegetarian because it prevents suffering. I don't value a species, I value individuals. A species is just a categorization that cannot feel pain or pleasure. We can imagine a continuous line-up of daughter, mother, grandmother and so on, up to the point of the last common ancestor of humans and cows, and then forwards in time again to modern cows. Within that line-up, there would be thousands of species, and virtually all of them went extinct already. A common definition of "species" is that groups of animals belong to different species if they cannot have fertile offspring together. I don't see how this is a relevant criterion for awarding moral concern to species rather than individuals. And as for the individual cows, yes, they would die eventually (and then we might as well eat them), but so would they if we keep breeding more cows for food purposes, so I don't quite see the point.

Replies from: None
comment by [deleted] · 2013-07-24T12:39:24.672Z · LW(p) · GW(p)

In that case, what's your plan to prevent the suffering of all the animals that will die should too many people switch to vegetarianism?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-24T13:08:45.904Z · LW(p) · GW(p)

Those very animals will also die if people don't switch to vegetarianism, and then new animals will be bred and they will die too.

Replies from: None
comment by [deleted] · 2013-07-24T13:27:19.467Z · LW(p) · GW(p)

Yes, but it's bad. You're trying to stop animal death and suffering that is a product of our carnivorous habits by getting people to stop eating meat. But animals will also die and suffer as a result of us ceasing our carnivorous habits. What is your plan for preventing that?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-07-24T13:59:03.605Z · LW(p) · GW(p)

Fewer animals means fewer deaths and suffering. I don't need to solve every single problem in the universe if I want to do something good. Hopefully though, a future AI will be able to reengineer whole ecosystems so the sentient beings in them won't have the biological capacity for suffering anymore.

comment by Xodarap · 2013-07-24T13:18:39.030Z · LW(p) · GW(p)

I would be willing to become vegetarian were it not for my belief that the only way to keep cows alive is to eat them.

I think this is a great point, but it has the opposite conclusion. Agriculture is the leading cause of habitat loss and meat consumption causes more greenhouse gas emissions than the entire transportation sector.

If we want to keep animals from going extinct, we have to eat less meat.

Replies from: None
comment by [deleted] · 2013-07-24T13:32:15.102Z · LW(p) · GW(p)

Cows, chickens, and other animals whose existence is entirely dependent on humans will go extinct if we stop eating them.

Protecting animals who are simply in our way and have nothing to offer us, nor any ability to protect themselves (except the few we can enslave and use as fuel for our hordes), is a very hard problem, and "let's stop eating meat" is not a satisfactory answer. Nor is it even an obviously necessary start.

Replies from: Xodarap
comment by Xodarap · 2013-07-24T22:35:28.728Z · LW(p) · GW(p)

I'm not certain I understand. Are you saying that fewer species will go extinct if people eat meat? Or are you agreeing that being veg is the best way to preserve biodiversity, but that you don't care about biodiversity?

Replies from: smk, Vaniver
comment by smk · 2013-07-27T00:55:22.128Z · LW(p) · GW(p)

I don't particularly care about biodiversity, except if it offers some benefit to people. I suppose it might offer opportunities for increasing knowledge/understanding of biology/chemistry. Why do other people care about it?

comment by Vaniver · 2013-07-24T23:11:47.665Z · LW(p) · GW(p)

I'm not certain I understand. Are you saying that fewer species will go extinct if people eat meat?

The argument as I understand it is that profitable species are safeguarded like any other asset. If butterflies are disappearing for some reason, the response from most of society is a collective shrug. If honeybees are disappearing for some reason, the response from most of society is low-level anxiety, and several expert specialists devote significant time to understanding its cause and stopping it.

comment by [deleted] · 2013-09-04T16:40:30.479Z · LW(p) · GW(p)

I really don't get it. Why should I care about any suffering at all, in the first place? upd: Life is absurd and everybody dies, y'know?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-09-04T18:25:50.055Z · LW(p) · GW(p)

I don't think there's any argument I could give you to make you care. Though, I would suggest that you actually already do care at least about some suffering. And maybe you care about being consistent, and therefore are open to certain thought experiments? I'm not sure. I'm a moral anti-realist, or at least a moral externalist about moral motivation.

Replies from: None
comment by [deleted] · 2013-09-05T00:45:30.403Z · LW(p) · GW(p)

Yes, I do care naturally. But if I don't find any reason to, I'm going to try to suppress it as much as I can.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-09-05T02:22:11.400Z · LW(p) · GW(p)

That seems strange. Why then bother to do anything at all?

Replies from: None
comment by [deleted] · 2013-09-05T12:02:10.856Z · LW(p) · GW(p)

I don't know. The thing that sounds most plausible to me is that I like it, and I certainly don't like to think about the suffering of other creatures.

comment by Grant · 2013-07-24T05:52:57.202Z · LW(p) · GW(p)

I admit to being perplexed by this and some other pro-altruism posts on LW. If we're trying to be rationalists, shouldn't we come out and say: "I don't often care about others' suffering, especially of those people I don't know personally, but I do try and signal that I care because this signaling benefits me. Sometimes this signaling benefits others too, which is nice".

I agree everyone likely benefits from a society structured to reward altruism. We all might be in need of altruism one day. But there seems to be a disconnect between the prose of articles like this one and what I thought was the general rationalist belief that altruism in extended societies largely exists for signaling reasons.

Also, the benefits of altruism seem significantly less substantial when the targets are animals. Outside of personal experience animals are just unable to return any favors. If I save the lives of some children in Africa, I can hope those people contribute to the global economy and help make the world a better place for my children. Unfortunately the same cannot be said about my food.

I realize the article starts with the conditional statement "if one cares about suffering", so my comments above aren't really a critique. A more direct critique would be "who really cares about suffering?". If we only care about signaling altruism then I think we should just come out and say that.

I like animals and have owned many pets, but I do not care about the suffering of animals far outside my personal experience. If I was surrounded by people who cared about such things then I likely would learn to as well; to do otherwise would signal barbarism. I might also learn to care if I was interested in signaling moral superiority over my peers.

Replies from: Eliezer_Yudkowsky, Kaj_Sotala, peter_hurford
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-24T06:08:30.862Z · LW(p) · GW(p)

what I thought was the general rationalist belief that altruism in extended societies largely exists for signaling reasons.

That's, um, not a general rationalist belief.

Replies from: DanielLC
comment by DanielLC · 2013-07-26T04:54:42.794Z · LW(p) · GW(p)

More accurately, we evolved to be altruistic for signalling reasons. However, we don't really care why we evolved to be altruistic. We just care about others.

comment by Kaj_Sotala · 2013-07-24T08:13:43.659Z · LW(p) · GW(p)

I don't often care about others' suffering, especially of those people I don't know personally, but I do try and signal that I care because this signaling benefits me

Remember the evolutionary-cognitive boundary. "We have evolved behaviors whose function is to signal altruism rather than to genuinely cause altruistic behavior" is not the same thing as "we act kind-of-altruistically because we consciously or unconsciously expect it to signal favorable things about us".

If you realize that evolution has programmed you to do something for some purpose, then embracing that evolutionary goal is certainly one possibility. But you can also decide that you genuinely care about some other purpose, and use the knowledge about yourself to figure out how to put yourself in situations which promote the kind of purpose that you prefer. Maybe I know that status benefits cause me to more reliably take altruistic action than rational calculation about altruism does, so I seek to put myself in communities where rationally calculating the most altruistic course of action and then taking that action is high status. And I also try to make this more high status in general, so that even people who genuinely only care about the status benefits end up taking altruistic actions.

Note that choosing to embrace most kinds of selfishness is no less arbitrary and going against evolution's goals than choosing to embrace altruism. What evolution really cares about is inclusive fitness: if you're going by the "oh, this is what evolution really intended you to do" route, then for the sake of consistency you should say "oh, evolution really intended us to have lots of surviving offspring, so I should ensure that I make regular egg/sperm donations and also get as many women as possible pregnant / spend as much time as possible being pregnant myself".

Most people don't actually want that, no matter what evolution designed us to do. So they rather choose to act selfishly, or altruistically, or some mixture of the two, or in some way that doesn't really map to the selfish/altruistic axis at all, and if that seems to go beyond the original evolutionary purposes of the cognitive modules which are pushing us in whatever direction we do end up caring about, then so what?

And of course, talking about the "purpose" or "intention" of evolution in the first place is anthropomorphism. Evolution doesn't actually care about anything, and claims like "we don't really care about altruism" are only shorthand for "we come equipped with cognitive modules which, when put in certain situations, push us to act in particular ways which - according to one kind of analysis - do not reliably correlate with the achievement of altruistic acts while more reliably correlating with achieving status; when put in different situations, the analysis may come out differently". That's a purely empirical fact about yourself, not one which says anything about what you should care about.

Replies from: Grant
comment by Grant · 2013-07-24T16:54:02.675Z · LW(p) · GW(p)

Thank you for the explanation. I was trying to play the devil's advocate a bit and I didn't think my comment would be well-received. I'm glad to have gotten a thoughtful reply.

Thinking about it some more, I was not meaning to anthropomorphize evolution, just to point out Homo hypocritus. For any particular value a person holds, we have:

  • What they tell people about it.
  • How they act on it.
  • How they feel about it.

I feel bad about a lot of suffering (mostly that closest to me, of course). However, it's not clear to me that what I feel is any more "me" than what I do or what I say.

Most everyone (except psychopaths) feels bad about suffering, and tells their friends the same, but they don't do much about it unless it's close to their personal experience. Evolution programmed us to be hypocritical. However, in this context it's not clear to me why we'd choose to act on our feelings rather than make our feelings match our actions (i.e., stop caring about distant non-cute animals), or why we'd choose to stop being hypocritical at all. We have lots of examples throughout history of large groups of people ceasing to care about the suffering of certain groups, often due to social pressures. I think the tide can swing both ways here.

So I have trouble seeing how these movements would work without social pressures and appeals to self-interest. I guess there's already a lot of pro-altruism social pressure on LW?

Edit: as a personal example, I feel more altruistic than I act, and act more altruistic than I let on to others. I do this because I've only gotten disutility from being seen as a nice guy, and have refrained from a lot of overt altruism because of this. I think I'd need a change in micro-culture to change my behavior here; appeals to logic aren't going to sway me.

Replies from: army1987, Solitaire
comment by A1987dM (army1987) · 2013-07-26T19:29:06.293Z · LW(p) · GW(p)

I've only gotten disutility from being seen as a nice guy

Any examples?

Replies from: Grant
comment by Grant · 2013-08-05T03:19:04.646Z · LW(p) · GW(p)

"Only" was a gross exaggeration. I'm not sure why I typed it.

I think my examples are pretty typical though. Charitable people get lobbied by people who want charity. This occurs with both personal and extended charity. In my case it means getting bugged into spending more time on other people's technical problems (e.g. open-source software projects) than I'd like.

I haven't contributed to many charities, but the ones I have seem to have put me on mailing and call lists. I also once contributed to a political candidate for his anti-war stance, and have been rewarded with political spam ever since. I'm not into politics at all, so it's rather unwelcome.

comment by Solitaire · 2014-01-06T13:40:02.487Z · LW(p) · GW(p)

Most everyone (except psychopaths) feels bad about suffering, and tells their friends the same, but they don't do much about it unless it's close to their personal experience.

I'm not sure how much truth there is in this generalisation. Countless environmental activists, conservationists, and humanitarian workers across the globe willingly give their time and energy to causes that have little or nothing to do with satisfying their own local needs or wants. Whilst they may not be in the majority, they are nevertheless a significant minority. I doubt many of them would be happy to be told they are only 'signalling altruism' to appear better in the eyes of their peers.

On the other hand, I suppose you could argue the case that such people have X-altruistic personalities and that perhaps that isn't a desirable quality in terms of creating a hypothetical perfect society.

comment by Peter Wildeford (peter_hurford) · 2013-07-24T06:16:34.703Z · LW(p) · GW(p)

Sometimes, pleas for altruism are exactly what they seem. Not everything is a covert attempt at signaling. Trying to say that altruism is not serving self-interested reasons is kind of missing the point.

comment by Zaine · 2013-07-24T08:14:37.497Z · LW(p) · GW(p)

Obtaining optimal health is an unsolved problem. With optimal health, a human will live longer. This human weights probably sentient life as worth more than probably non-sentient life. According to this human's values, the amount of probably non-sentient life this human must consume in order to obtain optimal health does not justify that consumption in and of itself. As a human will live longer with optimal health, this human also has more time they can devote to offsetting their consumption, in the end making their human life worth more, on net, than the cumulative probably non-sentient lives consumed in sustaining optimal health.

The more resources required for optimal health, the greater the burden on the human to offset the negative externalities produced by utilising those resources.

Replies from: peter_hurford, Zaine
comment by Peter Wildeford (peter_hurford) · 2013-07-24T08:46:48.529Z · LW(p) · GW(p)

I'm confused about what you're saying.

If what I think you're saying is what you're saying, then I disagree with you that either (1) nonhuman animals are probably non-sentient or (2) sentience shouldn't matter, depending on what you meant by "sentient".

I also think that vegetarianism cannot provide optimal health (but so can a diet that involves meat, as can veganism).

Replies from: Zaine
comment by Zaine · 2013-07-24T09:41:08.668Z · LW(p) · GW(p)

For item 1, that's fine.

I'm only presenting an argument from the perspective of one who wants to live well and longer, but also wishes to leave a positive impact upon the world; my goal was to raise concerns someone from this mindset would like to see addressed, but I ended up arguing (perhaps repugnantly) in favour of the mindset instead.
Let me know if that doesn't help clear confusion.

Probably non-sentient lives are not limited to non-human animals; they include marine and plant life as well, and, on extreme interpretations, even human animals.

For item 2, sentience means self-awareness, and refers to the distinction between, for example, depression caused by mere neuro-adaptation of neurotransmitter signalling to external stimuli, and a depressive state furthered by the ability to reflect upon one's depressive situation - internal stimuli.

You might have a typo in the latter-most statement.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-07-24T15:04:01.761Z · LW(p) · GW(p)

I'm sorry, I'm still confused.

1.) Do you think nonhuman animals can suffer? If not, why not?

2.) If yes to #1, do you think that suffering is something you might care about? If not, why not?

Replies from: Zaine
comment by Zaine · 2013-07-24T21:30:42.368Z · LW(p) · GW(p)

1) The question is whether they can experience the subjective realisation of, "Because of this situation, I am experiencing negative emotions. I dislike this situation, but there is no escape," and thus increase their suffering by adding negative internal stimuli - appreciation and awareness of their existence - to already existing negative external stimuli. This is a stricter condition some may have for caring about other creatures to an inconvenient degree. For a fictional example, Methods!Harry refused to eat anything when he considered the possibility that all other life is sentient. To be charitable, assume he is aware that pinching a rabbit's leg will trigger afferent nociceptive (pain) neurons, which will carry a signal to the brain, leading to the experience of pain. Your cited research demonstrates this. It does not demonstrate, however, whether the subject has the awareness to reflect upon the factors that contribute to their suffering, such that their reflection can contribute to it by adding further negative stimuli - negative stimuli generated only by that organism's selfsame reflection. Causing misery to a probably non-sentient creature did not give Methods!Harry hesitation, but causing misery to a probably sentient creature did; hopefully this helps elucidate the mindset of one subscribing to this stricter condition of care.

2) If a human considers that they themselves satisfy the above condition, then they will be more inclined to attribute more worth to fellow humans than to other creatures of dubious status. That said, they will still realise that misery is not a pleasant experience regardless of one's capacity for self-reflection, and should be prevented and stopped if possible. One must thus argue to this person that it should behove their moral selves to exert effort towards mitigating or decreasing that misery, and that the exertion will not detract from this person's endeavours to reduce the misery of humans.

This person cares more about optimising the good they can achieve while living, which leads them to take pains to live longer; the longer they live, the more good they can achieve. One must convince this person either that non-human animals have the capacity for self-reflection to the degree specified above, or that caring about the misery of non-human animals and acting upon that care does not adversely affect their net ability to introduce good to the world; id est, in the latter condition, acting upon that care must not adversely affect this person's lifespan, quality of life, or capacity to help humans, or must only do so by a small enough margin to justify the sacrifice.

These are things I think a rational agent making a comfortable salary should think about, assuming they desire to optimise the quantity of good they effect in the world. To someone whose objective is convincing the masses to do the most good they possibly can, this doesn't matter, as arguing for both vegetarianism and giving substantial sums to the AMF poses a potential conflict only for the party seeking optimal quality of life and the greatest possible lifespan.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-07-25T16:24:19.106Z · LW(p) · GW(p)

To be charitable, assume he is aware that pinching a rabbit's leg will trigger afferent nociceptive (pain) neurons, which will carry a signal to the brain, leading to the experience of pain. Your cited research demonstrates this. It does not demonstrate, however, whether the subject has the awareness to reflect upon the factors that contribute to their suffering, such that their reflection can contribute it by further adding negative stimuli, negative stimuli that is generated only by that organism's selfsame reflection.

To be fair, you can't demonstrate this for any human either. That's the problem with consciousness.

Replies from: Zaine
comment by Zaine · 2013-07-25T21:44:48.759Z · LW(p) · GW(p)

Naturally; we're working from the same fabric.

comment by Zaine · 2013-07-24T08:19:05.972Z · LW(p) · GW(p)

If optimal health requires strict consumption of only sea vegetables and coconut oil, one must offset the resources required for their sustainable, scalable harvesting. If optimal health requires eating meat procured from animals eating only their native food sources in their native habitat, killed while their hunter whispers sweet nothings and severs their vertebrae at the nape with a swift, sure, and gentle strike, one must offset the costs required to make the operation sustainable, scalable, and global-warming-friendly - perhaps by inventing meat vats, solving global warming, or discovering a means of feasible space colonisation.

comment by shminux · 2013-07-23T22:41:02.946Z · LW(p) · GW(p)

I hope that we can have a sober, non mind-killing discussion about this topic, since it’s possibly quite important.

You are not setting a good example by staking a position and then finding supporting arguments. The rest of your post is also rife with cognitive biases. Quite disappointing, really.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-07-23T22:51:08.825Z · LW(p) · GW(p)

You are not setting a good example by staking a position and then finding supporting arguments.

That's not what I did. Why do you think I did that?

The rest of your post is also rife with cognitive biases.

Name three.

comment by DxE · 2013-07-24T16:32:00.666Z · LW(p) · GW(p)

The extended discussion here is unnecessary. Violence against helpless children is a very simple issue. And it is wrong. Period.

Anyone who says otherwise is:

  • thoughtlessly parroting a bigoted culture;
  • a monster; or
  • both.

Replies from: Articulator
comment by Articulator · 2013-07-24T17:28:03.733Z · LW(p) · GW(p)

I'm a nihilist. Where do I fall on your hopelessly constrained list?