Ethical Diets

post by pcm · 2015-01-12T23:38:33.864Z · LW · GW · Legacy · 23 comments


[Cross-posted from my blog.]

I've seen some discussion of whether effective altruists have an obligation to be vegan or vegetarian.

The carnivores appear to underestimate the long-term effects of their actions. I see a nontrivial chance that we're headed toward a society in which humans are less powerful than some other group of agents. This could result from slow AGI takeoff producing a heterogeneous society of superhuman agents. Or there could be a long period in which the world is dominated by ems before de novo AGI becomes possible. Establishing ethical (and maybe legal) rules that protect less powerful agents may influence how AGIs treat humans or how high-speed ems treat low-speed ems and biological humans [0]. A one in a billion chance that I can alter this would be worth some of my attention. There are probably other similar ways that an expanding circle of ethical concern can benefit future people.

I see very real costs to adopting an ethical diet, but it seems implausible that EAs are merely choosing alternate ways of being altruistic. How much does it cost MealSquares customers to occasionally bemoan MealSquares' use of products from apparently factory-farmed animals? Instead, it seems like EAs have some tendency to actively raise the status of MealSquares [1].

I don't find it useful to compare a more ethical diet to GiveWell donations for my personal choices, because I expect my costs to be mostly inconveniences, and the marginal value of my time seems small [2], with little fungibility between them.

I'm reluctant to adopt a vegan diet due to the difficulty of evaluating the health effects and due to the difficulty of evaluating whether it would mean fewer animals living lives that they'd prefer to nonexistence.

But there's little dispute that most factory-farmed animals are much less happy than pasture-raised animals. And everything I know about the nutritional differences suggests that avoiding factory-farmed animals improves my health [3].

I plan not to worry about factory-farmed invertebrates for now (shrimp, oysters, insects), partly because some of the harmful factory-farm practices, such as confining animals to cages not much bigger than the animals themselves, aren't likely with animals that small.

So my diet will consist of vegan food plus shellfish, insects, wild-caught fish, pasture-raised birds/mammals (and their eggs/whey/butter). I will assume vertebrate animals are raised in cruel conditions unless they're clearly marked as wild-caught, grass-fed, or pasture-raised [4].

I've made enough changes to my diet for health reasons that this won't require large changes. I already eat at home mostly, and the biggest change to that part of my diet will involve replacing QuestBars with a home-made version using whey protein from grass-fed cows (my experiments so far indicate it's inconvenient and hard to get a decent texture). I also have some uncertainty about pork belly [5] - the pasture-raised version I've tried didn't seem as good, but that might be because I didn't know it needed to be sliced very thin.

My main concern is large social gatherings. It has taken me a good deal of willpower to stick to a healthy diet under those conditions, and I expect it to take more willpower to observe ethical constraints.

A 100% pure diet would be much harder for me to achieve than an almost pure diet, and it takes some time for me to shift my habits. So for this year I plan to estimate how many calories I eat that don't fit this diet, and aim to keep that less than 120 calories per month (about 0.2%) [6]. I'll re-examine the specifics of this plan next Jan 1.
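The "about 0.2%" figure is easy to verify under an assumption the post doesn't state: a typical intake of roughly 2,000 calories per day. A minimal sketch of the arithmetic, with that baseline as my own assumption:

```python
# Rough sanity check of the "about 0.2%" figure, assuming a typical
# ~2,000-calorie daily intake (an assumed baseline, not stated in the post).
daily_calories = 2000
monthly_calories = daily_calories * 30   # ~60,000 calories per month
off_diet_budget = 120                    # allowed off-diet calories per month
fraction = off_diet_budget / monthly_calories
print(f"{fraction:.1%}")  # prints 0.2%
```

At a lower or higher baseline intake the percentage shifts proportionally, but stays in the same ballpark.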

Does anyone know a convenient name for my planned diet?

 

footnotes

 

0. With no one agent able to conquer the world, it's costly for a single agent to repudiate an existing rule. A homogeneous group of superhuman agents might coordinate to overcome this, but with heterogeneous agents the coordination costs may matter.

1. I bought 3 orders of MealSquares, but have stopped buying for now. If they sell a version whose animal products are ethically produced (which I'm guessing would cost $50/order more), I'll resume buying them occasionally.

2. The average financial value of my time is unusually high, but I often have trouble estimating whether spending more time earning money has positive or negative financial results. I expect financial concerns will be more important to many people.

3. With the probable exception of factory-farmed insects, oysters, and maybe other shellfish.

4. In most restaurants, this will limit me to vegan food and shellfish.

5. Pork belly is unsliced bacon without the harm caused by smoking.

6. Yes, I'll have some incentive to fudge those estimates. My experience from tracking food for health reasons suggests possible errors of 25%. That's not too bad compared to other risks such as lack of willpower.

23 comments


comment by Larks · 2015-01-13T02:17:15.324Z · LW(p) · GW(p)

While I think trying to set up equilibria that will be robust against a multipolar takeoff is interesting, I don't think your example is a conclusion you would come to if you weren't already concerned about animal rights.

Much more plausible is that we should begin strictly enforcing property rights. Any level of redistribution could result in all wealth held by flesh-and-blood humans being 'redistributed' to uploads, in light of their huge numbers.

Replies from: pcm, freeze, None
comment by pcm · 2015-01-13T18:45:41.456Z · LW(p) · GW(p)

This post is definitely an attempt to answer the question 'What should I eat?', not "What's the best thing I can do about multipolar takeoff?". I didn't mean to imply that my concerns over multipolar takeoff are the only reason for my change in diet. I focused on that because others have given it too little attention.

I would certainly like to do more to increase respect for property rights, but the obvious approaches involve partisan politics that already attract lots of effort on both sides.

comment by freeze · 2015-09-03T15:32:01.112Z · LW(p) · GW(p)

I don't think your example is a conclusion you would come to if you weren't already concerned about property rights.

comment by [deleted] · 2015-01-13T04:12:02.704Z · LW(p) · GW(p)

Eliminating redistribution to ems will have little impact. As long as labor has a significant value which can be used to purchase capital (i.e., money), ems will be able to produce so much more labor than humans that they will quickly grow to dominate society. They don't need our charity to crush us like bugs.

Replies from: Larks
comment by Larks · 2015-01-15T02:53:46.644Z · LW(p) · GW(p)

If humans kept significant wealth we could live off the interest. There's a big difference between 'no longer dominate society' and 'all die of starvation after having our wealth stripped from us'.

comment by MrMind · 2015-01-13T08:04:29.177Z · LW(p) · GW(p)

If the way an AGI treats us would depend upon the way we treat animals, the problem of a Friendly AI would already be partially solved. But there's no reason to think it will: if you don't want an AI to treat you the way you treat a cow, then don't program it that way.

Replies from: pcm, freeze
comment by pcm · 2015-01-13T19:08:12.433Z · LW(p) · GW(p)

If you're certain that the world will be dominated by one AGI, then my point is obviously irrelevant.

If we're uncertain whether the world will be dominated by one AGI or by many independently created AGIs whose friendliness we're uncertain of, then it seems like we should both try to design them right and try to create a society where, if no single AGI can dictate rules, the default rules for AGI to follow when dealing with other agents will be ok for us.

comment by freeze · 2015-09-03T15:34:46.685Z · LW(p) · GW(p)

You seem to allude to the fact that it really isn't that easy. In fact, if it is truly an AGI then by definition we can't just box in its values in that way/make one arbitrary change to its values.

Instead, I would say if you don't want an AI to treat us like we treat cows, then just stop eating cow flesh/bodily fluids. This seems a more robust strategy to shape the values of an AI we create, and furthermore it prevents an enormous amount of suffering and improves our own health.

comment by ZankerH · 2015-01-13T07:06:05.511Z · LW(p) · GW(p)

The amount of suffering introduced by factory-farming is entirely negligible compared to the amount of wild-animal suffering that's been taking place as long as life has existed, continues to take place, and will continue to take place unless we cause a wholesale extinction of the Earth's biosphere.

Unless you're prepared to eradicate animal life, no personal choice you ever make will have a meaningful impact on the amount of suffering in the universe.

Replies from: DanielFilan, Gunnar_Zarncke, freeze
comment by DanielFilan · 2015-01-13T09:56:13.069Z · LW(p) · GW(p)

Your second sentence doesn't follow from the first. Just because there is an enormous amount of suffering in the world doesn't mean that you can't alleviate a meaningful amount. The only way this is true is if by "meaningful" you mean as a proportion of the total amount of suffering, which doesn't really make sense - the fact that others are suffering doesn't make a good act any less good.

Replies from: ZankerH
comment by ZankerH · 2015-01-13T11:41:19.666Z · LW(p) · GW(p)

That's only the case if you care about individual animals, as opposed to animal suffering in general.

comment by Gunnar_Zarncke · 2015-01-13T22:14:53.528Z · LW(p) · GW(p)

I followed up on that link and found it extremely interesting and thorough.

But my first impression was that it was a very sophisticated attempt to take the animal rights movement apart from the inside, by using its own arguments to drive it to its absurd logical conclusion. The longer I read, the more I realized it was meant in dead seriousness. Someone really means to apply empathy (which, after all, is an evolved trait) to its utmost extreme. Kind of like an AI that had no other values would.

I posted something like this as a risk of Unfriendly Natural Intelligence earlier but wouldn't have guessed at how far it really can be taken.

There are a few sentences where one could think that he should see the outside view:

we should also remember that many other humans value wilderness, and it’s good to avoid making enemies or tarnishing the suffering-reduction cause by pitting it too strongly opposed to other things people care about.

evolutionary pressure pushes prey species to avoid drawing attention to their suffering and to pretend as though nothing is wrong

But it doesn't hit home. I can almost hear him answering the question "but you evolved to feel empathy too" with the UFAI's answer "yes, I know, but nonetheless the suffering must be reduced" (and an actual suffering-reducing UFAI would probably go on to eradicate all life, as that minimizes suffering: lower life forms have net-negative lives and dominate the sum, so killing them is necessary; that the humans have to go too, because of lack of food etc., is only a minor term).

comment by freeze · 2015-09-03T15:37:49.329Z · LW(p) · GW(p)

Not necessarily. https://xkcd.com/1338/

If you assume that suffering is roughly proportional to number of neurons, then you should care disproportionately about mammal suffering, or even large animals in general; most animals are wild, but they are mostly insects which don't necessarily experience as much suffering each.

comment by James_Miller · 2015-01-13T00:27:19.236Z · LW(p) · GW(p)

Genetic experimentation on cows gives them super-human intelligence, and they quickly come to rule the earth. Alas, these cows' moral beliefs cause them to care about human welfare as much as the average human cares about cows. I hope these cows develop a taste for human flesh and milk so they have an incentive to keep lots of us around.

Replies from: DanielLC
comment by DanielLC · 2015-01-13T00:48:20.157Z · LW(p) · GW(p)

If they were nice enough to give us the internet or something, I'd agree, but if I'm going to spend my whole life crammed up against other people with nothing to do, I think I'd prefer not to live.

Replies from: fubarobfusco
comment by fubarobfusco · 2015-01-13T02:10:33.424Z · LW(p) · GW(p)

I think you've described an increasing number of office environments ...

Replies from: DanielLC
comment by DanielLC · 2015-01-13T02:17:05.519Z · LW(p) · GW(p)

You don't spend your whole life there. And they do give you something to do. Also, they give you internet.

Replies from: None
comment by [deleted] · 2015-01-13T04:31:25.245Z · LW(p) · GW(p)

I don't think s/he was serious.

comment by [deleted] · 2015-01-15T18:57:05.140Z · LW(p) · GW(p)

Does anyone know a convenient name for my planned diet?

Orthorexia, the belief that in addition to nutrient values food also contains ethical values. An orthorexic says humans should or should not eat certain things in certain ways because of rights; rights of the eater, rights of the eaten. That rights exist and are known and are universal and can be acted on are all considered axiomatic. Orthorexia can be religious (Genesis 1:30, Quran Chapter 5 Verse 96, Anguttara Nikaya 3.38 Sukhamala Sutta, etc.) and it can be secular (as appears in your case).

Orthorexia holds as axiomatic (1) that the emotions of the individual are the same emotions that other humans feel, and (2) that universal human emotions are shared by some or all other living beings. By the first axiom, what is good for the individual is good for others and what is bad for the individual is bad for others. But the good of the individual does not necessarily apply to others; to claim so is a category error. That which is good for me is that which is good for me. By the second axiom, the unwarranted bridging of the emotions of the individual to others is expanded to the non-human. But the good of humans, even if known and actionable, would not necessarily and naturally be the good of non-humans. I for one would push the 'destroy all human-preying tapeworms' button if given the chance. Bad for all human-preying tapeworms, good for this human.

Orthorexia claims that decisions about food have an influence outside the eater and the eaten, but orthorexia is selective about how far that influence is measured. In the long run - the long, long, long run - it is not possible for any (or all) humans to eat (or not eat) anything and have it influence things more or less than any other choices made in an infinite universe. Given enough time and space (and in an infinite universe that's what we have), all possibilities occur. Orthorexia measures the influence of food choices out far enough to get the desired positive or negative reading then stops.

It is a fine thing to have ideas about what one does and does not eat, and to act on those ideas, and to put those ideas before others for criticism. Orthorexia looks for foundations that are not there and outcomes that do not appear. I suggest it might be more simple, accurate and honest to say what one prefers or dislikes and leave it at that.

I offer a grand prize to the first or loudest reply that misrepresents the above in the following way: I claim rights (in food and elsewhere) do not exist, therefore I claim I have the right (in food and elsewhere) to do something (eat this or that, kill this or that, etc.). Example: 'you say animals have no right to not be eaten, so you must be saying you have the right to eat them.' That is false, but a grand prize is offered for making that misrepresentation.

comment by Zarm · 2017-06-26T22:58:40.532Z · LW(p) · GW(p)

Eat less meat.

comment by Agathodaimon · 2015-01-16T23:50:19.866Z · LW(p) · GW(p)

It is better for one's health, and for the planet in terms of emissions, to eat less meat as well.

comment by keflexxx · 2015-01-13T10:59:36.534Z · LW(p) · GW(p)

there's no convenient name unfortunately. you could more or less call it paleo, but paleo is more concerned with the health benefits for the human than the health benefits for the animal.

comment by [deleted] · 2015-01-13T04:30:17.628Z · LW(p) · GW(p)

Your writing style feels very disjointed. You keep switching arguments without making it clear when you're starting from a new direction. With that said, I've had similar thoughts to paragraph 2 (the superintelligence section) before. Indeed, slaughtering an animal is such a huge ethical decision that I think it relates to a ton of other major moral decisions; how superintelligences will react to us is just one such example.

To reverse that argument, how should we treat a superintelligence? The huge separation most people create between animals and man implies we should treat them with almost reverence, but the widespread belief in human equality implies almost no distinction between us (this is my view). Strangely, I suspect a ton of people believe we should treat them as beneath humans. People seem to be more convinced by "the AI-box wouldn't work" arguments than "the AI-box amounts to slavery" arguments. I'm not sure how to make this fit.

My solution: As intelligence increases, so too does the danger posed, and therefore SIs should have significantly fewer rights. However, this decrease is so gradual that no natural human will ever require a curtailing of their rights. Later genetically engineered and cybernetically enhanced humans will be a different story. I don't believe this at all, so I'm open to alternatives. Does anybody here believe SIs should have fewer rights than humans?