Link: Rob Bensinger on Less Wrong and vegetarianism
post by Sysice · 2014-11-13T17:09:11.021Z · LW · GW · Legacy · 77 comments
I'm currently unconvinced either way on this matter. However, enough arguments have been raised that I think it is worth every reader's time to think it over carefully.
http://nothingismere.com/2014/11/12/inhuman-altruism-inferential-gap-or-motivational-gap/
Comments sorted by top scores.
comment by Salemicus · 2014-11-17T17:13:02.477Z · LW(p) · GW(p)
In the comments of his post, RobbBB claimed that by going vegetarian, you'll cause 1500 fewer animals to be killed than otherwise. Regardless of the exact number, it strikes me that this is a highly tendentious way of putting the issue. It would surely be more accurate to say that by going vegetarian, you will cause 1500 fewer animals to be born than otherwise.
It is wrong to call a utilitarian argument for vegetarianism "air-tight" when it doesn't even consider this point.
comment by Punoxysm · 2014-11-13T20:04:25.754Z · LW(p) · GW(p)
I don't see why you're getting downvoted.
I am strongly convinced by arguments for vegetarianism.
I mean, I still eat meat but that's just because of my moral decrepitude.
↑ comment by DanielFilan · 2014-11-13T22:54:19.536Z · LW(p) · GW(p)
It is likely easier than you think to cut out meat and other animal products from your diet. When I went vegan, it basically involved changing from one set of tasty dishes to another, and I don't think I lost out much from a taste perspective (that being said, I did this at the same time as moving out of catered university accommodation, so possibly YMMV). Here is a website which purports to give you all the knowledge you need to make the transition. This is something that you can start doing today, and I urge you to do so.
↑ comment by deskglass · 2014-11-15T01:22:43.828Z · LW(p) · GW(p)
Exactly. I suspect a disproportionate share of people on LW agree that their eating habits are immoral, but eat the way they do anyway and are willing to indirectly be a part of "torturing puppies behind closed doors." That is, they are more likely to be honest with themselves about what they are doing, but aren't that much more likely to care enough to stop (which is different from being "morally indifferent").
comment by Manfred · 2014-11-13T22:27:06.670Z · LW(p) · GW(p)
All the work is done in the premises - which is a bad sign rhetorically, but at least a good sign deductively. If I thought cows were close enough to us that there was a 20% chance that hurting a cow was just as bad as hurting a human, I would definitely not want to eat cows.
Unfortunately for cows, I think there is an approximately 0% chance that hurting cows is (according to my values) just as bad as hurting humans. It's still bad - but its badness is a much smaller number that is a function of my upbringing, cows' cognitive differences from me, and the lack of overriding game-theoretic concerns as far as I can tell. I don't think of cows as "mysterious beings with some chance of being Sacred," I think of them as non-mysterious cows with some small amount of sacredness.
↑ comment by Shmi (shminux) · 2014-11-14T00:04:19.684Z · LW(p) · GW(p)
I don't even know what 20% means in this context. That 5 cows = 1 person? Probably not even a rabid vegan would claim that.
↑ comment by Manfred · 2014-11-14T02:19:00.318Z · LW(p) · GW(p)
Pretty sure the unpacking goes like "I think it is 20% likely that a moral theory is 'true' (I'm interpreting 'true' as "what I would agree on after perfect information and time to grow and reflect") in which hurting cows is as morally bad as hurting humans."
↑ comment by Shmi (shminux) · 2014-11-14T04:11:03.693Z · LW(p) · GW(p)
Right, sure. But does it not follow that, if you average over all possible worlds, 5 cows have the same moral worth as 1 human?
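A minimal sketch of the expected-value arithmetic being gestured at here; the 20%/80% split, and the assumption that cows count for nothing in the other worlds, are purely illustrative:

```python
# Illustrative only: expected moral weight of one cow under the 20%/80%
# moral-uncertainty split discussed above (all numbers are assumptions).
p_cows_matter_like_humans = 0.2   # chance the "cows count as much as humans" theory is right
weight_if_true = 1.0              # a cow counts as much as a human in that case
weight_otherwise = 0.0            # illustrative: a cow counts for nothing otherwise

expected_weight = (p_cows_matter_like_humans * weight_if_true
                   + (1 - p_cows_matter_like_humans) * weight_otherwise)

print(expected_weight)            # 0.2 -> in expectation, 5 cows ~ 1 human
```

Note that this equates 5 cows with 1 human only in expectation; as Jiro points out further down, the all-or-nothing structure of the uncertainty changes how the outcomes actually distribute.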
↑ comment by MrMind · 2014-11-17T09:16:58.405Z · LW(p) · GW(p)
I personally know at least one rabid vegan for whom 1 cow > 1 person.
↑ comment by jefftk (jkaufman) · 2014-11-17T19:42:02.651Z · LW(p) · GW(p)
Why ">" and not "="? Is this true for other animals too or are cows special?
↑ comment by Raemon · 2014-11-18T05:02:54.900Z · LW(p) · GW(p)
Tentative guess: Humans are considered to have negative value because (among other things) they kill cows (carbon footprint, etc.).
Also they might just not be rational.
↑ comment by Lumifer · 2014-11-18T05:21:44.224Z · LW(p) · GW(p)
Humans are considered to have negative value
Kill them all.
↑ comment by Raemon · 2014-11-18T05:39:32.556Z · LW(p) · GW(p)
I've seen it argued.
↑ comment by Lumifer · 2014-11-18T05:46:03.645Z · LW(p) · GW(p)
Notably by Agent Smith from the Matrix.
People who argue this can start with themselves.
↑ comment by Raemon · 2014-11-18T05:50:51.562Z · LW(p) · GW(p)
I think there's a pretty solid case for that being a non-optimal solution, even if you've bought all their other premises. (There aren't enough of them for a single suicide or even mass suicide to inspire other people to follow, and then they'd just lose the long-term memetic war.)
↑ comment by MrMind · 2014-11-18T08:16:58.078Z · LW(p) · GW(p)
Well... the example went like this: "If there was a fire and I was given the option of saving just the cow or just the person, I would save the cow". Presumably it would be the same with a pig or a dog.
This is a transposed version of the trolley problem: 'I would not actively kill any human, but given the choice, I consider a cow to be more valuable'.
The motivating reason was something along the lines of "humans are inherently evil, while animals are incapable of evil".
↑ comment by dthunt · 2014-11-20T14:56:22.469Z · LW(p) · GW(p)
Well, how comparable are they, in your view?
Like, if you'd kill a cow for $10,000 (which could save a number of human lives), but not fifty million cows for $10,000, you evidently see some cost associated with cow-termination. If you, when choosing methods, could pick between methods that induced lots of pain versus methods that instantly terminated the cow-brain, and had a strong preference for the less painful methods (assuming they're just as effective), then you clearly value cow-suffering to some degree.
The reason I went basically vegan is I realized I didn't have enough knowledge to run that calculation, but I was fairly confident that I was ethically okay with eating plants, sludges, and manufactured powders, and most probably the incidental suffering they create, while I learned about those topics.
I am basically with you on the notion that hurting a cow is better than hurting a person, and I think horse is the most delicious meat. I just don't eat it anymore. (I'd also personally kill some cows, even in relatively painful ways, in order to save a few people I don't know.)
↑ comment by Lumifer · 2014-11-20T16:08:11.748Z · LW(p) · GW(p)
versus methods that instantly terminated the cow-brain
This triggered a question to bubble up in my brain.
How much time of pure wireheading bliss do you need to give to a cow brain in order not to feel guilty about eating steak?
↑ comment by Azathoth123 · 2014-11-21T05:50:21.740Z · LW(p) · GW(p)
Given my attitude towards wire-heading generally, that would probably make me feel more guilty.
↑ comment by dthunt · 2014-11-20T16:45:19.752Z · LW(p) · GW(p)
I REALLY like this question, because I don't know how to approach it, and that's where learning happens.
So it's definitely less bad to grow cows with good life experiences than with bad life experiences, even if their ultimate destiny is being killed for food. It's kind of like asking if you'd prefer a punch in the face and a sandwich, or just a sandwich. Really easy decisions.
I think it'd be pretty suspicious if my moral calculus worked out in such a way that there was no version of a maximally hedonistic existence for a cow about which I could say that the cow had a damned awesome life, rather than that we should feel like monsters for allowing it to have existed at all.
That having been said, if you give me a choice between cows that have been re-engineered such that their meat is delicious even after they die of natural causes, and humans don't artificially shorten their lives, and they stand around having cowgasms all day - and a world where cows grow without brains - and a world where you grew steaks on bushes -
I think I'll pick the bush-world, or the brainless cow world, over the cowgasm one, but I'd almost certainly eat cow meat in all of them. My preference there doesn't have to do with cow-suffering. I suspect it has something to do with my incomplete evolution from one moral philosophy to another.
I'm kind of curious how others approach that question.
comment by Shmi (shminux) · 2014-11-13T20:29:01.339Z · LW(p) · GW(p)
I think RobbBB does not understand a typical omnivore's point of view (mine!). He also draws irrational conclusions about the ways to reduce the amount of suffering of (potentially somewhat sentient) animals.
Yes, cattle suffer, and so do chickens, to a lesser degree. They likely do not suffer in the same way people do. Certainly eggs are not likely to suffer at all. Actually, even different people suffer differently; the blanket moral prohibition against cannibalism is just an obvious Schelling point.
So it would be preferable to not create, raise, slaughter and eat animals if there was an alternative source of meat with the same nutritional and taste properties omnivores are used to. Maybe some day. Until then we should strive to minimize needless suffering, at a marginal cost to the consumers.
So, if you are an effective altruist who includes cows and chickens in the potential list of the entities who should be protected from suffering, what do you do? Write blogs aimed at an extremely limited audience who do not appear to be overly receptive, anyway? That's not very "effective", is it? How about working to develop and make feasible new alternatives to "torturing animals"? For example:
support/participate in the research to produce vat-grown meat
expose existing cattle/chicken abuse in farms and slaughterhouses
support/participate in the research to develop a species of farm animals who are physically unable to suffer.
Certainly if a headless chicken can survive for a while, it should be feasible to breed/genetically modify them to not have the brain structures responsible for suffering. Or maybe it's as easy as injecting eggs with some substance which stifles the formation of pain centers.
As SSC notes,
Society is really hard to change. [...] biology is gratifyingly easy to change.
Yet I know of no effective animal altruists who spend the majority of their efforts figuring out and working on the task which is likely to provide the greatest payoff. Pity.
↑ comment by deskglass · 2014-11-15T01:33:02.752Z · LW(p) · GW(p)
Certainly eggs are not likely to suffer at all.
It's typically the chickens laying the eggs that people are concerned about, and maybe, to a lesser extent, the male chickens of the breeds used for egg production. (Maybe you're already clear on that, but I have spoken to people who were confused by veganism's prohibition on eating animal products in addition to animals.)
They likely do not suffer in the same way people do.
It doesn't seem safe to assume that their suffering is subjectively less bad than our suffering. Maybe it's worse - maybe the experience of pain and fear is worse when you can only feel it and can't think about it. Either way, I don't see why you'd err on the side of 'It's an uncertain thing, so let's keep doing what we're doing and diminish the potential harms when we can' rather than 'It's not that unlikely that we're torturing these things; we should stop in all ways that don't cost us much.'
But yes, creating vat-grown meat and/or pain-free animals should be a priority.
↑ comment by dthunt · 2014-11-20T15:13:13.818Z · LW(p) · GW(p)
So, there's a heuristic that I think is a decent one, which is that less-conscious things have less potential suffering. I feel that if you had a suffer-o-meter and strapped it to the heads of paramecia, ants, centipedes, birds, mice, and people, they'd probably rank in approximately that order. I have some uncertainty in there, and I could be swayed to a different belief with evidence or an angle I had failed to consider, but I have a hard time imagining what those might be.
I think I buy into the notion that most-conscious doesn't strictly mean most-suffering, though - if there were a slightly less conscious, but much more anxious branch of humanoids out there, I think they'd almost certainly be capable of more suffering than humans.
↑ comment by Raemon · 2014-11-13T21:54:05.305Z · LW(p) · GW(p)
LW folk generally are proponents of Vat-Meat.
On one hand, I agree with you that it's probably not that effective to specifically court the LW demographic. That said, EA animal-rights people are usually in favor of vat-grown meat, and there are companies working on it. To my knowledge they are not seeking donations (although Modern Meadow is hiring, if you happen to have relevant skills).
"expose existing cattle/chicken abuse in farms and slaughterhouses" is a mainstay vegan tactic. Robbie's article was prompted by Brienne's article which was specifically arguing against videos that did that (especially if they use additional emotional manipulation tactics)
↑ comment by geeky · 2014-11-14T16:24:50.024Z · LW(p) · GW(p)
Just as a data point: the emotional manipulation tactics (i.e., graphic videos) were effective against me. (Mostly because I was unfamiliar with the process before - I didn't know what happened.) They tend to be effective on people especially sensitive to graphic images, I think, but I realize that in general it's not a tremendously effective approach across the population spectrum. If it were, everyone (or at least everyone who has watched those videos) would probably be vegetarian at this point. This is not the case.
↑ comment by Lumifer · 2014-11-14T16:40:46.506Z · LW(p) · GW(p)
Just as a data point: the emotional manipulation tactics (i.e., graphic videos) were effective against me.
As another data point, emotional manipulation tactics are HIGHLY counterproductive against me. I dislike being emotionally manipulated, and when I see attempts to do so my attitude towards the cause worsens considerably.
↑ comment by IlyaShpitser · 2014-11-17T12:12:45.141Z · LW(p) · GW(p)
Can you name three cases where you changed your mind on something important as a result of someone convincing you, by any means?
↑ comment by Jiro · 2014-11-14T16:33:59.906Z · LW(p) · GW(p)
the emotional manipulation tactics (i.e., graphic videos) were effective against me
Are videos intended to produce a visceral reaction against gay sex or abortion also effective against you?
↑ comment by geeky · 2014-11-14T17:02:28.556Z · LW(p) · GW(p)
Those are very different contexts (but the answer is no, they are not effective against me). I don't make decisions based on purely visceral reactions, nor do I advise it. I think there may have been some miscommunication... I was saying that those tactics don't generally work and that I do not recommend them, even if I happened to be an exception.
↑ comment by Shmi (shminux) · 2014-11-13T23:19:58.855Z · LW(p) · GW(p)
"generally proponents" doesn't sound nearly like "putting lots of efforts into". As I said, an effective animal altruist would dedicate some serious time to figuring out better ways to reduce animal suffering. Being boxed in the propaganda-only mode certainly doesn't seem like an effective approach. If you are serious about the issue, go into an Eliezer mode and try to do the impossible. Especially since it's a lot less impossible than what he aspires to achieve.
↑ comment by Sysice · 2014-11-14T11:00:25.078Z · LW(p) · GW(p)
You seem to be saying that people can't talk, think about, or discuss topics unless they're currently devoting their life towards that topic with maximum effectiveness. That seems... incredibly silly.
Your statements seem especially odd considering that there are people currently doing all of the things you mentioned (which is why you knew to mention them).
↑ comment by Shmi (shminux) · 2014-11-14T16:05:18.239Z · LW(p) · GW(p)
Oh sure, talk is fine and dandy, just don't pretend to be effective or rational in any way.
↑ comment by Kaj_Sotala · 2014-11-15T05:32:23.853Z · LW(p) · GW(p)
- expose existing cattle/chicken abuse in farms and slaughterhouses [...]
Yet I know of no effective animal altruists who spend the majority of their efforts figuring out and working on the task which is likely to provide the greatest payoff.
Note that one of Animal Charity Evaluators' two top charities is Mercy for Animals, which has a track record of exposing abuse.
↑ comment by Shmi (shminux) · 2014-11-15T07:09:13.465Z · LW(p) · GW(p)
Yeah, I agree, abuse exposure is actually happening, which is good. At least it reduces the unnecessary torture, if not the amount of slaughter for food.
↑ comment by solipsist · 2014-11-14T01:00:39.286Z · LW(p) · GW(p)
So, if you are an effective altruist who includes cows and chickens in the potential list of the entities who should be protected from suffering, what do you do? Write blogs aimed at an extremely limited audience who do not appear to be overly receptive, anyway? That's not very "effective", is it?
I don't think this gives due respect to the premise. Imagine yourself in a world where attitudes towards meat eating were similar to ours, but the principal form of livestock were human. You'd like to reduce the number of people being raised as meat. Would arguing your ethical position on a site called LessWrong be worth your time, even if most people there weren't very receptive?
↑ comment by Shmi (shminux) · 2014-11-14T04:08:54.396Z · LW(p) · GW(p)
No, what would be worth my time is to figure out how to make less sentient animals taste like humans. Maybe popularize pork, or something.
↑ comment by Richard_Kennaway · 2014-11-22T16:07:39.139Z · LW(p) · GW(p)
So it would be preferable to not create, raise, slaughter and eat animals if there was an alternative source of meat with the same nutritional and taste properties omnivores are used to.
There is textured vegetable protein. Ok, it's not molecule-equivalent to meat, but it's supposed to imitate the physical sensation of eating meat. It was invented fifty years ago. For anyone who wants to eat meat without eating meat, there's an answer. So is there any reason to chase after vat-meat?
How close the imitation is, I don't know. I'm not sure I've ever eaten TVP. But it has to be easier and cheaper to improve on the current product than to develop a way of growing bulk tissue in industrial quantities.
comment by geeky · 2014-11-14T16:08:29.776Z · LW(p) · GW(p)
My reason for vegetarianism is, at its core, a very simple one. I'm horrified by violence, almost by default. And I tend to be extremely empathetic. I'm emotionally motivated to treat animals with kindness before I am intellectually motivated. The discrepancy on LW might come down to personality differences. Or sometimes you can get very bogged down in the intellectual minutiae trying to sort everything out, and end up reaching a plateau, or inaction (i.e., the default).
comment by Lumifer · 2014-11-14T16:21:26.549Z · LW(p) · GW(p)
First, I am not a big fan of having the top-level posts consist of nothing but a link.
Second, the article takes "the intellectual case against meat-eating is pretty air-tight" as its premise. That premise is not even wrong, as it confuses values and logic (aka rationality).
Full disclosure: I am a carnivore.
↑ comment by Rob Bensinger (RobbBB) · 2014-11-15T21:37:45.692Z · LW(p) · GW(p)
I'm assuming that the LessWrongers interested in 'should I be a vegan?' are at least somewhat inclined toward effective altruism, utilitarianism, compassion, or what-have-you. I'm not claiming a purely selfish agent should be a vegan. I'm also not saying that the case is purely intellectual (in the sense of having nothing to do with our preferences or emotions); I'm just saying that the intellectual component is correctly reasoned. You can evaluate it as a hypothetical imperative without asking whether the antecedent holds.
↑ comment by Lumifer · 2014-11-16T20:07:11.292Z · LW(p) · GW(p)
the LessWrongers interested in 'should I be a vegan?'
I am sorry, where is this coming from?
I'm just saying that the intellectual component is correctly reasoned
At this level of argument there isn't much intellectual component to speak of. If your value system already says "hurting creatures X is bad", the jump to "don't eat creatures X" doesn't require great intellectual acumen. It's just a direct, first-order consequence.
↑ comment by Rob Bensinger (RobbBB) · 2014-11-16T22:46:25.783Z · LW(p) · GW(p)
I didn't say it requires great intellectual acumen. In the blog post we're talking about, I called the argument "air-tight", "very simple", and "almost too clear-cut". I wouldn't have felt the need to explicitly state it at all, were it not for the fact that Eliezer and several other LessWrong people have been having arguments about whether veganism is rational (for a person worried about suffering), and about how confident we can be that non-humans are capable of suffering. Some people were getting the false impression that this state of uncertainty about animal cognition was sufficient to justify meat-eating. I'm spelling out the argument only to make it clear that the central points of divergence are normative and/or motivational, not factual.
↑ comment by RowanE · 2014-11-15T19:33:53.261Z · LW(p) · GW(p)
That bit reads to me as just a heading of one section of the article - a paragraph later it lays out the argument which is described as being "pretty air-tight". Which argument does assume one has a particular kind of ethical system, but that's not really the same thing as making the confusion you describe, especially when it's an ethical system shared and trumpeted by many in the community.
↑ comment by Lumifer · 2014-11-16T20:10:29.951Z · LW(p) · GW(p)
Which argument does assume one has a particular kind of ethical system, but that's not really the same thing as making the confusion you describe
Under this logic I can easily say "the intellectual case for killing infidels is pretty air-tight" or "the intellectual case for torturing suspects is pretty air-tight" because hey, we abstracted the values away!
↑ comment by RowanE · 2014-11-17T11:30:31.410Z · LW(p) · GW(p)
Well, yeah, if you have an essay about infidel-killing, having the subheading for the part where you lay out the case for doing so describe said case as "pretty air-tight" isn't exactly a heinous offence.
And you're kind of skipping over considerations of what values Less Wrong tends to have. There's a lot of effective altruism material, members of the community are disproportionately consequentialist, are you expecting little asides throughout the article saying "of course, this doesn't apply to the 10% of you who are egoists"?
↑ comment by Lumifer · 2014-11-17T16:11:26.493Z · LW(p) · GW(p)
describe said case as "pretty air-tight" isn't exactly a heinous offence
The question isn't about the offence; the question is whether you would agree with this thesis in the context of an essay about Islamic jihad.
There's a lot of effective altruism material, members of the community are disproportionately consequentialist
Neither of these leads to vegetarianism. Consequentialism has nothing to do with it, and EA means being rational (=effective) about helping others, but it certainly doesn't tell you how wide the circle of those you should help must be.
↑ comment by RowanE · 2014-11-18T20:40:33.265Z · LW(p) · GW(p)
I accept that neither of the things I listed logically lead to accepting the value claim made in the argument (other than that the effective altruism movement generally assumes one's circle is at least as wide as "all humans", considering the emphasis on charities working a continent away), but I still feel quite confident that LessWrongers are likely, and more likely than the general population, to accept said value claim - unless you want to argue about expected values, the assumption made seems to be "the width of the reader's circle extends to all (meaningfully) sentient beings", which is probably a lot more likely in a community like ours that reads a lot of sci-fi.
↑ comment by Lumifer · 2014-11-18T20:51:09.825Z · LW(p) · GW(p)
I still feel quite confident that LessWrongers are likely, and more likely than the general population, to accept said value claim
Oh, sure, the surveys will tell you so directly.
But "more likely than the general population" is pretty far from "doesn't apply to the 10% of you who are egoists".
comment by Natha · 2014-11-14T04:02:37.962Z · LW(p) · GW(p)
Aside from painting "LessWrong types" in really broad, unflattering strokes, I thought the author made several good points. Note though that I am a ~15-year vegetarian (and sometime vegan) myself and I definitely identify with his argument, so there's the opportunity for subjective validation to creep in. I also find many preference-utilitarian viewpoints persuasive, though I wouldn't yet identify as one.
I think the 20% thing and the 1-in-20 thing were just hypothetical, so we shouldn't get too hung up on them; I think his case is just as strong without any numbers. There is some uncertainty about the continuum of animal cognition and how it relates to their capacity to suffer.
My own personal voice-inside-my-head reasons for vegetarianism can be summarized as follows: "I am an animal, but a unique kind of animal who can understand what it means to feel pain and to die and who doesn't want that to happen to himself or to any other animals. My unique kind of animal can also live a happy, healthy life at very little personal expense without causing other animals to feel pain or to die." Thus, Rob's first 4 premises (particularly 2 and 3) resonated with me.
I don't believe other animals, even other mammals, have anything like human consciousness. Nor do I believe they should be accorded human rights. But I know that at the end of the day, biologically I am a mammal; if you're warm-blooded and you've got hair and a neocortex, then I'm really going to avoid hurting/killing you. If you have a spine and a pulse, I'm giving you the benefit of the doubt.
↑ comment by Jiro · 2014-11-14T16:28:42.663Z · LW(p) · GW(p)
There is some uncertainty about the continuum of animal cognition and how it relates to their capacity to suffer.
Having a small uncertainty about animal suffering and then saying that because of the large number of animals we eat, even a small uncertainty is enough to make eating animals bad, is a variation on Pascal's Mugging.
↑ comment by Rob Bensinger (RobbBB) · 2014-11-15T21:45:58.264Z · LW(p) · GW(p)
Yeah, this is why I used the number '1-in-20'. It's somewhat arbitrary, but it serves the function of ruling out Pascal-level uncertainty.
↑ comment by DanielFilan · 2014-11-14T22:01:33.933Z · LW(p) · GW(p)
I can understand why you shouldn't incentivise someone to possibly torture lots of people by being the sort of person who gives in to Pascal's mugging (in the original formulation). That being said, here you seem to be using Pascal's mugging to refer to doing anything with high expected utility but low probability of success. Why is that irrational?
↑ comment by Jiro · 2014-11-15T02:28:29.039Z · LW(p) · GW(p)
Actually, I'm using it to refer to something which has high expected utility, low probability of success, and a third criterion: you are uncertain about what the probability really is. A sweepstakes with 100 tickets has a 1% chance of winning. A sweepstakes which has 2 tickets but where you think there's a 98% chance that the person running the sweepstakes is a fraudster also has a 1% chance of winning, but that seems fundamentally different from the first case.
↑ comment by DanielFilan · 2014-11-15T10:56:22.205Z · LW(p) · GW(p)
you are uncertain about what the probability really is
I think this is a misunderstanding of the idea of probability. The real world is either one way or another, either we will actually win the sweepstakes or we won't. Probability comes into the picture in our heads, telling us how likely we think a certain outcome is, and how much we weight it when making decisions. As such, I don't think it makes sense to talk about having uncertainty about what a probability really is, except for the case of a lack of introspection.
Also, going back to Robby's post:
We don’t know enough about how cattle cognize, and about what kinds of cognition make things moral patients, to assign a less-than-1-in-20 subjective probability to ‘factory-farmed cattle undergo large quantities of something-morally-equivalent-to-suffering’.
This seems like an important difference from what you're talking about. In this case, the probabilities are bounded below by a not-ridiculously-small number, which (Robby claims) is high enough that we should not eat meat. If you grant that your probability does in fact obey such a bound, and that that bound suffices for the case for veg*nism, then I think the result follows, whether or not you call it a Pascal's mugging.
↑ comment by Jiro · 2014-11-15T19:26:56.088Z · LW(p) · GW(p)
If you don't like the phrase "uncertainty about the probability", think of it as a probability that is made up of particular kinds of multiple components.
The second sweepstakes example has two components, uncertainty about which entry will be picked and uncertainty about whether the manager is honest. The first one only has uncertainty about which entry will be picked. You could split up the first example mathematically (uncertainty about whether your ticket falls in the last two entries and uncertainty about which of the last two entries your ticket is) but the two parts you get are conceptually much closer than in the second example.
In this case, the probabilities are bounded below by a not-ridiculously-small number, which (Robby claims) is high enough that we should not eat meat.
Like the possibility that the sweepstakes manager is dishonest, "we don't know enough about how cattle cognize" is all or nothing; if you do multiple trials, the distribution is a lot more lumpy. If all cows had exactly 20% of the capacity of humans, then five cows would have 100% in total. If there's a 20% chance that cows have as much as humans and an 80% chance that they have nothing at all, that's still a 20% chance, but five cows would have a lumpy distribution--instead of five cows having a guaranteed 100%, there would be a 20% chance of having 500% and an 80% chance of nothing.
In some sense, each case has a probability bounded by 20% for a single cow. But in the first case, there's no chance of 0%, and in the second case, not only is there a chance of 0%, but the chance of 0% doesn't decrease as you add more cows. The implications of "the probability is bounded by 20%" that you probably want to draw do not follow in the latter case.
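A small sketch of the two distributions being contrasted here; the 20%/80% numbers are the illustrative ones already used in this thread:

```python
# Two models with the same per-cow expectation (20% of a human) but different shapes.
n_cows = 5

# Model A: every cow certainly has 20% of human capacity.
model_a = {n_cows * 0.2: 1.0}             # total harm 100% of one human, with probability 1

# Model B: 20% chance all cows have full human capacity, 80% chance none do.
# The uncertain fact is a single all-or-nothing one, so it is shared by every cow
# and does NOT average out as more cows are added.
model_b = {n_cows * 1.0: 0.2, 0.0: 0.8}   # 500% with p = 0.2, 0% with p = 0.8

def expectation(dist):
    return sum(outcome * p for outcome, p in dist.items())

print(expectation(model_a), expectation(model_b))  # both 1.0, i.e. 100% of one human
```

Same expectation, but in model B the 80% chance of zero harm never shrinks no matter how many cows are added, which is the point about the last step of the argument above.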
↑ comment by DanielFilan · 2014-11-15T21:28:18.704Z · LW(p) · GW(p)
the two parts you get are conceptually much closer than in the second example.
I still don't see why this matters? To put things concretely, if I would be willing to buy the ticket in the first sweepstakes, why wouldn't I be willing to do so in the second? Sure, the uncertainty comes from different sources, but what does this matter for me and how much money I make?
The implications of "the probability is bounded by 20%" that you probably want to draw do not follow in the latter case.
If I understand you correctly, you seem to be drawing a slightly different distinction here than I thought you were, claiming that the distinction is between 100% probability of a cow consciousness that is 20% as intense as human consciousness, as opposed to a 20% probability of a cow consciousness that is 100% as intense as human consciousness (for some definition of intensity). Am I understanding you correctly?
In any case, I still think that the implications that I want to draw do in fact follow. In the latter case, I would think that eating meat has a 20% chance of producing a really horrible effect, and an 80% chance of being mildly convenient for you, so you definitely shouldn't eat meat. Is there something that I am missing?
ETA: Again, to put things more concretely, consider theory X: that whenever 50 loaves of bread are bought, someone creates a human, keeps them in horrible conditions, and then kills them. Your probability for theory X being true is 20%. If you remove bread from your diet, you will have to learn a whole bunch of new recipes, and your diet might be slightly low in carbohydrates. Do you think that it is OK to continue eating bread? If not, your disagreement with the case for veg*nism is a different assessment of the facts, rather than a condemnation of the sort of probabilistic reasoning that is used.
↑ comment by Jiro · 2014-11-16T02:48:23.776Z · LW(p) · GW(p)
I imagine the line of reasoning you want me to use to be something like this:
"Well, the probability of cow sentience is bounded by 20%, so you shouldn't eat cows."
"How do you get to that conclusion? After all, it's not certain. In fact, it's less certain than not. The most probable result, at 80%, is that no damage is done to cows whatsoever."
"Well, you should calculate the expectation. 20% large effect + 80% no effect is still enough of a bad effect to care about."
"But I'm never going to get that expectation. I'm either going to get the full effect or nothing at all."
"If you eat meat many times, the damage done will add up. Although you could be lucky if you only do it once and cause no damage, if you do it many times you're almost certain to cause damage. And the average amount of damage done will be equal to that expectation multiplied by the number of trials."
If there's a component of uncertainty over the probability, that last step doesn't really work, since many trials are still all or nothing when combined.
↑ comment by DanielFilan · 2014-11-16T03:40:02.350Z · LW(p) · GW(p)
I wouldn't say the last step that you attribute to me. Firstly, if I were going to talk about the long run, I would say that in the long run, you should maximise expected utility because you'll probably get a lot of utility that way. That being said, I don't want to talk about the long run at all, because we don't make decisions for the long run. For instance, you could decide to have a bacon omelette for dinner today and then stay veg*n for the rest of your life, and the argument that you attribute to me wouldn't work in that case, although I would urge you to not eat the bacon omelette. (In addition, the line of reasoning that I would actually want you to use would involve attributing >50% probability of cow, chicken, pig, sheep, and fish sentience, but that's beside the point).
Rather, I would make a case like this: when you make a choice under uncertainty, you have a whole bunch of possible outcomes that could happen after the choice is made. Some of these outcomes will be better when you choose one option, and some will be better when you choose another. So, we have to weigh up which outcomes we care about to decide which choice is better. I claim that you should weigh each outcome in proportion to your probability of it occurring, and the difference in utility that the choice makes. Therefore, even if you only assign the "cows are sentient" or "theory X is true" outcomes a probability of 20%, the bad outcomes are so bad that we shouldn't risk them. The fact that you assign probability >50% to no damage happening isn't a sufficient condition to establish "taking the risk is OK".
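A minimal expected-utility sketch of the weighing rule described above; every number here is an illustrative assumption, not a claim about the actual magnitudes:

```python
# Sketch of the weighing rule: score each choice by summing, over outcomes,
# probability * utility difference. All numbers are illustrative assumptions.
p_sentient = 0.2            # assumed probability the animals in question can suffer
harm_if_sentient = -100.0   # assumed disutility of eating meat in that case
gain_if_not = 1.0           # assumed convenience gain from eating meat otherwise

ev_eat_meat = p_sentient * harm_if_sentient + (1 - p_sentient) * gain_if_not
ev_abstain = 0.0            # baseline: no harm, no convenience gain

print(ev_eat_meat, ev_abstain)  # about -19.2 vs 0.0: the 20% bad outcome dominates
```

Under these assumed numbers the >50% chance of "no damage" does not rescue the choice, because the bad outcome is weighted by how bad it is as well as by its probability.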
↑ comment by Jiro · 2014-11-16T09:19:12.522Z · LW(p) · GW(p)
That being said, I don't want to talk about the long run at all, because we don't make decisions for the long run. For instance, you could decide to have a bacon omelette for dinner today and then stay veg*n for the rest of your life, and the argument that you attribute to me wouldn't work in that case, although I would urge you to not eat the bacon omelette.
The point is that given the way these probabilities add up, not only wouldn't that work for a single bacon omelette, it wouldn't work for a lifetime of bacon omelettes. They're either all harmful or all non-harmful.
Therefore, even if you only assign the "cows are sentient" or "theory X is true" outcomes a probability of 20%, the bad outcomes are so bad that we shouldn't risk them.
Your reasoning doesn't depend on the exact number 20. It just says that the utility of the outcome should be multiplied by its probability. If the probability was 1% or 0.01% you could say exactly the same thing and it would be just as valid. In other words, your reasoning proves too much; it would imply accepting Pascal's Mugging. And I don't accept Pascal's Mugging.
↑ comment by DanielFilan · 2014-11-16T11:01:43.698Z · LW(p) · GW(p)
The point is that given the way these probabilities add up, not only wouldn't that work for a single bacon omelette, it wouldn't work for a lifetime of bacon omelettes. They're either all harmful or all non-harmful.
I know. Are you implying that we shouldn't maximise expected utility when we're faced with lots of events with dependent probabilities? This seems like an unusual stance.
Your reasoning doesn't depend on the exact number 20... If the probability was 1% or 0.01% you could say exactly the same thing and it would be just as valid.
My reasoning doesn't depend on the exact number 20, but the probability can't be arbitrarily low either. If the probability of cow sentience were only 1/1,000,000,000,000, then the expected utility of being veg*n would be lower than that of eating meat, since you would have to learn new recipes and worry about nutrition, and that would be costly enough to outweigh the very small chance of a very bad outcome.
In other words, your reasoning proves too much; it would imply accepting Pascal's Mugging. And I don't accept Pascal's Mugging.
Again, this depends on what you mean by Pascal's Mugging. If you mean the original version, then my reasoning does not necessarily imply being mugged, since the mugger can name arbitrarily high numbers of people that they might torture, whereas you can figure out exactly how many non-human animals suffer and die as a result of your dietary choices (if you're an average American, approximately 200, only 30 if you don't eat seafood, and only 1.4 if you also don't eat chicken or eggs, according to this document), and nobody can boost this number in response to you claiming that you have a really small probability of them being sentient.
However, if by Pascal's Mugging you mean "maximising expected utility when the probability of success is small but bounded from below and you have different sources of uncertainty", then yes, you should accept Pascal's Mugging, and I have never seen a convincing argument that you shouldn't. Also, please don't call that Pascal's Mugging, since it is importantly different from its namesake.
↑ comment by Jiro · 2014-11-16T17:45:32.696Z · LW(p) · GW(p)
Are you implying that we shouldn't maximise expected utility when we're faced with lots of events with dependent probabilities? This seems like an unusual stance.
I would limit this to cases where the dependency involves trusting an agent's judgment (or honesty). I am not very good at figuring such a thing out and in cases like this whether I trust the agent has a large impact on the final decision.
the mugger can name arbitrarily high numbers of people that they might torture, whereas you can figure out exactly how many non-human animals suffer and die as a result of your dietary choices
You can name an arbitrary figure for what the likelihood is that animals suffer, said arbitrary figure being tailored to be small yet large enough that multiplying it by the number of animals I eat leads to the conclusion that eating them is bad.
It's true that in this case you are arbitrarily picking the small figure rather than the large figure as in a typical Pascal's Mugging, but it still amounts to picking the right figure to get the right answer.
↑ comment by DanielFilan · 2014-11-17T01:27:51.738Z · LW(p) · GW(p)
I would limit this to cases where the dependency involves trusting an agent's judgment (or honesty). I am not very good at figuring such a thing out and in cases like this whether I trust the agent has a large impact on the final decision.
But in this case, advocates for veganism are not being agents in the sense of implementing good/bad outcomes if you choose correctly/incorrectly, or personally gaining from you making one choice or another. Rather, we are just stating an argument and letting you judge how persuasive you think that argument is.
You can name an arbitrary figure for what the likelihood is that animals suffer, said arbitrary figure being tailored to be small yet large enough that multiplying it by the number of animals I eat leads to the conclusion that eating them is bad.
The probability that non-human animals suffer can't be arbitrarily large (since it's trivially bounded by 1), and for the purposes of the pro-veganism argument it can't be arbitrarily small, as explained in my previous comment, making this argument decidedly non-Pascalian. Furthermore, I'm not picking your probability that non-human animals suffer, I'm just claiming that for any reasonable probability assignment, veganism comes out as the right thing to do. If I'm right about this, then I think that the conclusion follows, whether or not you want to call it Pascalian.
↑ comment by Jiro · 2014-11-17T02:39:51.453Z · LW(p) · GW(p)
But in this case, advocates for veganism are not being agents in the sense of implementing good/bad outcomes if you choose correctly/incorrectly, or personally gaining from you making one choice or another.
Human bias serves the role of personal gain in this case. (Also, the nature of vegetarianism makes it especially prone to such bias.)
The probability that non-human animals suffer can't be arbitrarily large (since it's trivially bounded by 1),
It can be arbitrarily chosen in such a way as to always force the conclusion that eating animals is wrong. Being arbitrary enough for this purpose does not require being able to choose values greater than 1.
↑ comment by DanielFilan · 2014-11-17T04:54:30.954Z · LW(p) · GW(p)
It can be arbitrarily chosen in such a way as to always force the conclusion that eating animals is wrong. Being arbitrary enough for this purpose does not require being able to choose values greater than 1.
You are talking as if I am setting your probability that non-human animals suffer. I am not doing that: all that I am saying is that for any reasonable probability assignment, you get the conclusion that you shouldn't eat non-human animals or their secretions. If this is true, then eating non-human animals or their secretions is wrong.
↑ comment by Jiro · 2014-11-17T09:50:15.879Z · LW(p) · GW(p)
You are talking as if I am setting your probability that non-human animals suffer.
You are arbitrarily selecting a number for the probability that animals suffer. This number can be chosen by you such that when multiplied by the number of animals people eat, it always results in the conclusion that the expected damage is enough that people should not eat animals.
This is similar to Pascal's Mugging, except that you are choosing the smaller number instead of the larger number.
for any reasonable probability assignment, you get the conclusion that you shouldn't eat non-human animals
This is not true. For instance, a probability assignment of 1/100000000 to the probability that animals suffer like humans would not lead to that conclusion. However, 1/100000000 falls outside the range that most people think of when they think of a small but finite probability, so it sounds unreasonable even though it is not.
comment by Salemicus · 2014-11-17T16:30:41.383Z · LW(p) · GW(p)
I wonder how RobbBB, and other vegans, feel about lions on the Serengeti. When they kill gazelles, is that morally wrong? Obviously, they aren't going to be dissuaded by your blog posts, but in a utilitarian framework, I would think that suffering caused by lions' carnivorous tastes is just as "bad" as that caused by humans. Should we put all carnivores in zoos and feed them meat substitutes? Or should lions be free to hunt, regardless of the suffering it may cause the gazelle, because that's their nature?
↑ comment by jefftk (jkaufman) · 2014-11-17T19:45:29.130Z · LW(p) · GW(p)
People who approach veganism from utilitarian ideas would group this question in with a bunch of others under wild animal suffering. The general idea is that suffering is just as bad whether human caused or natural, though it's often hard to figure out what actions most reduce suffering (for example, if we killed all the predators there would be lots more prey animals, but if they tend to have lives that are on average worse than not living at all then this would be a bad thing.)
↑ comment by Lumifer · 2014-11-17T19:50:20.611Z · LW(p) · GW(p)
(for example, if we killed all the predators there would be lots more prey animals, but if they tend to have lives that are on average worse than not living at all then this would be a bad thing.)
Wouldn't that logic lead you to killing all predators or all prey depending on the answer to the question of whether the prey has lives not worth living? If "no", kill prey, if "yes", kill predators. In any case you're committed to a lot of killing.
comment by maxikov · 2014-11-14T00:29:07.422Z · LW(p) · GW(p)
This article heavily implies that every LessWronger is a preference utilitarian and values the wellbeing, happiness, and non-suffering of every sentient (i.e. non-p-zombie) being. Neither of those is fully true for me, and as this ad-hoc survey - https://www.facebook.com/yudkowsky/posts/10152860272949228 - seems to suggest, I may not be alone in that. Namely, I'm actually pretty much OK with animal suffering. I generally don't empathize all that much, but there are a lot of even completely selfish reasons to be nice to humans, whereas that's not really the case for animals. As for non-human intelligent beings - I'll figure that out once I meet them, or once the probability of such an encounter gets somewhat realistic; currently there's too much ambiguity about them.
↑ comment by Rob Bensinger (RobbBB) · 2014-12-02T07:17:46.216Z · LW(p) · GW(p)
I was mainly talking about LessWrongers who care about others (for not-purely-selfish reasons). This is a much milder demand than preference utilitarianism. I'm surprised to hear you don't care about others' well-being -- not even on a system 2 level, setting aside whether you feel swept up in a passionate urge to prevent suffering.
Let me see if I can better understand your position by asking a few questions. Assuming no selfish benefits accrued to you, would you sacrifice a small amount of your own happiness to prevent the torture of an atom-by-atom replica of you?
↑ comment by maxikov · 2014-12-02T21:15:17.982Z · LW(p) · GW(p)
We may be using different definitions of "care". Mine is exactly how much I'm motivated to change something after I become aware that it exists. I don't find myself extremely motivated to eliminate the suffering of humans, and much less so for animals. Therefore, I conclude that my priorities are probably different. Also, at least to some extent I'm either hardwired or conditioned to empathize with and help humans in my immediate proximity (although definitely to a smaller extent than people who claim to have sleepless nights after observing footage of suffering), but it doesn't generalize well to the rest of humans and other animals.
As for saving the replica, I probably would, since it definitely belongs to the circle of entities I'm likely to empathize with. However, the exact details really depend on whether I classify my replica as myself or as my copy, which I don't have a good answer to. Fortunately, I'm not likely to encounter this dilemma in the foreseeable future, and probably by the time it's likely to occur, I'll have more information to answer this question better. Furthermore, especially in this situation, and in much more realistic situations of being nice to people around me, there are almost always selfish benefits, especially in the long run. However, in situations where every person around me is basically a bully, who perceives niceness as weakness and an invitation to bully more, I frankly don't feel all that much compassion.
↑ comment by Rob Bensinger (RobbBB) · 2014-12-03T02:41:09.921Z · LW(p) · GW(p)
Yes, I'm using 'care about X' to mean some combination of 'actually motivated to promote X's welfare' and 'actually motivated to self-modify, if possible, to promote X's welfare'. If I could, I'd take a pill that makes me care enough about non-humans to avoid eating them; so in that sense I care about non-humans, even if my revealed preferences don't match my meta-preferences.
Meta-preferences are important because I frequently have conflicting preferences, or preferences I need to cultivate over time if they're to move me, or preferences that serve me well in the short term but poorly in the long term. If I just do whatever I 'care about' in the moment at the object level, unreflectively, without exerting effort to shape my values deliberately, I end up miserable and filled with regret.
In contrast, I meta-want my deepest wants to be fairly simple, consistent, and justifiable to other humans. Even if I'm not feeling especially sympathy-laden on a particular day, normative elegance and consistency suggests I should care about the suffering of an exact replica of myself just as much as I care about the suffering inside my own skull. This idea generalizes to endorse prudence for agents that are less similar to me but causally result from me (my future selves) and to endorse concern for agents that will never be me but can have states that resemble mine, including my suffering. I have more epistemic warrant for thinking humans instantiate such states than for thinking non-humans do, but I'm pretty sure that a more informed, in-control-of-his-values version of myself would not consider it similarly essential that moral patients have ten fingers, 23 chromosome pairs, etc. (Certainly I don't endorse decision procedures that would disregard my welfare if I had a different chromosome or finger count, whereas I do endorse procedures that disregard me should I become permanently incapable of experiencing anything.)
If I wish I were a nicer and more empathic person, I should just act like a nicer and more empathic person, to the extent I'm able.
↑ comment by maxikov · 2014-12-03T04:57:18.435Z · LW(p) · GW(p)
I would distinguish several levels of meta-preferences.
On level 1, an agent has a set of object-level preferences, and wants to achieve the maximum cumulative satisfaction of them over its lifetime. To do that, the agent may sometimes want to override the incentive to maximize satisfaction at each step if it is harmful in the long run. Basically, it's just switching from a greedy gradient descent to something smarter, and barely requires any manipulation of object-level preferences.
On level 2, the agent may want to change their set of object-level preferences in order to achieve higher satisfaction, given the realistic limits of what's possible. A stupid example: someone who wants one billion dollars but cannot have it may want to start wanting ten dollars instead, and be much happier. A more realistic example: a person who has become disabled may want to readjust their preferences to accommodate new limitations. Applying this strategy to its logical end has some failure modes (e.g. the one described in Three Worlds Collide, or, more trivially, opiates), but it still sort of makes sense for a utility-driven agent.
On level 3, the agent may want to add or remove some preferences, regardless of the effect of that on the total level of satisfaction, just for their own sake.
Wanting to care more about animals seems to be a level-3 meta-preference. In a world where this preference is horribly dissatisfied, where animals are killed at a rate of about one kiloholocaust per year, adopting it clearly doesn't optimize for satisfaction. Consistency of values and motivations - yes, but only if you happen to have consistency as a terminal value in the utility function. That doesn't necessarily have to be the case: in most scenarios, consistency is good because it's useful, because it allows us to solve problems. The lack of compassion for animals doesn't seem to be a problem, unless the inconsistency itself is a problem.
Thus, it seems impossible to make such a change without accepting, in a morally realist way, that caring about animals is good or that having consistent values is good. Now, I'm not claiming that I'm a complete moral relativist. I'm not even sure that's possible - so far, all the arguments for moral relativism I've seen are actually realist themselves. However, arguing for switching between different realist-ish moral frameworks seems to be a much harder task.
comment by MrMind · 2014-11-17T09:30:31.486Z · LW(p) · GW(p)
I'm going to comment on the general issue, not on the specific link.
I'm a carnivore, so what I'm going to write is my best approximation at purging my reasoning of cached thoughts and motivated cognition.
I'm not convinced that present-day vegetarianism is not just group signalling.
Of course you wouldn't want aware beings to suffer pointlessly. But from there to vegetarianism there's a long road:
- you should at least try to argue that it's better never to be born than to be born, live a few pleasant years, and be killed;
- that me not eating meat is the best way to stop animal suffering, all else being equal, rather than, say, lobbying for a law that prohibits intensive farming;
- that not eating meat wouldn't hurt humanity in the long run (the fact that vegetarians need creatine supplementation to be intellectually on par with carnivores is especially frightening to me).
All these points are taken for granted in the article, but they are far from being so.