Effective Altruism Through Advertising Vegetarianism?

post by Peter Wildeford (peter_hurford) · 2013-06-12T18:50:31.353Z · LW · GW · Legacy · 553 comments

Contents

  Introduction
  Other Estimations
  Pamphlets Per Dollar
  Conversions Per Pamphlet
    Facebook Study
    Pamphlet Study
  Vegetarian Years Per Conversion
  Animals Saved Per Vegetarian Year
  Days Lived Per Animal
  Accounting For Biases
  Additional People Are Being Reached
  Accounting for Product Elasticity
  Putting It All Together
  Market Saturation and Diminishing Marginal Returns?
  The Conjunction Fallacy?
  Conversion and The 100 Yard Line
  Three Places I Might Donate Before Donating to Vegan Outreach
  Conclusion
  Footnotes

Abstract: If you value the welfare of nonhuman animals from a consequentialist perspective, there is a lot of potential for reducing suffering by funding the persuasion of people to go vegetarian through either online ads or pamphlets.  In this essay, I develop a calculator for people to come up with their own estimates, and I personally arrive at a cost-effectiveness estimate of $0.02 to $65.92 needed to avert a year of suffering in a factory farm.  I then discuss the methodological criticisms that merit skepticism of this estimate and conclude by suggesting (1) a guarded approach of putting in just enough money to help the organizations learn and (2) the need for more studies that explore advertising vegetarianism in a wide variety of media in a wide variety of ways and include decent control groups.

-

Introduction

I start with the claim that it's good for people to eat less meat, whether they become vegetarian -- or, better yet, vegan -- because this means fewer nonhuman animals are being painfully factory farmed.  I've defended this claim previously in my essay "Why Eat Less Meat?".  I recognize that some people, even those who consider themselves effective altruists, do not value the well-being of nonhuman animals.  For them, I hope this essay is interesting, but I admit it will be a lot less relevant.

The second idea is that it shouldn't matter who is eating less meat.  As long as less meat is being eaten, fewer animals will be farmed, and this is a good thing.  Therefore, we should try to get other people to eat less meat as well.

The third idea is that it also doesn't matter who is doing the convincing.  Therefore, instead of convincing our own friends and family, we can pay other people to convince people to eat less meat.  And this is exactly what organizations like Vegan Outreach and The Humane League are doing.  With a certain amount of money, one can hire someone to distribute pamphlets to other people or put advertisements on the internet, and some percentage of people who receive the pamphlets or see the ads will go on to eat less meat.  This idea and the previous one should be uncontroversial for consequentialists.

But the fourth idea is the complication.  I want my philanthropic dollars to go as far as possible, so as to help as much as possible.  Therefore, it becomes very important to figure out how much money it takes to get people to eat less meat, so I can compare this to other estimations and see what gets me the best "bang for my buck".


Other Estimations

I have seen other estimates floating around the internet of the cost of distributing pamphlets, how many conversions each pamphlet produces, and how much less meat is eaten per conversion.  Brian Tomasik calculates $0.02 to $3.65 [PDF] per year of nonhuman animal suffering prevented, later $2.97 per year, and then later $0.55 to $3.65 per year.

Jess Whittlestone provides statistics that reveal an estimate of less than a penny per year[1]. 

Effective Animal Activism, a non-profit evaluator of animal welfare charities, came up with an estimate [Excel Document] of $0.04 to $16.60 per year of suffering averted, which also takes into account a variety of additional variables, like product elasticity.

Jeff Kaufman uses a different line of reasoning: by estimating how many vegetarians there are and guessing how many of them became vegetarian via pamphlets, he estimates it would take $4.29 to $536 to make someone vegetarian for one year.  Extrapolating from that at a rate of 255 animals saved per year and a weighted average of 329.6 days lived per animal (see below for justification of both assumptions) would give $0.02 to $1.90 per year of suffering averted[2].

A third line of reasoning, also from Jeff Kaufman, was to count comments on the pro-vegetarian websites advertised in these campaigns; 2-22% of them mentioned an intended behavior change (eating less meat, going vegetarian, or going vegan), depending on the website.  I don't think we can draw any conclusions from this, but it's interesting.

To make my calculations, I built a calculator.  Unfortunately, I can't embed it here, so you'll have to open it in a new tab as a companion piece.

I'm going to start by using the following formula: Years of Suffering Averted per Dollar = (Pamphlets / dollar) * (Conversions / pamphlet) * (Veg years / conversion) * (Animals saved / veg year) * (Days lived / animal) / (365 days / year).

Now, to get estimations for these variables.


Pamphlets Per Dollar

How much does it cost to place the advertisement, whether it be the paper pamphlet or a Facebook advertisement?  Nick Cooney, head of the Humane League, says the cost-per-click of Facebook ads is 20 cents.

But what about the cost per pamphlet?  This is more of a guess, but I'm going to go with Vegan Outreach's suggested donation of $0.13 per "Compassionate Choices" booklet.

However, it's important to note that this cost must also include opportunity cost -- leafleters forgo the ability to use that time to work a job.  Adding an opportunity cost of, say, $8/hr on top of that (and assuming a pamphlet is handed out each minute of volunteer time) makes the actual cost about $0.27 per pamphlet, meaning roughly 3.7 people are reached per dollar from pamphlets.  For Facebook advertisements, the opportunity cost is trivial.
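Here's a quick sketch of that arithmetic, under the stated assumptions of an $8/hr opportunity cost and one pamphlet handed out per minute (small differences from the figures above are just rounding):

```python
# Cost per pamphlet once the leafleter's opportunity cost is included.
suggested_donation = 0.13        # dollars per "Compassionate Choices" booklet
opportunity_cost_per_hour = 8.0  # assumed wage the volunteer forgoes
pamphlets_per_hour = 60          # assumed: one pamphlet handed out per minute

cost_per_pamphlet = suggested_donation + opportunity_cost_per_hour / pamphlets_per_hour
people_reached_per_dollar = 1 / cost_per_pamphlet

print(f"Cost per pamphlet: ${cost_per_pamphlet:.2f}")
print(f"People reached per dollar: {people_reached_per_dollar:.1f}")
```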


Conversions Per Pamphlet

This is the estimate with the biggest target on its head, so to speak.  How many people actually change their behavior because of a simple pamphlet or Facebook advertisement?  Right now, we have two lines of evidence:

Facebook Study

The Humane League ran a $5,000 Facebook advertisement campaign.  They bought ads that looked like this...

 

...and sent people to websites (like this one or this one) with auto-playing videos showing the horrors of factory farming.

Afterward, another advertisement was run targeting people who "liked" the video page, offering a 1-in-10 chance of winning a free movie ticket for taking a survey.  Everyone who emailed in asking for a free vegetarian starter kit was also emailed a survey.  104 people took the survey; 32 reported being vegetarian[3], and 45 reported, for example, that their chicken consumption decreased "slightly" or "significantly".

7% of visitors liked the page and 1.5% of visitors ordered a starter kit.  Assuming everyone else came away from the video with unchanged consumption, this survey would (very tenuously) suggest that about 2.6% of people who see the video will become vegetarian[4].

(Here are the results of the survey in PDF.)
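Here's a minimal sketch of the arithmetic behind that 2.6% figure (see footnote 4); the 7% and 1.5% shares are the survey values above:

```python
# Reconstructing the ~2.6% conversion estimate from the survey figures above
# (see footnote 4).
survey_respondents = 104
reported_vegetarians = 32

liked_page_share = 0.07     # 7% of video viewers "liked" the page
starter_kit_share = 0.015   # 1.5% ordered a vegetarian starter kit

veg_rate_among_respondents = reported_vegetarians / survey_respondents  # ~30.7%
reachable_share = liked_page_share + starter_kit_share                  # 8.5%

# Assume everyone outside that 8.5% left the video with unchanged consumption.
estimated_conversion_rate = veg_rate_among_respondents * reachable_share
print(f"Estimated conversion rate: {estimated_conversion_rate:.1%}")    # ~2.6%
```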

Pamphlet Study

A second study discussed in "The Powerful Impact of College Leafleting (Part 1)" and "The Powerful Impact of College Leafleting: Additional Findings and Details (Part 2)" looked specifically at pamphlets.

Here, Humane League staff visited two large East Coast state schools and distributed leaflets.  They returned two months later and surveyed people walking by, counting those who remembered receiving a leaflet earlier.  They found that about 2% of those receiving a pamphlet went vegetarian.

Vegetarian Years Per Conversion

But once a pamphlet or Facebook advertisement captures someone, how long will they stay vegetarian?  One survey showed vegetarians refrain from eating meat for an average of 6 years or more.  Another study I found says 93% of vegetarians stay vegetarian for at least three years.

 

Animals Saved Per Vegetarian Year

And once you have a vegetarian, how many animals do they save per year?  CountingAnimals says 406 animals saved per year.

The Humane League suggests 28 chickens, 2 egg industry hens, 1/8 beef cow, 1/2 pig, 1 turkey, and 1/30 dairy cow per year (total = 31.66 animals), and does not provide statistics on fish.  This agrees with CountingAnimals on non-fish totals.

Days Lived Per Animal

One problem, however, is that saving a cow that could suffer for years is different from saving a chicken that suffers for only about a month.  Using data from Farm Sanctuary plus World Society for the Protection of Animals data on fish [PDF], I get this table:

Animal           Number   Days Alive
Chicken (Meat)   28       42
Chicken (Egg)    2        365
Cow (Beef)       0.125    365
Cow (Milk)       0.033    1460
Fish             225      365

This makes the weighted average 329.6 days[5].
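For anyone who wants to reproduce the weighted average, here's a short sketch using the table values above:

```python
# Weighted average days lived per animal spared (see footnote 5).
animals = {
    # name: (animals spared per vegetarian-year, days alive per animal)
    "chicken (meat)": (28, 42),
    "chicken (egg)": (2, 365),
    "cow (beef)": (0.125, 365),
    "cow (milk)": (0.033, 1460),
    "fish": (225, 365),
}

total_animals = sum(count for count, _ in animals.values())        # ~255.16
total_days = sum(count * days for count, days in animals.values())
print(f"Weighted average: {total_days / total_animals:.1f} days")  # ~330
```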

 

Accounting For Biases

As I said before, our formula was Years of Suffering Averted per Dollar = (Pamphlets / dollar) * (Conversions / pamphlet) * (Veg years / conversion) * (Animals saved / veg year) * (Days lived / animal) / (365 days / year).

Let's plug these values in... Years of Suffering Averted per Dollar = 5 * 0.02 * 3 * 255.16 * 329.6/365 = 69.12.

Or, assuming all this is right (and that's a big assumption), it would cost less than 2 cents to prevent a year of suffering on a factory farm by buying vegetarians.
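Here's a minimal sketch of the simple calculator's formula (not the calculator itself) with those point estimates plugged in:

```python
# A minimal sketch of the simple calculator: years of factory-farm suffering
# averted per dollar, using the point estimates plugged in above.
pamphlets_per_dollar = 5          # people reached per dollar
conversions_per_pamphlet = 0.02   # ~2% of recipients go vegetarian
veg_years_per_conversion = 3      # years each convert stays vegetarian
animals_per_veg_year = 255.16     # animals spared per vegetarian-year
days_per_animal = 329.6           # weighted average days lived per animal

years_averted_per_dollar = (pamphlets_per_dollar
                            * conversions_per_pamphlet
                            * veg_years_per_conversion
                            * animals_per_veg_year
                            * days_per_animal / 365)

print(f"Years averted per dollar: {years_averted_per_dollar:.1f}")       # ~69
print(f"Dollars per year averted: ${1 / years_averted_per_dollar:.3f}")  # ~$0.014
```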

I don't want to make it sound like I'm beholden to this cost estimate or that this estimate is the "end all, be all" of vegan outreach.  Indeed, I share many of the skepticisms that have been expressed by others.  The simple calculation is... well... simple, and it needs some "beefing up", no pun intended.  Therefore, I also built a "complex calculator" that works on a much more complex formula[6] that is hopefully correct[7] and will provide a more accurate estimation.

 

The big, big concern for the surveys is bias.  The most frequently mentioned is social desirability bias: people saying they reduced meat consumption just because they want to please the surveyor or look like a good person, which happens on surveys a lot more than we'd like.

To account for this, we have to figure out how inflated answers are because of this bias and then scale them down by that amount.  Nick Cooney says he has been reading studies suggesting that only about 25% to 50% of people who say they are vegetarian actually are, though I don't yet have the citations.  Thus, if we find that an advertisement creates two meat reducers, we'd scale that down to one reducer if we're expecting a 50% desirability bias.

 

The second bias that will be a problem for us is non-response bias: those who didn't reduce their meat consumption are less likely to take the survey and therefore less likely to be counted.  This is especially true in the Facebook study, which only measures people who "liked" the page or requested a starter kit, both signs of pro-vegetarian affiliation.

We can partly balance this out by assuming everyone who didn't take the survey went on to have no behavior change whatsoever.  Nick Cooney's Facebook ad survey covers the 7% of people who liked the page (and then responded to the survey), and those who liked the page are obviously more likely to reduce their consumption.  I chose an optimistic value of 90%, treating the survey as essentially representative of the 7% who liked the page, plus a bit more for those who reduced their consumption but did not like the page.  My pessimistic value was 95%, assuming everyone who did not like the page went unchanged and assuming a small response bias among those who liked the page but chose not to take the survey.

For the pamphlets, however, there should be no response bias, since the entire population of college students was sampled at random and no one was reported to have refused the survey.
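To make the adjustment concrete, here's an illustrative sketch of how such multiplicative corrections might be applied; the specific multipliers are stand-ins, not the calculator's sourced values:

```python
# Illustrative bias correction: scale a raw survey-based conversion rate by a
# desirability-bias multiplier and a response-bias multiplier. The numbers
# below are placeholders, not the calculator's actual inputs.
raw_conversion_rate = 0.026     # the ~2.6% Facebook estimate above

desirability_multiplier = 0.5   # e.g. only half of self-reported reducers are genuine
response_multiplier = 0.9       # e.g. assume the survey slightly overrepresents reducers

adjusted_rate = raw_conversion_rate * desirability_multiplier * response_multiplier
print(f"Adjusted conversion rate: {adjusted_rate:.2%}")  # ~1.2%
```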

 

Additional People Are Being Reached

In the Facebook survey, those who said they reduced their meat consumption were also asked if they had influenced any friends or family to eat less meat; on average, each reported creating 0.86 additional reducers.

This figure seems very high, but I do strongly expect the figure to be positive -- people who reduce eating meat will talk about it sometimes, essentially becoming free advertisements.  I'd be very surprised if they ended up being a net negative.

 

Accounting for Product Elasticity

Another way to improve the accuracy of the estimate is to be more precise about what happens when someone stops eating meat.  The change doesn't come from the refusal to eat itself, but from the reduced demand for meat, which leads to reduced supply.  Following the laws of economics, however, this reduction won't necessarily be one-for-one; it depends on the elasticity of supply and demand for the product.  With this number, we can find out how much less meat is produced for every unit of meat not demanded.
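As a rough illustration, here's one common first-order approximation for that adjustment (not necessarily the exact method behind the sourced estimates below); the elasticities in the example are made up:

```python
# Illustrative elasticity adjustment. In a simple partial-equilibrium model,
# production falls by e_s / (e_s + |e_d|) units for every unit of demand
# removed, where e_s and e_d are the supply and demand elasticities.
def production_reduction(units_not_demanded, supply_elasticity, demand_elasticity):
    factor = supply_elasticity / (supply_elasticity + abs(demand_elasticity))
    return units_not_demanded * factor

# Example with made-up elasticities (not the sourced values in the calculator):
print(production_reduction(100, supply_elasticity=0.6, demand_elasticity=-0.7))  # ~46
```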

My guesses in the calculator come from the following sources, some of which are PDFs: Beef #1, Beef #2, Dairy #1, Dairy #2, Pork #1, Pork #2, Egg #1, Egg #2, Poultry, Salmon, and all fish.

 

Putting It All Together

Implementing the formula on the calculator, we end up with an estimate of $0.03 to $36.52 to reduce one year of suffering on a factory farm based on the Facebook ad data and an estimate of $0.02 to $65.92 based on the pamphlet data.

Of course, many people are skeptical of these figures.  Perhaps surprisingly, so am I.  I'm trying to strike a balance between advocating vegan outreach as a very promising path to making the world a better place and not losing sight of the methodological hurdles that have not yet been cleared, while staying open to the possibility that I'm wrong about this.

The big methodological elephant in the room is that my entire cost estimate depends on having a plausible guess for how likely someone is to change their behavior based on seeing an advertisement.

I feel slightly reassured because:

  1. There are two surveys for two different media, and they both provide estimates of impact that agree with each other.
  2. These estimates also match anecdotes from leafleters about approximately how many people come back and say they went vegetarian because of a pamphlet.
  3. Even if we were to take the simple calculator and drop the "2% chance of getting four years of vegetarianism" assumption down to, say, a pessimistic "0.1% chance of getting one year" conversion rate, the estimate is still not too bad -- $0.91 to avert a year of suffering.
  4. More studies are on the way.  Nick Cooney is going to do a bunch more to study leaflets, and Xio Kikauka and Joey Savoie have publicly published some survey methodology [Google Docs].

That said, the possibility for desirability bias in the survey is a large concern as long as the surveys continue to be from overt animal welfare groups and continue to clearly state that they're looking for reductions in meat consumption.

Also, so long as surveys are only given to people that remember the leaflet or advertisement, there will be a strong possibility of response bias, as those who remember the ad are more likely to be the ones who changed their behavior.  We can attempt to compensate for these things, but we can only do so much.

Furthermore, and more worrying, there's a concern that the surveys are just measuring normal drift in vegetarianism, without any changes being attributable to the ads themselves.  For example, imagine that every year, 2% of people become vegetarians and 2% quit.  Surveying these people at random and not capturing those who quit will end up finding a 2% conversion rate.

How can we address these?  I think all three problems could be solved with a decent control group, whether that's a group of people who receive a leaflet about something other than vegetarianism or no leaflet at all.  Luckily, Kikauka and Savoie's survey intends to do just that.

Jeff Kaufman has a good proposal for a survey design I'd like to see implemented in this area.

 

Market Saturation and Diminishing Marginal Returns?

Another concern is that there are diminishing marginal returns to these ads.  As the critique goes, there are only so many people that will be easily swayed by the advertisement, and once all of them are quickly reached by Facebook ads and pamphlets, things will dry up.

Unlike the others, I don't think this criticism works well.  After all, even if it were true, it still would be worthwhile to take the market as far as it will go, and we can keep monitoring for saturation and find the point where it's no longer cost-effective.

However, I don't think the market has been tapped out yet at all.  According to Nick Cooney [PDF], there are still many opportunities in foreign markets and outside the young, college-kid demographic.

 

The Conjunction Fallacy?

The conjunction fallacy is a classic fallacy that reminds us that no matter what, the chance of event A happening can never be smaller than the chance of event A happening, followed by event B.  For example, the probability that Linda is a bank teller will always be larger than (or equal to) the probability that Linda is a bank teller and a feminist.

What does this mean for vegetarian outreach?  Well, for the simple calculator, we're estimating five factors.  In the complex calculator, we're estimating 50 factors.  Even if each factor is 99% likely to be correct, the chance that all five are right is 95%, and the chance that all 50 are right is only 60%.  If each factor is only 90% likely to be correct, the complex calculator will be right with a probability of just 0.5%!
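A quick check of that arithmetic:

```python
# Probability that every estimated factor is simultaneously correct,
# assuming independent errors.
p = 0.99
print(f"All 5 factors right:  {p ** 5:.1%}")          # ~95%
print(f"All 50 factors right: {p ** 50:.1%}")         # ~60%
print(f"All 50 right at 90% each: {0.9 ** 50:.2%}")   # ~0.5%
```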

This is a cause for concern, but I don't think there's any way around it.  It's just an inherent problem with estimation.  Hopefully it will be balanced out by (1) using ranges rather than point estimates and (2) underestimates and overestimates cancelling each other out.

 

Conversion and The 100 Yard Line

Something we should take into account that helps the case for this outreach rather than hurting it is that conversions aren't binary -- someone can be pushed by the ad to be more likely to reduce their meat intake without being fully converted.  As Brian Tomasik puts it:

Yes, some of the people we convince were already on the border, but there might be lots of other people who get pushed further along and don’t get all the way to vegism by our influence. If we picture the path to vegism as a 100-yard line, then maybe we push everyone along by 20 yards. 1/5 of people cross the line, and this is what we see, but the other 4/5 get pushed closer too. (Obviously an overly simplistic model, but it illustrates the idea.)

This would be either very difficult or outright impossible to capture in a survey, but is something to take into account.

 

Three Places I Might Donate Before Donating to Vegan Outreach

When all is said and done, I like the case for funding this outreach.  However, I think there are three other possibilities along these lines that I find more promising:

Funding the research of vegan outreach: There need to be more and higher-quality studies of this before one can feel confident in the cost-effectiveness of this outreach.  However, initial results are very promising, and the value of information from more studies is therefore very high.  Studies can also find ways to advertise more effectively, increasing the impact of each dollar spent.  Right now it looks like all ongoing studies are fully funded, but if there were opportunities to fund more, I would jump on them.

Funding Effective Animal Activism: EAA is an organization pushing for more cost-effectiveness in the domain of nonhuman animal welfare and is working to further evaluate what opportunities are the best, Givewell-style.  Giving them more money can potentially attract a lot more attention to this outreach, and get it more scrutiny, research, and money down the line.

Funding Centre for Effective Altruism: Overall, it might just be better to get more people involved in the idea of giving effectively, and then getting them interested in vegan outreach, among other things.

 

Conclusion

Vegan outreach is a promising, though not fully studied, method of outreach that deserves both excitement and skepticism.  Should one put money into it?  Overall, I'd take a guarded approach of putting in just enough money to help the organizations learn, develop better cost-effective measurements and transparency, and become more effective.  It shouldn't be too long before this area will become studied well enough to have good confidence in how things are doing.

More studies should be developed that explore advertising vegetarianism in a wide variety of media in a wide variety of ways, with decent control groups.

I look forward to seeing how this develops.  Don't forget to play around with my calculator.

-

 

Footnotes

[1]: Cost effectiveness in years of suffering prevented per dollar = (Pamphlets / dollar) * (Conversions / pamphlet) * (Veg years / conversion) * (Animals saved / veg year) * (Years lived / animal).

Plugging in 80K's values... Cost effectiveness = (Pamphlets / dollar) * 0.01 to 0.03 * 25 * 100 * (Years lived / animal)

Filling in the gaps with my best guesses... Cost effectiveness = 5 * 0.01 to 0.03 * 25 * 100 * 0.90 = 112.5 to 337.5 years of suffering averted per dollar
I personally think 25 veg-years per conversion on average is possible but too high; I personally err from 4 to 7.
[2]: I feel like there's an error in this calculation or that Kaufman might disagree with my assumptions of number of animals or days per animal, because I've been told before that these estimates with this method are supposed to be about an order of magnitude higher than other estimates.  However, I emailed Kaufman and he seemed to not find any fault with the calculation, though he does think the methodology is bad and the calculation should not be taken at face value.
[3]: I calculated the number of vegetarians by eyeballing how many people said they no longer eat fish, which I'd guess only a vegetarian would be willing to give up.
[4]: 32 vegetarians / 104 people = 30.7%.  That population is 8.5% (7% for likes + 1.5% for the starter kit) of the overall population, leading to 2.61% (30.7% * 8.5%).
[5]: Formula is [(Number Meat Chickens)(Days Alive) + (Number Egg Chickens)(Days Alive) + (Number Beef Cows)(Days Alive) + (Number Milk Cows)(Days Alive) + (Number Fish)(Days Alive)] / (Total Number Animals).  Plugging things in: [(28)(42) + (2)(365) + (0.125)(365) + (0.033)(1460) + (225)(365)] / 255.16 = 329.6 days

[6]:
Cost effectiveness in days of suffering prevented per dollar = (People Reached / Dollar + (People Reached / Dollar * Additional People Reached / Direct Reach * Response Bias * Desirability Bias)) * Years Spent Reducing * [sum of the per-animal-product terms below] * Response Bias * Desirability Bias

The per-animal-product term, computed separately for beef, dairy, pig, broiler chicken, egg, turkey, farmed fish, and sea fish, is:

((Percent Increasing * Increase Value) + (Percent Staying Same * Staying Same Value) + (Percent Decreasing Slightly * Decrease Slightly Value) + (Percent Decreasing Significantly * Decrease Significantly Value) + (Percent Eliminating * Elimination Value) + (Percent Never Ate * Never Ate Value)) * Normal Consumption * Elasticity * (Average Lifespan + Days of Suffering from Slaughter)

For sea fish, the Average Lifespan term is omitted and only Days of Suffering from Slaughter is counted.
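As a rough illustration of the structure (not the calculator itself), here's a compressed sketch; every numeric value, and the mapping of survey answers to consumption-change fractions, is a placeholder:

```python
# A compressed sketch of the complex formula's structure. Every number below,
# and the mapping of survey answers to consumption-change fractions, is a
# made-up placeholder; the real calculator supplies sourced values for each
# animal product.
CHANGE_VALUES = {
    "increase": -0.2, "same": 0.0, "decrease_slightly": 0.2,
    "decrease_significantly": 0.5, "eliminate": 1.0, "never_ate": 0.0,
}

def animal_term(answer_shares, normal_consumption, elasticity,
                average_lifespan_days, slaughter_suffering_days):
    """Days of suffering averted per person reached, for one animal product."""
    diet_change = sum(CHANGE_VALUES[answer] * share
                      for answer, share in answer_shares.items())
    return (diet_change * normal_consumption * elasticity
            * (average_lifespan_days + slaughter_suffering_days))

def days_averted_per_dollar(people_per_dollar, extra_reach_ratio,
                            response_bias, desirability_bias,
                            years_reducing, animal_terms):
    reach = people_per_dollar * (1 + extra_reach_ratio * response_bias * desirability_bias)
    return reach * years_reducing * sum(animal_terms) * response_bias * desirability_bias

# Purely illustrative usage with placeholder values for broiler chickens only:
chicken = animal_term(
    {"increase": 0.02, "same": 0.60, "decrease_slightly": 0.20,
     "decrease_significantly": 0.10, "eliminate": 0.05, "never_ate": 0.03},
    normal_consumption=28, elasticity=0.7,
    average_lifespan_days=42, slaughter_suffering_days=1)
print(days_averted_per_dollar(5, 0.86, 0.9, 0.5, 3, [chicken]))
```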
[7]: Feel free to check the formula for accuracy and also check to make sure the calculator implements the formula correctly.  I worry that the added accuracy from the complex calculator is outweighed by the risk that the formula is wrong.

-

Edited 18 June to correct two typos and update footnote #2.

Also cross-posted on my blog.

553 comments

Comments sorted by top scores.

comment by CarlShulman · 2013-06-12T21:21:41.086Z · LW(p) · GW(p)

Nick Cooney says he has been reading studies suggesting that only about 25% to 50% of people who say they are vegetarian actually are, though I don't yet have the citations. Thus, if we find that an advertisement creates two meat reducers, we'd scale that down to one reducer if we're expecting a 50% desirability bias

This doesn't follow. The intervention is increasing the desirability bias, so the portion of purported vegetarians who are actually vegetarian is likely to change, in the direction of a lower proportion of true vegetarianism. It's plausible that 90%+ of the marginal purported vegetarians are bogus. Consider ethics and philosophy professors, who are significantly more likely to profess that eating meat is wrong:

There is no statistically detectable difference between the ethicists and either group of non-ethicists. (The difference between non-ethicists philosophers and the comparison professors was significant to marginal, depending on the test.)

Conclusion? Ethicists condemn meat-eating more than the other groups, but actually eat meat at about the same rate. Perhaps also, they're more likely to misrepresent their meat-eating practices (on the meals-per-week question and at philosophy functions) than the other groups.

A different frame: the claim here is that facebook ads for vegetarianism are unbelievably effective. We can decompose supporting arguments for that into "facebook ads are unbelievably effective" and "vegetarianism is incredibly easy to proselytize."

For comparison, estimates from randomized trials of get-out-the-vote campaigns (where one can actually measure changes in turnout, as votes are counted) are in the tens to hundreds of dollars per marginal voter turned out (before adjustments for other biases, etc (quotes below)).

Some other differences between vegetarianism and voting:

  • There is a much stronger moral consensus about voting than vegetarianism
  • Vegetarianism is a sustained costly effort, whereas voting is a one-time event
  • There are more GOTV campaigns, so vegetarian ads may face lower-hanging fruit
  • Images of animals may or may not be more effective than GOTV reminders/arguments

One handy reference is Donald Green and Alan Gerber's Get Out the Vote, which reviews dozens of experiments bearing on the cost-effectiveness of get-out-the-vote (GOTV) efforts.

The key results are summarized in a table on page 139 (viewable in the Google Books preview linked). The strongest well-confirmed effect is for door-to-door GOTV drives, which average 14 voters contacted to induce one vote (plus spillover effects), with a cost per vote of $29 (including spillover effects) assuming staff time costs $16/hour. Phone banks require more contacts per vote but are cheaper per contact, with Green and Gerber estimating the cost per vote at $38 for campaign volunteer callers and $90 for untrained commercial callers.

In recent years, the U.S. political parties have adjusted their GOTV strategy in line with these experiments, and turnout has increased. For instance, in 2004 Green and Gerber predicted that the parties would increase GOTV spending by some $200 million using methods averaging $50 per vote, for an increase in turnout of 4 million, and the turnout data seems consistent with that. This money was concentrated in swing states, and in 2004 turnout increased 9% to 63% in the twelve most competitive states, while increasing 2% to 53% in the twelve least competitive states (while clearly leaving many potential voters home).

ETA:

Cattle have a bit less than 1/3rd the brain mass of humans, chickens hundreds of times less, and fish down more than an additional order of magnitude relative to body size (moreso by cortex). If you weight expected value by neurons, which is made plausible by thinking about things like split-brain patients and local computations in nervous systems, that will drastically change the picture and reduce cost-effectiveness.

Personally, I would care more about a day's experience for a cow than for a small feed fish with orders of magnitude less neural capacity.

Replies from: peter_hurford, peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-13T03:45:50.744Z · LW(p) · GW(p)

This is actually a really good point that makes me less confident in the effectiveness of vegetarianism advocacy.

Replies from: CarlShulman, CarlShulman
comment by CarlShulman · 2013-06-13T06:18:20.582Z · LW(p) · GW(p)

An additional point:

Cattle have a bit less than 1/3rd the brain mass of humans, chickens about 1/40th, and fish are down more than an order of magnitude (moreso by cortex). If you weight expected value by neurons, which is made plausible by thinking about things like split-brain patients and local computations in nervous systems, that will drastically change the picture.

My quick back-of-the-envelope (which didn't take into account the small average size of the mostly feed fish involved, and thus their reduced neural tissue) is that making this adjustment would cut the cost-effectiveness metric by a factor of at least 400, and plausibly 1000+. This reflects the fact that fish make up most of the life-days in the calculation, and also have comparatively tiny and simple nervous systems. Personally, I would pay more to ensure a painless death for a cow than for a small feed fish with orders of magnitude less neural capacity.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-13T06:21:45.984Z · LW(p) · GW(p)

If you weight expected value by neurons

Ah, but now I can turn myself into a utility monster by artificially enlarging my brain! Game over.

Replies from: ciphergoth, CarlShulman
comment by Paul Crowley (ciphergoth) · 2013-06-14T21:16:55.981Z · LW(p) · GW(p)

We're trying to work out how to make progress on moral questions today, not trying to lay down a rule for all eternity that future agents can't game.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-14T21:53:30.279Z · LW(p) · GW(p)

It was a joke.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-06-15T13:09:29.402Z · LW(p) · GW(p)

Oops, sorry!

comment by CarlShulman · 2013-06-13T06:22:56.435Z · LW(p) · GW(p)

Or by having kids. Or copying your uploaded self. Or re-engineering your nervous system in other ways...

comment by CarlShulman · 2013-06-13T03:55:48.251Z · LW(p) · GW(p)

The bit about desirability bias, or the fact that the optimistic estimates involve claiming that vegetarian ads are vastly more effective than other kinds of moralized behavior-change ads with more accurate measurements of effect?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-13T04:42:55.215Z · LW(p) · GW(p)

Both points. The question "why should vegetarianism advocacy be so much more effective than get out the vote advocacy?" is a good point. Since the study quality for get out the vote advocacy is so much higher, we should expect vegetarianism advocacy to end up about the same.

On the other hand, I do think vegetarianism advocacy is a lot more psychologically salient (pictures of suffering) than any case that can be made for voting. I've personally distributed some pro-voting pamphlets, and they're not very compelling at all.

Replies from: Brian_Tomasik
comment by Brian_Tomasik · 2013-06-13T05:17:14.364Z · LW(p) · GW(p)

Good points, Carl! Jonah Sinick actually made the GOTV argument to me on a prior occasion, citing your essay on the topic.

One additional consideration is that nearly everyone knows about voting, but many people don't know about the cruelty of factory farms. This goes along with the low-hanging-fruit point.

I would not be surprised if, after tempering the figures by this outside-view prior, it takes a few hundred dollars to create a new veg year. Even if so, that's at most 1-2 orders of magnitude different from the naive conservative estimate.

comment by Peter Wildeford (peter_hurford) · 2013-06-13T06:33:10.711Z · LW(p) · GW(p)

This is something I've considered a lot, though chickens also dominate the calculations along with fish. I'm not currently sure if I value welfare in proportion to neuron count, though I might. I'd have to sort that out first.

A question at this point I might ask is how good does the final estimate have to be? If AMF can add about 30 years of healthy human life for $2000 by averting malaria and a human is worth 40x that of a chicken, then we'd need to pay less than $1.67 to avert a year of suffering for a chicken (assuming averting a year of suffering is the same as adding a year of healthy life, which is a messy assumption).
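Spelling out that arithmetic (the 40x weighting is just the assumption above):

```python
# Back-of-the-envelope threshold implied by the comparison above.
amf_cost = 2000               # dollars for roughly 30 healthy human life-years
healthy_human_years = 30
human_to_chicken_weight = 40  # assumed: one human-year counts as much as 40 chicken-years

cost_per_human_year = amf_cost / healthy_human_years     # ~$66.67
chicken_year_threshold = cost_per_human_year / human_to_chicken_weight
print(f"${chicken_year_threshold:.2f} per chicken-year of suffering averted")  # ~$1.67
```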

Replies from: RobertWiblin, CarlShulman, KatieHartman
comment by RobertWiblin · 2013-06-14T23:24:06.657Z · LW(p) · GW(p)

I think some weighting for the sophistication of a brain is appropriate, but I think the weighting should be sub-linear w.r.t. the number of neurones; I expect that in simpler organisms, a larger share of the brain will be dedicated to processing sensory data and generating experiences. I would love someone to look into this to check if I'm right.

Replies from: CarlShulman
comment by CarlShulman · 2013-06-15T18:12:10.917Z · LW(p) · GW(p)

I agree on that effect, I left out various complications. A flip side to that would be the number of cortex neurons (and equivalents). These decrease rapidly in simpler nervous systems.

We don't object nearly as much to our own pains that we are not conscious of and don't notice or know about, so weighting by consciousness of pain, rather than pain/nociception itself, is a possibility ( I think that Brian Tomasik is into this).

comment by CarlShulman · 2013-06-13T06:50:06.365Z · LW(p) · GW(p)

A question at this point I might ask is how good does the final estimate have to be?

First, there are multiple applications of accurate estimates.

The unreasonably low estimates would suggest things like "I'm net reducing factory-farming suffering if I eat meat and donate a few bucks, so I should eat meat if it makes me happier or healthier sufficiently to earn and donate an extra indulgence of $5."

There are some people going around making the claim, based on the extreme low-ball cost estimates, that these veg ads would save human lives more cheaply than AMF by reducing food prices. With saner estimates, not so, I think.

Second, there's the question of flow-through effects, which presumably dominate in a total utilitarian calculation anyway, if that's what you're into. The animal experiences probably don't have much effect there, but people being vegetarian might have some, as could effects on human health, pollution, food prices, social movements, etc.

To address the total utilitarian question would require a different sort of evidence, at least in the realistic ranges.

Replies from: Louie
comment by Louie · 2013-06-16T10:24:35.060Z · LW(p) · GW(p)

The unreasonably low estimates would suggest things like "I'm net reducing factory-farming suffering if I eat meat and donate a few bucks, so I should eat meat if it makes me happier or healthier sufficiently to earn and donate an extra indulgence of $5 ." There are some people going around making the claim, based on the extreme low-ball cost estimates.

Correct. I make this claim. If vegetarianism is that cheap, it's reasonable to bin it with other wastefully low-value virtues like recycling paper, taking shorter showers, turning off lights, voting, "staying informed", volunteering at food banks, and commenting on less wrong.

comment by KatieHartman · 2013-06-17T03:03:17.394Z · LW(p) · GW(p)

If AMF can add about 30 years of healthy human life for $2000 by averting malaria and a human is worth 40x that of a chicken, then we'd need to pay less than $1.67 to avert a year of suffering for a chicken (assuming averting a year of suffering is the same as adding a year of healthy life, which is a messy assumption).

This might be a minor point, but I don't think it's necessarily a given that one year of healthy, average-quality life offsets one year of factory farm-style confinement. If we were only discussing humans, I don't think anyone would consider a year under those conditions to be offset by a healthy year.

comment by Viliam_Bur · 2013-06-12T19:50:59.739Z · LW(p) · GW(p)

You could also reduce meat consumption by advertising good vegetarian meal recipes.

(Generally, the idea is that you can reduce eating meat even without explicitly promoting not eating meat.)

Replies from: peter_hurford, freeze
comment by Peter Wildeford (peter_hurford) · 2013-06-12T21:10:57.231Z · LW(p) · GW(p)

Are you suggesting that one simply advertise the existence of good vegetarian recipes without mentioning surrounding reasons for reducing meat?

This is already a strong component in existing advocacy, though none of it mentions recipes alone. Leading pamphlets like "Compassionate Choices" and "Even if You Like Meat" have recipe sections at the end of the book. Peter Singer's book Animal Liberation has recipes. Vegan Outreach has a starter guide section with lots of recipes.

As far as I know, the videos used on the internet don't directly mention recipes, but do point to ChooseVeg.com which has tons of recipes and essentially advertises vegetarianism via a recipe-based argument. Another recent campaign, The Seven Day Vegan Challenge also advertises based on a lot of recipes.

Replies from: SaidAchmiz, AspiringRationalist, Raemon
comment by Said Achmiz (SaidAchmiz) · 2013-06-13T00:07:53.156Z · LW(p) · GW(p)

Are you suggesting that one simply advertise the existence of good vegetarian recipes without mentioning surrounding reasons for reducing meat?

I agree with Viliam_Bur that this may be effective, and here's why.

I bake as a hobby (desserts: cakes, pies, etc.). I am not a vegetarian; I find moral arguments for vegetarianism utterly unconvincing and am not interested in reducing the suffering of animals and so forth.

However, I often like to try new recipes, to expand my repertoire, hone my baking skills, try new things, etc. Sometimes I try out vegan dessert recipes, for the novelty and the challenge of making something that is delicious without containing eggs or dairy or white sugar or any of the usual things that go into making desserts taste good.[1]

More, and more readily available, high-quality vegan dessert recipes would mean that I substitute more vegan dessert dishes for non-vegan ones. This effect would be quite negated if the recipes came bundled with admonitions to become vegan, pro-vegan propaganda, comments about how many animals this recipe saves, etc.; I don't want to be preached to, which I think is a common attitude.

[1] My other (less salient) motivation for learning to make vegan baked goods is to be prepared if I ever have vegan/vegetarian friends who can't eat my usual stuff (hasn't ever been the case so far, but it could happen).

Replies from: Viliam_Bur, GordonAitchJay, Swimmer963, Douglas_Knight
comment by Viliam_Bur · 2013-06-13T07:21:31.259Z · LW(p) · GW(p)

Thanks, this is what I tried to say. Reducing suffering is far, eating well is near.

Also, if a book or a website comes with vegetarian/vegan propaganda, I would assume those people are likely to lie or exaggerate. No propaganda -- no suspicion.

This may be just about vegetarians around me, but often people who are into vegetarianism are also into other forms of food limitations, so I often find their food unappealing. They act like an anti-advertisement for vegetarian food. (Perhaps there is an unconscious status motive here: the fewer people join them, the more noble they are. Which is not how an effective altruist should think.) On the other hand, when I go to an Indian or similar ethnic restaurant, I love the food. It tastes good, has varied components and good spices. I mean, what's wrong with using spice? If your goal is to reduce animal suffering, nothing. But if your goal is to have the weirdest diet possible (no meat, no cooking, no taste, everything compatible with the latest popular book or your horoscope), spice is usually on the list of forbidden components.

In short, vegetarianism is often not about not eating animals. So if you focus on "good meal (without meat)" part, and ignore the vegetarianism, you may win people like me. Even if I don't promise to give up meat completely, I can reduce its consumption simply because tasty meals without meat outcompete tasty meals with meat on my table.

Replies from: amcknight
comment by amcknight · 2013-06-18T23:59:02.307Z · LW(p) · GW(p)

This may be just about vegetarians around me, but often people who are into vegetarianism are also into other forms of food limitations

I think I've noticed this a bit since switching to a vegan(ish) diet 4 months ago. My guess is that once a person starts making diet restrictions, it becomes much easier to make diet restrictions, and once a person starts learning where their food comes from, it becomes easier to find reasons to make diet restrictions (even dumb reasons).

comment by GordonAitchJay · 2013-06-13T11:59:42.075Z · LW(p) · GW(p)

What were the moral arguments for vegetarianism that you found utterly unconvincing? Where did you hear or read these?

Are you interested in reducing the suffering of humans? If so, why?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-13T14:26:06.724Z · LW(p) · GW(p)

What were the moral arguments for vegetarianism that you found utterly unconvincing? Where did you hear or read these?

The ones that say we should care about what happens to animals and what animals experience, including arguments from suffering. I've heard them in lots of places; the OP has himself posted an example — his own essay "Why Eat Less Meat?"

Are you interested in reducing the suffering of humans?

Yeah.

If so, why?

I think if you unpacked this aspect of my values, you'd find something like "sapient / self-aware beings matter" or "conscious minds that are able to think and reason matter". That's more or less how I think about it, though converting that into something rigorous is nontrivial. "Matter" here is used in a broad sense; I care about sapient beings, think that their suffering is wrong, and also consider such beings the appropriate reference class for "veil of ignorance" type arguments, which I find relevant and at least partly convincing.

My caring about reducing human suffering has limits (in more than one dimension). It is not necessarily my highest value, and interacts with my other values in various ways, although I mostly use consequentialism in my moral reasoning and so those interactions are reasonably straightforward for the most part.

Replies from: freeze
comment by freeze · 2015-09-03T15:54:25.539Z · LW(p) · GW(p)

Do you think that animals can suffer?

Or, what evolutionary difference do you think gives a difference in the ability to experience consciousness at all between humans and other animals with largely similar central nervous systems/brains?

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-06-13T12:49:46.796Z · LW(p) · GW(p)

White sugar has animal products in it?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-13T14:09:29.546Z · LW(p) · GW(p)

Not as such, no, but animal products are used in its manufacture: bone char is used in the sugar refining process (by some manufacturers, though not all), making it not ok for vegans.

Replies from: Swimmer963, army1987
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-06-13T18:15:44.930Z · LW(p) · GW(p)

Wow. I learned something that I did not know before :)

comment by A1987dM (army1987) · 2013-06-14T21:11:59.821Z · LW(p) · GW(p)

I had heard that plenty of times, but I had never bothered to check whether or not that was just an urban legend.

comment by Douglas_Knight · 2013-06-13T16:41:13.943Z · LW(p) · GW(p)

Have you experimented with baking with lard?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-13T16:54:56.469Z · LW(p) · GW(p)

I have not. Christopher Kimball, in The Dessert Bible, comments that unless you can get leaf lard (the highest grade of lard, which comes from the fat around the pig's kidneys), using lard in dessert recipes is undesirable (results in the dough having a bacon-y taste). I don't think I can get leaf lard here in NYC, and even if I could it would probably be very expensive.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-06-13T17:33:10.492Z · LW(p) · GW(p)

NYC? of course you can. Or mail-order.

But I would start with regular lard in the right recipes.

On a different note, I usually substitute brown sugar for white for the taste.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-13T17:43:34.261Z · LW(p) · GW(p)

Oh? Do you know any good places to get it in NYC? (Preferably Brooklyn, Manhattan also fine.)

Yes, brown for white sugar is a good substitution sometimes. However it can partially mute the taste of other ingredients, like fresh fruit, so it's not always ideal. Also, brown sugar is definitely more expensive.

Replies from: novalis
comment by novalis · 2013-06-13T18:01:28.790Z · LW(p) · GW(p)

I would be shocked if Ottomanelli's on Bleecker didn't have leaf lard.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-13T18:12:40.701Z · LW(p) · GW(p)

The internet tells me they don't carry it, but can special-order it. Mail-order, by the way, looks to come out to $10 / lb. at least, if you can get it; very few places seem to carry it.

Replies from: novalis
comment by novalis · 2013-06-14T00:34:43.540Z · LW(p) · GW(p)

You might have to call them; they will special-order just about anything. The only thing I have failed to find there was rabbit ears (without buying the whole rabbit).

comment by NoSignalNoNoise (AspiringRationalist) · 2013-06-13T02:00:23.800Z · LW(p) · GW(p)

This is already a strong component in existing advocacy, though none of it mentions recipes alone. Leading pamphlets like "Compassionate Choices" and "Even if You Like Meat" have recipe sections at the end of the book. Peter Singer's book Animal Liberation has recipes. Vegan Outreach has a starter guide section with lots of recipes.

Many non-vegetarians are suspicious of organizations that try to convince them to be vegetarian. It might be more effective to promote vegetarian recipes separately from "don't eat meat" efforts.

Incidentally, I would love to know of more (not too difficult) ways to cook tofu.

Replies from: Alicorn
comment by Alicorn · 2013-06-13T02:16:14.142Z · LW(p) · GW(p)

I like to take the firmest tofu I can find (this is usually vacuum-packed, not water-packed) and cut it into slices or little cubes, and then pan-fry it in olive oil with a splash of lemon juice added halfway through till it's golden-brown and chewy. Then I put it in pasta (cubes) or on sandwiches (slices) - the sandwich kind is especially nice with spinach sauteed with cheese and hummus.

comment by Raemon · 2013-06-13T18:25:38.777Z · LW(p) · GW(p)

I think that simply promoting good vegetarian meals would potentially reduce meat consumption among certain groups of people that would be less receptive to accompanying pro-vegetarian arguments. I think it should be part of a vegan-advocacy arsenal (i.e. you do a bunch of different sorts of flyers/ads/promotions, some of which is just recipe spreading without any further context)

However, if one of your goals is to increase human compassion for nonhumans, then recipe spreading is dramatically less useful in the long term. One of the biggest arguments (among LW folk anyway) for animal advocacy is that not only are factory farms (and the wilderness) pretty awful, but that it'll hopefully translate into more humanely managed eco-systems, once we go off terraforming or creating virtual worlds.

(It may turn out to be effective to get people to try out vegan recipes [without accompanying pro-vegan context] and then later on promote actual vegan ideals to the same people, after they've already taken small steps that indirectly bias themselves towards identifying with veganism)

comment by freeze · 2015-09-03T16:30:17.947Z · LW(p) · GW(p)

Perhaps, but consider the radical flank effect: https://en.wikipedia.org/wiki/Radical_flank_effect

Encouraging the desired end goal, the total cessation of meat consumption, may be more effective than just encouraging reduction even in the short to moderate run (certainly the long run) by moving the middle.

comment by KatieHartman · 2013-06-16T12:15:26.192Z · LW(p) · GW(p)

I'm really curious why all of the major animal welfare/rights organizations seem to be putting more emphasis on vegan outreach than on in-vitro meat/genetic modification research. I have a hard time imagining a scenario where any arbitrary (but large) contribution toward vegan outreach leads to greater suffering reduction than the same amount put toward hastening a more efficient and cruelty-free system for producing meat.

Replies from: CAE_Jones, peter_hurford, Jabberslythe, hylleddin, freeze
comment by CAE_Jones · 2013-06-16T12:39:37.862Z · LW(p) · GW(p)

There seems to be, based just on my non-rigorous observations, significant overlap between the Vegan/Vegetarian communities and the "Genetically Modified Foods and big Pharma will turn your babies into money-forging cancer" theorists. Obviously not all Vegans are "chemicals=bad because nature" conspiracy theorists, and not all such conspiracy theorists are vegan, but the overlap seems significant. That vocal overlap group strikes me as likely to oppose lab-grown meat because it's unnatural, and then the conspiracy theories will begin. And the animal rights groups probably don't want to divide up their base any further.

(This comment felt harsh to me as I was writing it, even after I cut out other bits. The feeling I'm getting is very similar to political indignation. If this looks mind-killed to anyone else, please correct me.)

Replies from: KatieHartman, freeze
comment by KatieHartman · 2013-06-16T13:05:27.825Z · LW(p) · GW(p)

That seems plausible, though PETA already has a million-dollar prize for anyone who can mass-market an in-vitro meat product. Given their annual revenues (~$30 million) and the cost associated with that kind of project, it seems like they're going about it the wrong way.

From a utilitarian perspective, wireheading livestock might be an even better option - though that probably would be perceived by most animal activists (and people in general) as vaguely dystopian.

Replies from: None, ialdabaoth
comment by [deleted] · 2013-06-17T11:20:34.515Z · LW(p) · GW(p)

Does the technology to reliably and cheaply wirehead farmed animals now exist at all? Without claiming expertise, I find that unlikely.

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2013-06-18T14:09:42.384Z · LW(p) · GW(p)

Opium in the feed? Cut their nerves? Some sort of computerised gamma-ray brain surgery? I'm certain that if there were a tiny financial incentive for agribusiness to do it then a way would swiftly be found.

It's not so hard to turn humans into living vegetables. Some sorts of head trauma seem to do it. How hard can it be to make that reliable (or at least reasonably reliable) for cows?

Least convenient world and all that: If we could prevent animal suffering by skilfully whacking calves over the head with a claw hammer, would that be a goal to which the rational vegan would aspire? It would be just as good as killing them, plus pleasure for the meat eaters. Also it would probably be possible to find people who'd enjoy doing it, so that's another plus.

Replies from: Nornagest, Jabberslythe
comment by Nornagest · 2013-06-18T20:00:53.491Z · LW(p) · GW(p)

It's not so hard to turn humans into living vegetables. Some sorts of head trauma seem to do it. How hard can it be to make that reliable (or at least reasonably reliable) for cows?

Probably not that hard. Doing it without ruining the meat or at least reducing yields sounds harder to me, though -- muscles atrophy if they don't get used, and they don't get used if nothing's giving them commands. I'd also expect force-feeding a braindead animal to be more expensive and probably more conducive to health problems than letting it feed itself.

Replies from: gwern
comment by gwern · 2013-06-18T20:48:16.759Z · LW(p) · GW(p)

To continue the 'living vegetables' approach, one could point out that to keep a human in a coma alive and (somewhat) well will cost you somewhere from $500-$3k+. Per day.

Even assuming that animals are much cheaper, taking the bottom of the range and then cutting it by an entire order of magnitude, the 1.5-3 years of raising standard cattle before slaughter means $50 * 1.5 * 365 = >$27.4k in extra expenses.

That's some expensive meat.

comment by Jabberslythe · 2013-06-18T18:52:59.990Z · LW(p) · GW(p)

So just kill all the farm animals painlessly now? Sure, that sounds good. But if farm animals will still be raised, then it seems there is still a problem. Or if you are just talking about ways of making slaughter painless while continuing to factory farm, that sounds better than nothing.

comment by ialdabaoth · 2013-06-17T11:38:10.318Z · LW(p) · GW(p)

though that probably would be perceived by most animal activists (and people in general) as vaguely dystopian.

I find this interesting, because it seems to imply that people have an intuitive sense that eudaimonia applies to animals. I'll have to think about the consequences of this.

comment by freeze · 2015-10-16T20:56:49.260Z · LW(p) · GW(p)

Do you know of any sources for this? In my also non-rigorous experience this is a totally unfounded misperception of veg*nism that people seem to have, founded on nothing but a few quack websites/anti-science blogs.

Consider for instance /r/vegan over at reddit, which is in fact overwhelmingly pro-GMO and ethics rather than health focused. Of course, it is certainly true that the demographics of reddit or that subreddit are much different from that of veg*ns as a whole (or people as a whole). Lesswrong is an even more extreme case of such a limited demographic.

comment by Peter Wildeford (peter_hurford) · 2013-06-17T04:31:06.674Z · LW(p) · GW(p)

A lot of animal welfare/rights organizations provide funding for in-vitro meat / fake meat, though they don't do much to advertise it. The idea is that these meat substitutes won't take off unless they create some demand for them. Vegan Outreach is one of the biggest funders of Beyond Meat and New Harvest.

Replies from: KatieHartman
comment by KatieHartman · 2013-06-19T00:32:16.424Z · LW(p) · GW(p)

I like Beyond Meat, but I think the praise for it has been overblown. For example, the Effective Animal Activism link you've provided says:

[Beyond Meat] mimics chicken to such a degree that renowned New York Times food journalist and author Mark Bittman claimed that it "fooled me badly in a blind tasting".

But reading Bittman's piece, the reader will quickly realize that the quote above is taken out of context:

It doesn’t taste much like chicken, but since most white meat chicken doesn’t taste like much anyway, that’s hardly a problem; both are about texture, chew and the ingredients you put on them or combine with them. When you take Brown’s product, cut it up and combine it with, say, chopped tomato and lettuce and mayonnaise with some seasoning in it, and wrap it in a burrito, you won’t know the difference between that and chicken.

I like soy meat alternatives just fine, but vegans and vegetarians are the market. People who enjoy the taste of meat and don't see the ethical problems with it don't want a relatively expensive alternative with a flavor they have to mask. There's demand for in-vitro meat because there's demand for meat. If you can make a product that tastes the same and costs less, people will buy it.

Maybe it's likely impossible to scale vat meat such that it is actually cheaper to produce, long-term, than meat from conventionally-raised livestock. Has this sort of analysis been done? I'd assume from the numbers New Harvest quotes - 45% reduction in energy use, 95% reduction in water use, etc. - that it is actually possible.

If you put vat meat on a styrofoam plate with a label with a big red barn on it and a cheaper price tag than the stuff next to it, people almost certainly will buy it. If consumers were that discerning about how their meat was produced, they wouldn't buy the stuff that came from an animal that spent its entire life knee-deep in its own excrement.

Replies from: wedrifid, army1987, Osiris
comment by wedrifid · 2013-06-22T01:44:23.407Z · LW(p) · GW(p)

Maybe it's likely impossible to scale vat meat such that it is actually cheaper to produce, long-term, than meat from conventionally-raised livestock.

It seems overwhelmingly unlikely that the optimal method of meat production is to have it walking around eating plant matter and going 'Moo!'.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-22T02:29:55.381Z · LW(p) · GW(p)

Especially for sheep. The training costs would be prohibitive.

comment by A1987dM (army1987) · 2013-06-22T17:48:50.521Z · LW(p) · GW(p)

If you put vat meat on a styrofoam plate with a label with a big red barn on it and a cheaper price tag than the stuff next to it, people almost certainly will buy it.

I dunno -- look at all the brouhaha about genetically modified food.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-22T17:54:32.506Z · LW(p) · GW(p)

That there's a population brouhahaing over GM food doesn't preclude the existence of a population eager to buy cheap tasty-enough meat. Indeed, I expect the populations overlap significantly.

comment by Osiris · 2013-06-19T04:05:10.746Z · LW(p) · GW(p)

I predict a big drop in price soon after vat meat becomes sufficiently popular, due to money saved on dealing with useless organs and suffering, as well as a great big leap in profit for any farm that sells "natural cow meat." One is inherently efficient because it simplifies farming. The other is pretty, however ugly it is for the animals. I do worry about the numbers New Harvest gives, but in the long run, there is hope for this regardless of what the price is initially--the potential for success in feeding humanity cheaply and well is just too great, in my opinion. Seems like I will be pushing "meat in a bucket" whenever possible, and I am not even that into making animals happy.

comment by Jabberslythe · 2013-06-16T19:08:07.972Z · LW(p) · GW(p)

Well, if vegan/vegetarian outreach is particularly effective, then it may do more to develop lab meat than donating to lab meat causes directly would (because there would be more people interested in this and similar technologies). Additionally, making people vegan/vegetarian may have a stronger effect in promoting anti-speciesism in general, which seems like it would be of larger overall benefit than just ending factory farming. This seems like it would happen because thoughts follow actions.

comment by hylleddin · 2013-06-18T22:01:20.033Z · LW(p) · GW(p)

I've wondered about this as well.

We can try to estimate New Harvest's effectiveness using the same methodology attempted for SENS research in the comment by David Barry here. I can't find New Harvest's 990 revenue reports, but its donations are routed through the Network for Good, which has a total annual revenue of 150 million dollars, providing an upper bound. An annual revenue of less than 1000 dollars is very unlikely, so we can use the geometric mean of $400,000 per year as an estimated annual revenue. There are about 500,000 minutes in a year, so right now $1 brings development just over a minute closer.*

There are currently 24 billion chickens, 1 billion cattle, and 1 billion pigs. Assuming the current factory farm suffering rates as an estimate for suffering rates when artificial/substitute meat becomes available, and assuming (as the OP does) that animals suffer roughly equally, then bringing faux meat one minute closer prevents about (25 billion animals)/(500,000 minutes per year) = 50 animal years of suffering.

If we assume that New Harvest has a 10% chance of success, $1 there prevents an expected 5 animal years of suffering, or expressed as in the OP, preventing 1 expected animal year of suffering costs about 20 cents.

So, these (very rough) estimates show about similar levels of effectiveness.

*Assuming some set amount of money is necessary and is the bottleneck, and that you aren't donating enough to run into diminishing marginal returns.
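To make the arithmetic concrete, here is a minimal sketch of the first steps of this estimate (the revenue guess and the dollars-to-minutes conversion), using only the bounds stated above:

```python
import math

# Bound New Harvest's unknown annual revenue and take the geometric mean as a
# point estimate (bounds are the ones stated above; this is a rough sketch).
lower_bound = 1_000          # an annual revenue below $1,000 is very unlikely
upper_bound = 150_000_000    # Network for Good's total annual revenue, as an upper bound

revenue_estimate = math.sqrt(lower_bound * upper_bound)    # ~$387,000, i.e. roughly $400k

minutes_per_year = 365 * 24 * 60                           # 525,600, roughly 500,000
minutes_closer_per_dollar = minutes_per_year / revenue_estimate   # ~1.4 minutes per $1

print(f"estimated revenue: ~${revenue_estimate:,.0f}/year")
print(f"$1 brings development ~{minutes_closer_per_dollar:.1f} minutes closer")
```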

comment by freeze · 2015-09-03T15:45:41.222Z · LW(p) · GW(p)

There are already meat alternatives (seitan, tempeh, tofu, soy, etc.) which provide a meat-like flavor and texture. It's not immediately obvious that in-vitro meat is necessarily more effective than just promoting or refining existing alternatives.

I suppose for long-run impact this kind of research may be orders of magnitude more useful though.

comment by Vaniver · 2013-06-12T21:30:05.719Z · LW(p) · GW(p)

Something we should take into account that helps the case for this outreach rather than hurts it is the idea that conversions aren't binary -- someone can be pushed by the ad to be more likely to reduce their meat intake as opposed to fully converted.

Eh, don't forget that humans often hate other humans. Exposing an anti-vegetarian to vegetarian advertisements might induce them to increase their meat intake, and an annoying advocate may move someone from neutral to anti-vegetarian. This effect is very unlikely to be captured by surveys, and so while it's reasonable to expect the net effect to be positive, it seems reasonable to lower estimates by a bit.

(Most 'political' moves have polarizing effects; you should expect supporters to like you more, and detractors to like you less, afterwards, which seems like a better model than everyone slowly moving towards vegetarianism.)

Replies from: peter_hurford, army1987
comment by Peter Wildeford (peter_hurford) · 2013-06-13T01:55:50.809Z · LW(p) · GW(p)

Eh, don't forget that humans often hate other humans. Exposing an anti-vegetarian to vegetarian advertisements might induce them to increase their meat intake, and an annoying advocate may move someone from neutral to anti-vegetarian.

If you take a non-vegetarian and make them more non-vegetarian, I don't think much is lost, because you never would have captured them anyway. I suppose they might eat more meat or try and persuade other people to become anti-vegetarian, but my intuition is that this effect would be really small.

But you're right that it would need to be considered.

Replies from: MTGandP
comment by MTGandP · 2013-06-15T22:28:25.074Z · LW(p) · GW(p)

I agree. In addition, I think people who claim that they will eat more meat after seeing a pamphlet or some other promotion for vegetarianism just feel some anger in the moment, but they'll likely forget about it within an hour or so. I can't see someone several weeks later saying to eirself, "I'd better eat extra meat today because of that pamphlet I read three weeks ago."

comment by A1987dM (army1987) · 2013-06-13T16:51:05.967Z · LW(p) · GW(p)

BTW, how come certain omnivores dislike vegetarians so much? All other things being equal, one fewer person eating meat will reduce its price, about which a meat-eater should be glad. (Similarly, why do certain straight men dislike gay men that much?)

Replies from: Kaj_Sotala, Vaniver, TheOtherDave, Eugine_Nier
comment by Kaj_Sotala · 2013-06-13T19:09:48.598Z · LW(p) · GW(p)

If someone says that they are vegetarian for moral reasons, then it's an implicit (often explicit) claim that non-vegetarians are less moral, and therefore a status grab. If an omnivore doesn't want to become vegetarian nor to lose status, they need to aggressively deny the claim of vegetarianism being more moral.

comment by Vaniver · 2013-06-13T18:38:49.178Z · LW(p) · GW(p)

BTW, how come certain omnivores dislike vegetarians so much? All other things being equal, one fewer person eating meat will reduce its price, about which a meat-eater should be glad.

Vegetarianism generally includes moral claims as well as preference claims, and responding negatively to conflicting morals is fairly common. Even responding negatively to conflicting preference claims is common. This seems to happen for both tribal reasons (different tastes in music) and possibly practical reasons (drinkers disliking non-drinkers at a party, possibly because of the asymmetric lowering of boundaries).

Similarly, why do certain straight men dislike gay men that much?

Simple tribalism is one explanation. It also seems likely to me that homophobia is a fitness advantage for men in the presence of bisexual / homosexual men. There's also some evidence that, of men who claim to be straight, increased stated distaste for homosexuals is associated with increased sexual arousal by men, which fits neatly with the previous statement: someone at higher risk of pursuing infertile / socially costly relationships should be expected to spend more effort in avoiding them.

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-15T07:01:23.599Z · LW(p) · GW(p)

Simple tribalism is one explanation.

(Indeed, I was going to mention religion, but I forgot to. OTOH, I think I've met at least one otherwise quite contrarian person who was homophobic.)

It also seems likely to me that homophobia is a fitness advantage for men in the presence of bisexual / homosexual men.

How so? By encouraging other men to pursue heterosexual relationships, I would increase the demand for straight women and the supply of straight men, which (so long as I'm a straight man myself and the supply of straight women isn't much larger than that of straight men) doesn't sound (from a selfish point of view) like a good thing.

[The first time I wrote this paragraph it pattern-matched sexism because it talked about women as a commodity, so I've edited it so that it talks about both women and men as commodity, so if anything it now pattern-matches extreme cynicism; and I'm OK with that.]

There's also some evidence that, of men who claim to be straight, increased stated distaste for homosexuals is associated with increased sexual arousal by men,

I've heard that cliché, but I had assumed that it was (at least in part) something someone made up to take the piss out of homophobes. Any links?

Replies from: Vaniver
comment by Vaniver · 2013-06-15T08:19:18.295Z · LW(p) · GW(p)

How so?

I mean in the "revulsion to same sex attraction" sense, not the "opposed to gay rights" sense. If a man is receptive to the sexual interest of other men, that makes him less likely to have a relationship with a woman, and thus less likely to have children, and thus is a fitness penalty, and so a revulsion that protects against that seems like a fitness advantage.

Any links?

Here's one.

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-15T10:57:13.971Z · LW(p) · GW(p)

I mean in the "revulsion to same sex attraction" sense, not the "opposed to gay rights" sense.

I was thinking about straight men who dislike gay men whether or not they have been hit on by them.

Here's one.

Thanks for the link.

(Anyway... Is someone downvoting this entire subthread?)

comment by TheOtherDave · 2013-06-13T17:04:39.211Z · LW(p) · GW(p)

Are you asking more broadly why people in unmarked cases dislike being treated as though they were a marked case? Or have I overgeneralized, here?

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-15T06:49:16.792Z · LW(p) · GW(p)

I'm asking more broadly why people dislike it when market demand for something they like decreases. (After reading the other replies, I guess that's at least partly because liking stuff with low market demand is considered low-status.)

Replies from: elharo, TheOtherDave, Eugine_Nier
comment by elharo · 2013-06-15T12:48:59.598Z · LW(p) · GW(p)

In at least some cases, network effects come into play. For example, if I prefer a non-mainstream operating system or computer hardware, there will be less support for my platform of choice. For instance, I may like Windows Phone but I can't get the apps for it that I can for the iPhone or Android. Furthermore, my employer may give me a choice of iPhone or Android but not Windows. Thus someone who prefers Windows Phone would want demand for Windows Phone to increase.

Furthermore, supply is not always fixed. For products for which manufacturers can increase output to match demand, increasing demand may increase availability because more retailers will make them available. If economies of scale come into play, increasing demand may also decrease price.

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-15T13:00:13.624Z · LW(p) · GW(p)

Good point, though in this particular example, I guess meat eaters aren't anywhere near few enough for these effects to be relevant.

comment by TheOtherDave · 2013-06-15T07:07:37.232Z · LW(p) · GW(p)

OK.
I observe that both of the examples you provide (vegetarians and homosexuals) have a moral subtext in my culture that many other market-demand scenarios (say, a fondness for peanuts) lack. That might be relevant.

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-15T07:17:00.501Z · LW(p) · GW(p)

(None of the vegetarians I've met seemed to be particularly bothered when other people ate meat, but as far as I can remember none of them was from the US¹, and from reading other comments in this thread I'm assuming it's different for certain American vegetarians.)


  1. Though I did meet a few from an English-speaking country (namely Australia), and there are a few Canadians I met for whom I can't remember off the top of my head whether they ate meat.
Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-15T15:21:51.918Z · LW(p) · GW(p)

Fair enough. If there isn't a moral subtext to vegetarianism in your culture, but omnivores there still dislike vegetarians, that's evidence against my suggestion.

Replies from: army1987, Eugine_Nier
comment by A1987dM (army1987) · 2013-06-15T17:25:19.091Z · LW(p) · GW(p)

I have seen plenty of ‘jokes’ insulting vegetarians in Italian on Facebook; but then again, I've seen at least one about the metric system too, so maybe there are people who translate stuff from English no matter how little sense it makes in the target cultural context.

comment by Eugine_Nier · 2013-06-16T05:44:23.384Z · LW(p) · GW(p)

If there isn't a moral subtext to vegetarianism in your culture,

What army said is not the same thing. Most of the vegetarians I know also don't seem particularly bothered when other people eat meat, but will nonetheless give moral reasons if asked why they don't eat meat.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-16T06:08:40.168Z · LW(p) · GW(p)

What army said is not the same thing.

In isolation, I completely agree.

In context, though... well, I said that vegetarians have a moral subtext in my culture, and army1987 replied that vegetarians they've met weren't bothered by others eating meat. I interpreted that as a counterexample... that is, as suggesting vegetarians don't have a moral subtext.
If I misinterpreted, I of course apologize, but I can't come up with another interpretation that doesn't turn their comment into a complete non sequitur, which seems an uncharitable assumption.

If you have a third option in mind for what they might have meant, I'd appreciate you elaborating it.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-16T06:55:49.655Z · LW(p) · GW(p)

If I misinterpreted, I of course apologize, but I can't come up with another interpretation that doesn't turn their comment into a complete non sequitur, which seems an uncharitable assumption.

Army mistakenly believes that because the vegetarians he's met weren't bothered by others eating meat their vegetarianism does not have a moral subtext.

Replies from: TheOtherDave, army1987
comment by TheOtherDave · 2013-06-16T13:53:15.879Z · LW(p) · GW(p)

With you so far.
I understood army1987 to be going further, and suggesting not only that the vegetarians they've met display vegetarianism without a moral subtext, but also that they are representative of vegetarians in their culture more generally... that is, they aren't some kind of aberrant statistical fluke.
I summarized this as the claim that "there isn't a moral subtext to vegetarianism in [army1987's] culture," which is what you took exception to.
This doesn't seem like a controversial next step to me.

comment by A1987dM (army1987) · 2013-06-16T08:54:40.216Z · LW(p) · GW(p)

mistakenly

How the hell do you know? Have you ever even seen them?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-18T02:41:27.934Z · LW(p) · GW(p)

Sorry, I was adding a possibility to Dave's list not asserting that this was indeed the case.

By the way, have you ever asked them why they're vegetarians?

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-18T14:58:57.290Z · LW(p) · GW(p)

No I haven't -- after all, so far as I'm concerned what people eat is their own business.¹ ISTR that one of them once told me that he disliked the taste of meat, though.


  1. Except insofar as it has externalities, but if anything a vegetarian diet has fewer externalities than an omnivorous one.
Replies from: Jiro
comment by Jiro · 2013-06-18T15:59:16.056Z · LW(p) · GW(p)

Someone who really dislikes the taste of meat but lacks other objections to eating meat should not object to eating byproducts such as gelatin that don't taste like meat, or eating small amounts of meat in a context where they can't taste it as meat. Furthermore, they should refuse to eat vegetarian products intentionally designed to taste like meat. And many meat products just taste different; disliking the taste of meat is a bit like disliking the taste of all products whose names begin with the letter A--it's logically possible, but it's an unusual category for one's sense of taste to so exactly fit.

I suspect a lot of people who "dislike the taste of meat" are really just rationalizing away their desire to be vegetarian for other reasons that they can't rationally defend.

Replies from: army1987, Raemon
comment by A1987dM (army1987) · 2013-06-23T10:46:15.100Z · LW(p) · GW(p)

Someone who really dislikes the taste of meat but lacks other objections to eating meat should not object to eating byproducts such as gelatin that don't taste like meat, or eating small amounts of meat in a context where they can't taste it as meat. Furthermore, they should refuse to eat vegetarian products intentionally designed to taste like meat.

That particular guy didn't seem particularly bothered when he found out that the bread in the sandwiches he had previously eaten contained lard in its ingredients, saying that he hadn't noticed that. I also can't recall him ever eating meat substitutes.

And many meat products just taste different; disliking the taste of meat is a bit like disliking the taste of all products whose names begin with the letter A--it's logically possible, but it's an unusual category for one's sense of taste to so exactly fit.

Then again, not all rock music sounds the same, not all alcoholic beverages taste the same, etc., but there still are people who say they don't like rock music or alcoholic beverages. (But yeah, probably some of them are rationalizing away something.)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-25T03:50:08.525Z · LW(p) · GW(p)

Then again, not all rock music sounds the same,

But it is still closer to other rock music than music from other genres.

not all alcoholic beverages taste the same,

I don't believe taste is the reason for most people's objection.

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-25T19:26:52.900Z · LW(p) · GW(p)

But it is still closer to other rock music than music from other genres.

Have you tried the exercise on this page?

I don't believe taste is the reason for most people's objection.

Hence the “say they” and the bit in parentheses at the end.

comment by Raemon · 2013-06-18T16:10:10.419Z · LW(p) · GW(p)

I know people that specifically say "I dislike red meat" but still eat chicken and fish, and identify as "sort of vegetarian."

Replies from: Jiro
comment by Jiro · 2013-06-18T19:00:30.087Z · LW(p) · GW(p)

If you presented them with vegetarian fake meat, would they then refuse to eat it because they don't like the taste of meat?

Do they eat bacon? Gelatin? Spaghetti with meat sauce? Soups containing beef broth? Liver? Do those all really have enough of a similar taste that they would really refuse to eat all those things because they "don't like the taste of meat"?

Why do we hardly ever see people who say "I don't like the taste of bread" and refuse to eat not only bread, but fish coated with bread crumbs?

comment by Eugine_Nier · 2013-06-15T08:02:56.471Z · LW(p) · GW(p)

See also economies of scale.

comment by Eugine_Nier · 2013-06-16T06:13:06.276Z · LW(p) · GW(p)

Similarly, why do certain straight men dislike gay men that much

This has to do with the way gay sex interacts with status.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-12T20:58:39.905Z · LW(p) · GW(p)

Since all of my work output goes to effective altruism, I can't afford any optimization of my meals that isn't about health x productivity. This does sometimes make me feel worried about what happens if the ethical hidden variables turn out unfavorably. Assuming I go on eating one meat meal per day, how much vegetarian advocacy would I have to buy in order to offset all of my annual meat consumption? If it's on the order of $20, I'd pay $30 just to be able to say I'm 50% more ethical than an actual vegetarian.

Replies from: davidpearce, ThrustVectoring, peter_hurford, Mestroyer, Decius
comment by davidpearce · 2013-06-13T12:16:32.785Z · LW(p) · GW(p)

Eliezer, is that the right way to do the maths? If a high-status opinion-former publicly signals that he's quitting meat because it's ethically indefensible, then others are more likely to follow suit - and the chain-reaction continues. For sure, studies purportedly showing longer lifespans, higher IQs etc of vegetarians aren't very impressive because there are too many possible confounding variables. But what such studies surely do illustrate is that any health-benefits of meat-eating vs vegetarianism, if they exist, must be exceedingly subtle. Either way, practising friendliness towards cognitively humble lifeforms might not strike AI researchers as an urgent challenge now. But isn't the task of ensuring that precisely such an outcome ensues from a hypothetical Intelligence Explosion right at the heart of MIRI's mission - as I understand it at any rate?

Replies from: RobertWiblin
comment by RobertWiblin · 2013-06-14T23:29:06.313Z · LW(p) · GW(p)

I think David is right. It is important that people who may have a big influence on the values of the future lead the way by publicly declaring and demonstrating that suffering (and pleasure) are important where-ever they occur, whether in humans or mice.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-15T00:16:04.204Z · LW(p) · GW(p)

I have to disagree on two points:

  1. I don't think that we should take this thesis ("suffering (and pleasure) are important where-ever they occur, whether in humans or mice") to be well-established and uncontroversial, even among the transhumanist/singularitarian/lesswrongian crowd.

  2. More importantly, I don't think Eliezer or people like him have any obligation to "lead the way", set examples, or be a role model, except insofar as it's necessary for him to display certain positive character traits in order for people to e.g. donate to MIRI, work for MIRI, etc. (For the record, I think Eliezer already does this; he seems, as near as I can tell, to be a pretty decent and honest guy.) It's really not necessary for him to make any public declarations or demonstrations; let's not encourage signaling for signaling's sake.

Replies from: RobertWiblin
comment by RobertWiblin · 2013-06-15T01:57:58.953Z · LW(p) · GW(p)

Needless to say, I think 1 is settled. As for the second point - Eliezer and his colleagues hope to exercise a lot of control over the future. If he is inadvertently promoting bad values to those around him (e.g. it's OK to harm the weak), he is increasing the chance that any influence they have will be directed towards bad outcomes.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-15T02:44:27.691Z · LW(p) · GW(p)

Eliezer and his colleagues hope to exercise a lot of control over the future. If he is inadvertently promoting bad values to those around him (e.g. it's OK to harm the weak), he is increasing the chance that any influence they have will be directed towards bad outcomes.

That has very little to do with whether Eliezer should make public declarations of things. Are you of the opinion that Eliezer does not share your view on this matter? (I don't know whether he does, personally.) If so, you should be attempting to convince him, I guess. If you think that he already agrees with you, your work is done. Public declarations would only be signaling, having little to do with maximizing good outcomes.

As for the other thing — I should think the fact that we're having some disagreement in the comments on this very post, about whether animal suffering is important, would be evidence that it's not quite as uncontroversial as you imply. I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one. Perhaps you should write one? I'd be interested in reading it.

Replies from: Pablo_Stafforini, RobertWiblin
comment by Pablo (Pablo_Stafforini) · 2013-06-15T11:41:24.446Z · LW(p) · GW(p)

I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one.

I think we should be wary of reasoning that takes the form: "There is no good argument for x on Less Wrong, therefore there are likely no good arguments for x."

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-15T15:53:00.393Z · LW(p) · GW(p)

Certainly we should, but that was not my reasoning. What I said was:

I don't think that we should take this thesis ("suffering (and pleasure) are important where-ever they occur, whether in humans or mice") to be well-established and uncontroversial, even among the transhumanist/singularitarian/lesswrongian crowd. [emphasis added]

I object to treating an issue as settled and uncontroversial when it's not. And the implication was that if this issue is not settled here, then it's likely to be even less settled elsewhere; after all, we do have a greater proportion of vegetarians here at Less Wrong than in the general population.

"I will act as if this is a settled issue" in such a case is an attempt to take an epistemic shortcut. You're skipping the whole part where you actually, you know, argue for your viewpoint, present reasoning and evidence to support it, etc. I would like to think that we don't resort to such tricks here.

If caring about animal suffering is such a straightforward thing, then please, write a post or two outlining the reasons why. Posters on Less Wrong have convinced us of far weirder things; it's not as if this isn't a receptive audience. (Or, if there are such posts and I've just missed them, link please. Or! If you think there are very good, LW-quality arguments elsewhere, why not write a Main post with a few links, with maybe brief summaries of each?)

Replies from: davidpearce, Pablo_Stafforini
comment by davidpearce · 2013-06-15T18:18:58.924Z · LW(p) · GW(p)

SaidAchmiz, you're right. The issue isn't settled: I wish it were so. The Transhumanist Declaration (1998, 2009) of the World Transhumanist Association / Humanity Plus does express a non-anthropocentric commitment to the well-being of all sentience. ["We advocate the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise" : http://humanityplus.org/philosophy/transhumanist-declaration/] But I wonder what percentage of lesswrongers would support such a far-reaching statement?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-15T19:31:28.935Z · LW(p) · GW(p)

I certainly wouldn't, and here's why.

Mentioning "non-human animals" in the same sentence and context along with humans and AIs, and "other intelligences" (implying that non-human animals may be usefully referred to as "intelligences", i.e. that they are similar to humans along the relevant dimensions here, such as intelligence, reasoning capability, etc.) reads like an attempt to smuggle in a claim by means of that implication. Now, I don't impute ignoble intent to the writers of that declaration; they may well consider the question settled, and so do not consider themselves to be making any unsupported claims. But there's clearly a claim hidden in that statement, and I'd like to see it made quite explicit, at least, even if you think it's not worth arguing for.

That is, of course, apart from my belief that animals do not have intrinsic moral value. (To be truthful, I often find myself more annoyed with bad arguments than wrong beliefs or bad deeds.)

comment by Pablo (Pablo_Stafforini) · 2013-06-15T18:16:15.096Z · LW(p) · GW(p)

I object to treating an issue as settled and uncontroversial when it's not. And the implication was that if this issue is not settled here, then it's likely to be even less settled elsewhere; after all, we do have a greater proportion of vegetarians here at Less Wrong than in the general population.

Those who have thought most about this issue, namely professional moral philosophers, generally agree (1) that suffering is bad for creatures of any species and (2) that it's wrong for people to consume meat and perhaps other animal products (the two claims that seem to be the primary subjects of dispute in this thread). As an anecdote, Jeff McMahan--a leading ethicist and political philosopher--mentioned at a recent conference that the moral case for vegetarianism was one of the easiest cases to make in all philosophy (a discipline where peer disagreement is pervasive).

I mention this, not as evidence that the issue is completely settled, but as a reply to your speculation that there is even more disagreement in the relevant community outside Less Wrong.

(Or, if there are such posts and I've just missed them, link please. Or! If you think there are very good, LW-quality arguments elsewhere, why not write a Main post with a few links, with maybe brief summaries of each?)

Frankly, I'm baffled by your insistence that the relevant arguments must be found in the Less Wrong archives. There's plenty of good material out there which I'm happy to recommend if you are interested in reading what others who have thought about these issues much more than either of us have written on the subject.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-15T19:28:31.669Z · LW(p) · GW(p)

Those who have thought most about this issue, namely professional moral philosophers, almost universally agree [...] that it's wrong for people to consume meat and perhaps other animal products

Citation needed. :)

As an anecdote, Jeff McMahan mentioned at a recent conference that the moral case for vegetarianism was one of the easiest cases to make in all philosophy (a discipline where peer disagreement is pervasive).

It's interesting that you use Jeff McMahan as an example. In his essay The Meat Eaters, McMahan makes some excellent arguments; his replies to the "playing God" and "against Nature" objections, for instance, are excellent examples of clear reasoning and argument, as is his commentary on the sacredness of species. (As an aside, when McMahan started talking about the hypothetical modification or extinction of carnivorous species, I immediately thought of Stanislaw Lem's Return From the Stars, where the human civilization of a century hence has chemically modified all carnivores, including humans, to be nonviolent, evidently having found some way to solve the ecological issues.)

But one thing he doesn't do is make any argument for why we should care about the suffering of animals. The moral case, as such, goes entirely unmade; McMahan only alludes to its obviousness once or twice. If he thinks it's an easy case to make — perhaps he should go ahead and make it! (Maybe he does elsewhere? If so, a quick googling does not turn it up. Links, as always, would be appreciated.) He just takes "animal suffering is bad" as an axiom. Well, fair enough, but if I don't share that axiom, you wouldn't expect me to be convinced by his arguments, yes?

I mention this, not as evidence that the issue is completely settled, but as a reply to your speculation that there is even more disagreement in the relevant community outside Less Wrong.

I don't think the relevant community outside Less Wrong is professional moral philosophers. I meant something more like... "intellectuals/educated people/technophiles/etc. in general", and then even more broadly than that, "people in general". However, this is a peripheral issue, so I'm ok with dropping it.

Frankly, I'm baffled by your insistence that the relevant arguments must be found in the Less Wrong archives. There's plenty of good material out there which I'm happy to recommend if you are interested in reading what others who have thought about these issues much more than either of us have written on the subject.

In case it wasn't clear (sorry!), yes, I am interested in reading good material elsewhere (preferably in the form of blog posts or articles rather than entire books or long papers, at least as summaries); if you have some to recommend, I'd appreciate it. I just think that if such very convincing material exists, you (or someone) should post it (links or even better, a topic summary/survey) on Less Wrong, such that we, a community with a high level of discourse, may discuss, debate, and examine it.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-06-15T21:54:28.549Z · LW(p) · GW(p)

(FWIW, I'm not the one downvoting your comments, and I think it's a shame that the debate has become so "politicized".)

Here are a couple of relevant survey articles:

  • Jeff McMahan, Animals, in The Blackwell Companion to Applied Ethics, Oxford: Blackwell, 2002, pp. 525-536.

  • Stuart Rachels, Vegetarianism, in The Oxford Handbook of Animal Ethics, Oxford: Oxford University Press, 2012, pp. 877–905.

On the seriousness of suffering, see perhaps

  • Thomas Nagel, Pleasure and Pain, in The View from Nowhere, Oxford: Oxford University Press, 1986, pp. 156-162.

--

Here are some quotes about pain from contemporary moral philosophers which I believe are fairly representative. (I don't have any empirical studies to back this up, other than my impression from interacting with this community for several years, and my inability to find even a single quote that supports the contrary position.)

When I am in pain, it is plain, as plain as anything is, that what I am experiencing is bad.

Guy Kahane, The Sovereignty of Suffering: Reflections on Pain’s Badness, 2004, p. 2

Some things are bad without it being the case that we have a prima facie duty to get rid of them. The badness of suffering is different. Here I need to use somewhat metaphorical language to get across what seems to me to be the heart of the matter. Where there is suffering, there exists a demand or an appeal for the prevention of that suffering. I say "a demand or an appeal," but this demand does not issue from anyone in particular, nor is it addressed to anyone in particular. We might say (again metaphorically) that suffering cries out for its own abolition or cancellation.

Jamie Mayerfeld, Suffering and Moral Responsibility, Oxford, 2002, p. 111.

[Pain] is a bad thing in itself. It does not matter who experiences it, or where it comes in a life, or where in the course of a painful episode. Pain is bad; it should not happen. There should be as little pain as possible in the world, however it is distributed across people and across time.

John Broome, ‘More Pain or Less?’, Analysis, vol. 56, no. 2 (April, 1996), p. 117

it seems to me that certain things, such as pain and suffering to take the clearest example, are bad. I don’t think I’m just making that up, and I don’t think that is just an arbitrary personal preference of mine. If I put my finger in a flame, I have a certain experience, and I can directly see something about it (about the experience) that is bad. Furthermore, if it is bad when I experience pain, it seems that it must also be bad when someone else experiences pain. Therefore, I should not inflict such pain on others, any more than they should inflict it on me. So there is at least one example of a rational moral principle.

Michael Huemer, Ethical Intuitionism, Basingstoke, Hampshire, 2005, p. 250.

The idea that it is wrong to cause suffering, unless there is a sufficient justification, is one of the most basic moral principles, shared by virtually anyone.

James Rachels, ‘Animals and Ethics’, in Edward Craig (ed.), Routledge Encyclopedia of Philosophy, London, 1998, sect. 3.

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-15T22:05:55.160Z · LW(p) · GW(p)

Thank you! This is an impressive array of references, and I will read at least some of them as soon as I have time. I very much appreciate you taking the time to collect and post them.

(FWIW, I'm not the one downvoting your comments, and I think it's a shame that the debate has become so "politicized".)

Thank you. The downvotes don't worry me too much, at least partly because I continue to be unsure about what down/upvotes even mean on this site. (It seems to be an emotivist sort of yay/boo thing? Not that there's necessarily anything terribly wrong with that, it just doesn't translate to very useful data, especially in small quantities.)

To anyone who is downvoting my comments: I'd be curious to hear your reasons, if you're willing to explain them publicly. Though I do understand if you want to remain anonymous.

comment by Said Achmiz (SaidAchmiz) · 2013-06-30T01:02:02.182Z · LW(p) · GW(p)

Stuart Rachels, Vegetarianism, in The Oxford Handbook of Animal Ethics, Oxford: Oxford University Press, 2012, pp. 877–905.

So, I've just finished reading this one.

To say that I found it unconvincing would be quite the understatement.

For one, Rachels seems entirely unwilling to even take seriously any objections to his moral premises or argument (he, again, takes the idea that we should care about animal suffering as given). He dismisses the strongest and most interesting objections outright; he selects the weakest objections to rebut, and condescendingly adds that "Resistance to [such] arguments usually stems from emotion, not reason. ... Moreover, they [opponents of his argument] want to justify their next hamburger."

Rachels then launches into a laundry list of other arguments against eating factory farmed animals, not based on a moral concern for animals. It seems that factory farming is bad in literally every way! It's bad for animals, it's bad for people, it causes diseases, eating meat is bad for our health, and more, and more.

(I'm always wary of such claims. When someone tells you thing A has bad effect X, you listen with concern; when they add that oh yeah, it also had bad effect Y! And Z! And W! ... and then you discover that their political/ideological alignment is "opponent of thing A"... suspicion creeps in. Can eating meat really just be universally bad, bad in every way, irredeemably bad so as to be completely unmotivated? Well, there's no law of nature that says that can't be the case (e.g. eating uranium probably has no upside), but I'm inclined to treat such claims with skepticism, and, in any case, I'd prefer each aspect of meat-eating to be argued against separately, such that I can evaluate them individually, not be faced with a shotgun barrage of everything at once.)

Incidentally, I find the "factory farming is detrimental to local human populations" argument much more convincing than any of the others, certainly far more so than the animal-suffering argument. If the provided facts are accurate, then that's the most salient case for stopping the practice — or, preferably, reforming it so as to mitigate the environmental and public-health impact.

I assign the "eating meat is bad for you" argument negligible weight. The one universal truth I've observed about nutrition claims is that finding someone else who's making the opposite claim is trivial. (The corollary is that generalizing nutritional findings to all humans in all circumstances is nigh-impossible.) Red meat reduces lifespan? But the peoples of the Caucasus highlands eat almost nothing but red meat, and they've got some of the longest lifespans in the world. The citations in this section, incidentally, amount to "page so-and-so of some book" and "a study". I can find "a study" that proves pretty much any nutritional claim. Thumbs down. (Vegetarians should really stay away from human-health arguments. It never makes them look good.)

Of the rest of the arguments Rachels makes, I found "industrial farming is worse than the Holocaust" (yes, he really claims this, making it clear that he means it) particularly ludicrous. Obviously, this argument is made with the express intent of being provocative; but as it does seem that Rachels genuinely believes it to be true, I can't help but conclude that here is a person who is exemplifying one of the most egregious failure modes of naive utilitarianism. (How many chickens would I sacrifice to save my great-grandfather from the Nazis? N, where N is any number. This seems to argue either for rejecting straightforward aggregation of value or for assigning chickens a value of 0.)

Replies from: wedrifid, davidpearce
comment by wedrifid · 2013-06-30T17:40:47.908Z · LW(p) · GW(p)

The one universal truth I've observed about nutrition claims is that finding someone else who's making the opposite claim is trivial. (The corollary is that generalizing nutritional findings to all humans in all circumstances is nigh-impossible.)

"Partially hydrogenated vegetable oils prevent heart disease and improve lipid profile". To the extent that it is true that it is trivial to find someone claiming the opposite of every nutritional claim it is trivial to find people who are clearly just plain wrong. (The position you are taking is far too strong to be tenable.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-30T17:49:37.947Z · LW(p) · GW(p)

The opposite claim of "Food X causes problem Y" is not necessarily "Food X reduces problem Y". "It is not the case that (or "there is no evidence that") Food X causes problem Y" also counts as "opposite". That's how I meant it: every time someone says "X causes Y", there's some other study that concludes that eh, actually, it's not clear that X causes Y, and in fact probably doesn't.

comment by davidpearce · 2013-06-30T10:21:30.699Z · LW(p) · GW(p)

SaidAchmiz, one difference between factory farming and the Holocaust is that the Nazis believed in the existence of an international conspiracy of the Jews to destroy the Aryan people. Humanity's only justification of exploiting and killing nonhuman animals is that we enjoy the taste of their flesh. No one believes that factory-farmed nonhuman animals have done "us" any harm.

Perhaps the parallel with the (human) Holocaust fails for another reason. Pigs, for example, are at least as intelligent as prelinguistic toddlers; but are they less sentient? The same genes, neural processes, anatomical pathways and behavioural responses to noxious stimuli are found in pigs and toddlers alike. So I think the burden of proof here lies on meat-eating critics who deny any equivalence.

A third possible reason for denying the parallel with the Holocaust is the issue of potential. Pigs (etc) lack the variant of the FOXP2 gene implicated in generative syntax. In consequence, pigs will never match the cognitive capacities of many but not all adult humans. The problem with this argument is that we don't regard, say, humans with infantile Tay-Sachs who lack the potential to become cognitively mature adults as any less worthy of love, care and respect than healthy toddlers. Indeed the Nazi treatment of congenitally handicapped humans (the "euthanasia" program) is often confused with the Holocaust, for which it provided many of the technical personnel.

A fourth reason to deny the parallel with the human Holocaust is that it's offensive to Jewish people. Yet this uncomfortable parallel has been drawn by some Jewish writers; the comparison to "an eternal Treblinka", for example, was made by Isaac Bashevis Singer, the Jewish-American Nobel laureate.

Apt comparison or otherwise, creating nonhuman-animal-friendly intelligence is going to be an immense challenge.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-30T16:15:48.051Z · LW(p) · GW(p)

Humanity's only justification of exploiting and killing nonhuman animals is that we enjoy the taste of their flesh.

It seems to me like a far more relevant justification for exploiting and killing nonhuman animals is "and why shouldn't we do this...?", which is the same justification we use for exploiting and killing ore-bearing rocks. Which is to say, there's no moral problem with doing this, so it needs no "justification".

Pigs, for example, are at least as intelligent as prelinguistic toddlers; but are they less sentient? The same genes, neural processes, anatomical pathways and behavioural responses to noxious stimuli are found in pigs and toddlers alike. So I think the burden of proof here lies on meat-eating critics who deny any equivalence.

I make it clear in this post that I don't deny the equivalence, and don't think that very young children have the moral worth of cognitively developed humans. (The optimal legality of Doing Bad Things to them is a slightly more complicated matter.)

we don't regard, say, humans with infantile Tay-Sachs who lack the potential to become cognitively mature adults as any less worthy of love, care and respect than healthy toddlers.

Well, I certainly do.

Apt comparison or otherwise, creating nonhuman-animal-friendly intelligence is going to be an immense challenge.

Eh...? Expand on this, please; I'm quite unsure what you mean here.

Replies from: davidpearce, army1987
comment by davidpearce · 2013-06-30T17:48:48.357Z · LW(p) · GW(p)

SaidAchmiz, to treat exploiting and killing nonhuman animals as ethically no different from "exploiting and killing ore-bearing rocks" does not suggest a cognitively ambitious level of empathetic understanding of other subjects of experience. Isn't there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever? Insofar as we want a benign outcome for humans, I'd have thought that the computational equivalent of Godlike capacity for perspective-taking is precisely what we should be aiming for.

Replies from: Watercressed, SaidAchmiz
comment by Watercressed · 2013-06-30T18:11:06.825Z · LW(p) · GW(p)

Isn't there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever. Insofar as we want a benign outcome for humans, I'd have thought that the computational equivalent of Godlike capacity for perspective-taking is precisely what we should be aiming for.

No. Someone who cares about human-level beings but not animals will care about the plight of humans in the face of an AI, but there's no reason they must care about the plight of animals in the face of humans, because they didn't care about animals to begin with.

It may be that the best construction for a friendly AI is some kind of complex perspective taking that lends itself to caring about animals, but this is a fact about the world; it falls on the is side of the is-ought divide.

comment by Said Achmiz (SaidAchmiz) · 2013-06-30T18:14:59.972Z · LW(p) · GW(p)

a cognitively ambitious level of empathetic understanding of other subjects of experience

What the heck does this mean? (And why should I be interested in having it?)

Isn't there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever?

Wikipedia says:

In modern western philosophy, sentience is the ability to experience sensations (known in philosophy of mind as "qualia").

If that's how you're using "sentience", then:

1) It's not clear to me that (most) nonhuman animals have this quality;
2) This quality doesn't seem central to moral worth.

So I see no irony.

If you use "sentience" to mean something else, then by all means clarify.

There are some other problems with your formulation, such as:

1) I don't "belong to" MIRI (which is the organization you refer to, yes?). I have donated to them, which I suppose counts?
2) Your description of their mission, specifically the implied comparison of an FAI with humans, is inaccurate.

the computational equivalent of Godlike capacity for perspective-taking

You use a lot of terms ("cognitively ambitious", "cognitively humble", "empathetic understanding", "Godlike capacity for perspective-taking" (and "the computational equivalent" thereof)) that I'm not sure how to respond to, because it seems like either these phrases are exceedingly odd ways of referring to familiar concepts, or else they are incoherent and have no referents. I'm not sure which interpretation is dictated by the principle of charity here; I don't want to just assume that I know what you're talking about. So, if you please, do clarify what you mean by... any of what you just said.

comment by A1987dM (army1987) · 2013-07-01T10:42:07.677Z · LW(p) · GW(p)

It seems to me like a far more relevant justification for exploiting and killing nonhuman animals is "and why shouldn't we do this...?"

Huh, no, you don't normally go out of your way to do stuff unless there's something in it for you or someone else.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-01T12:45:12.393Z · LW(p) · GW(p)

Well, first of all, this is just false. People do things for the barest, most trivial of reasons all the time. You're walking along the street and you kick a bottle that happens to turn up in your path. What's in it for you? In the most trivial sense you could say that "I felt like it" is what's in it for you, but then the concept rather loses its meaning.

In any case, that's a tangent, because you mistook my meaning: I wasn't talking about the motivation for doing something. I (and davidpearce, as I read him) was talking about the moral justification for eating meat. His comment, under my interpretation, was something like: "Exploiting and killing nonhuman animals carries great negative moral value. What moral justification do we have for doing this? (i.e. what positive moral value counterbalances it?) None but that we enjoy the taste of their flesh." (Implied corollary: and that is inadequate moral justification!)

To which my response was, essentially, that morally neutral acts do not require such justification. (And by implication, I was contradicting davidpearce by claiming that killing and eating animals is a morally neutral act.) If I smash a rock, I don't need to justify that (unless the rock was someone's property, I suppose, which is not the issue we're discussing). I might have any number of motivations for performing a morally neutral act, but they're none of anyone's business, and certainly not an issue for moral philosophers.

(Did you really not get all of this intended meaning from my comment...? If that's how you interpreted what I said, shouldn't you be objecting that smashing ore-bearing rocks is not, in fact, unmotivated, as I would seem to be implying, under your interpretation?)

comment by RobertWiblin · 2013-06-15T11:07:27.258Z · LW(p) · GW(p)

"Public declarations would only be signaling, having little to do with maximizing good outcomes."

On the contrary, trying to influence other people in the AI community to share Eliezer's (apparent) concern for the suffering of animals is very important, for the reason given by David.

"I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one."

a) Less Wrong doesn't contain the best content on this topic.

b) Most of the posts disputing whether animal suffering matters are written by un-empathetic non-realists, so we would have to discuss meta-ethics and how to deal with meta-ethical uncertainty to convince them.

c) The reason has been given by Pablo Stafforini - when I directly experience the badness of suffering, I don't only perceive that suffering is bad for me (or bad for someone with blonde hair, etc), but that suffering would be bad regardless of who experienced it (so long as they did actually have the subjective experience of suffering).

d) Even if there is some uncertainty about whether animal suffering is important, that would still require that it be taken quite seriously; even if there were only a 50% chance that other humans mattered, it would be bad to lock them up in horrible conditions, or signal through my actions to potentially influential people that doing so is OK.

Replies from: None, SaidAchmiz
comment by [deleted] · 2013-06-15T16:02:14.034Z · LW(p) · GW(p)

c) The reason has been given by Pablo Stafforini - when I directly experience the badness of suffering, I don't only perceive that suffering is bad for me (or bad for someone with blonde hair, etc), but that suffering would be bad regardless of who experienced it (so long as they did actually have the subjective experience of suffering).

This is an interesting argument, but it seems a bit truncated. Could you go into more detail?

comment by Said Achmiz (SaidAchmiz) · 2013-06-15T15:56:21.438Z · LW(p) · GW(p)

a) Less Wrong doesn't contain the best content on this topic.

Where is the best content on this topic, in your opinion?

b) Most of the posts disputing whether animal suffering matters are written by un-empathetic non-realists

Eh? Unpack this, please.

comment by ThrustVectoring · 2013-06-13T01:46:06.598Z · LW(p) · GW(p)

If it's on the order of $20, I'd pay $30 just to be able to say I'm 50% more ethical than an actual vegetarian.

That's not exactly true, since advocating vegetarianism has more effects than simply reducing the consumption of meat. For one thing, it alters how people think about and live their lives. If that $30 of spending produces a certain amount of human suffering (say, from self-induced guilt over eating meat), then your ethicalness isn't as high as calculated.

comment by Peter Wildeford (peter_hurford) · 2013-06-13T05:44:30.352Z · LW(p) · GW(p)

Since all of my work output goes to effective altruism, I can't afford any optimization of my meals that isn't about health x productivity.

Allegedly, vegetarian diets are supposed to be healthier, but I don't know if that's true. I also don't know how much of a productivity drain, if any, a vegetarian diet would be. I've personally noticed no difference.

~

Assuming I go on eating one meat meal per day, how much vegetarian advocacy would I have to buy in order to offset all of my annual meat consumption? If it's on the order of $20, I'd pay $30 just to be able to say I'm 50% more ethical than an actual vegetarian.

It depends on what the cost-effectiveness ends up looking like, but $30 sounds fine to me. Additionally or alternatively, you could eat larger animals instead of smaller animals (i.e. more beef and less chicken) so as to do less harm with each meal.
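
For concreteness, here's a minimal sketch of the offset arithmetic behind this exchange. All figures are illustrative placeholders chosen only so the total lands "on the order of $20"; they are not the post's actual estimates.

```python
# Back-of-the-envelope offset arithmetic (placeholder figures, not the post's estimates).
suffering_years_per_omnivore_year = 5.0   # assumed: suffering-years caused by one year of typical meat eating
dollars_per_suffering_year_averted = 4.0  # assumed: advocacy cost to avert one suffering-year

# Cost to fully offset one year of one's own meat consumption:
offset_cost = suffering_years_per_omnivore_year * dollars_per_suffering_year_averted
print(offset_cost)        # 20.0 -- "on the order of $20" under these assumptions

# Paying 1.5x the offset averts 50% more suffering than simply abstaining would:
print(1.5 * offset_cost)  # 30.0
```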

comment by Mestroyer · 2013-06-13T21:26:09.274Z · LW(p) · GW(p)

If the ethical hidden variables turn out unfavorably, you have more to make up for than that. HPJEV thinking animals are not sentient has probably lost the world more than one vegetarian-lifetime.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T22:04:51.487Z · LW(p) · GW(p)

This seems unlikely to be a significant fraction of my impact upon the summum bonum, for good or ill.

Replies from: Raemon
comment by Raemon · 2013-06-13T23:38:44.763Z · LW(p) · GW(p)

I'm actually fairly concerned about the possibility of you influencing the beliefs of AI researchers, in particular.

I'm not sure if it ends up mattering for FAI, if executed as currently outlined. My understanding is that the point is that it'll be able to predict the collective moral values of humanity-over-time (or safely fail to do so), and your particular guesses about ethical-hidden-variables shouldn't matter.

But I can imagine plausible scenarios where various ethical-blind-spots on the part of the FAI team, or people influenced by it, end up mattering a great deal in a pretty terrifying way. (Maybe people in that cluster decide they have a better plan, and leave and do their own thing, where ethical-blind-spots/hidden-variables matter more).

This concern extends beyond vegetarianism and doesn't have a particular recommended course of action beyond "please be careful about your moral reasoning and public discussion thereof", which presumably you're doing already, or trying to.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T23:43:34.602Z · LW(p) · GW(p)

FAI builders do not need to be saints. No sane strategy would be set up that way. They need to endorse principles of non-jerkness enough to endorse indirect normativity (e.g. CEV). And that's it. Morality is not sneezed into AIs by contact with the builders.

Replies from: Mestroyer, Raemon, None
comment by Mestroyer · 2013-06-14T00:32:12.934Z · LW(p) · GW(p)

Haven't you considered extrapolating the volition of a single person if CEV for many people looks like it won't work out, or will take significantly longer? Three out of three non-vegetarian LessWrongers (my best model for MIRI employees, present and future, aside from you) I have discussed it with say they care about something besides sentience, like sapience. Because they have believed that that's what they care about for a while, I think it has become their true value, and CEV based on them alone would not act on concern for sentience without sapience. These are people who take MWI and cryonics seriously, probably because you and Robin Hanson do and have argued in favor of them. And you could probably change the opinion of these people, or at least people on the road to becoming like them, with a few blog posts.

Because in HPMOR you used the word "sentience," which is typically used in sci-fi to mean sapience (instead of something like "having consciousness"), I am worried you are sending people down that path by letting them think HPJEV draws the moral-importance line at sapience, on top of my concern that you are showing others that a professional rationalist thinks animals aren't sentient.

comment by Raemon · 2013-06-14T00:09:12.751Z · LW(p) · GW(p)

I did finally read the 2004 CEV paper recently, and it was fairly reassuring in a number of ways. (The "Jews vs Palestinians cancel each other but Martin Luther King and Gandhi add together" thing sounded... plausible but a little too cutely elegant for me to trust at first glance.)

I guess the question I have is (this is less relevant to the current discussion but I'm pretty curious) - in the event where CEV fails to produce a useful outcome (i.e. values diverge too much), is there a backup plan, that doesn't hinge on someone's judgment? (Is there a backup plan, period?)

comment by [deleted] · 2013-06-14T00:10:03.264Z · LW(p) · GW(p)

They need to endorse principles of non-jerkness enough to endorse indirect normativity

Indirect Normativity is more a matter of basic sanity than non-jerky altruism. I could be a total jerk and still realize that I wanted the AI to do moral philosophy for me. Of course, even if I did this, the world would turn out better than anyone could imagine, for everyone. So yeah, I think it really has more to do with being A) sane enough to choose Indirect Normativity, and B) mostly human.

Also, I would regard it as a straight-up mistake for a jerk to extrapolate anything but their own values. (Or a non-jerk for that matter). If they are truly altruistic, the extrapolation should reflect this. If they are not, building altruism or egalitarianism in at a basic level is just dumb (for them, nice for me).

(Of course then there are arguments for being honest and building in altruism at a basic level like your supporters wanted you to. Which then suggests the strategy of building in altruism towards only your supporters, which seems highly prudent if there is any doubt about who we should be extrapolating. And then there is the meta-uncertain argument that you shouldn't do too much clever reasoning outside of adult supervision. And then of course there is the argument that these details have low VOI compared to making the damn thing work at all. At which point I will shut up.)

comment by Decius · 2013-06-12T22:12:28.655Z · LW(p) · GW(p)

Wouldn't that $30 come from your work output that is currently going to effective altruism?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-12T22:37:27.032Z · LW(p) · GW(p)

Arguably worth it for $30 of reduced guilt, bragging rights and twisted, warped enjoyment of ethical weirdness.

Replies from: Decius
comment by Decius · 2013-06-14T04:31:31.431Z · LW(p) · GW(p)

Using the worst estimate, that would mean that it's arguable that a 1 in 50 chance of killing a child under 5 is worth that much reduced guilt, bragging rights, and twisted, warped enjoyment of ethical weirdness.

I'd call you a monster, but I'd totally take actions which fail to prevent the death of an entire kid I'd never meet anyway if I could do so without suffering any risk of being blamed and could get a warped enjoyment of ethical weirdness.

We monsters.

comment by Qiaochu_Yuan · 2013-06-14T18:40:13.208Z · LW(p) · GW(p)

Several people have been attempting to reductio my pro-human point of view, so I'll do the same back to the pro-animal people here: how simple is the simplest animal you're willing to assign moral worth to? Are you taking into account meta-uncertainty about the moral worth of even very simple animals? (What about living organisms outside of the animal kingdom, like bacteria? Viruses?) If you don't care about organisms simple enough that they don't suffer, does it seem "arbitrary" to you to single out a particular mental behavior as being the mental behavior that signifies moral worth? Does it seem "mindist" to you to single out having a particular kind of mind as being the thing that signifies moral worth?

If you calculated that assigning even very small moral worth to a simple but sufficiently numerous organism leads to the conclusion that the moral worth of non-human organisms on Earth strongly outweighs, in aggregate, the moral worth of humans, would you act on it (e.g. by making the world a substantially better place for some bacterium by infecting many other animals, such as humans, with it)?

If you were the only human left on Earth and you couldn't find enough non-meat to survive on, would you kill yourself to avoid having to hunt to survive?

How do you resolve conflicts among organisms (e.g. predatorial or parasitic relationships)?

Replies from: Lukas_Gloor, Raemon, Vaniver, Xodarap, MugaSofer, shminux, elharo, komponisto
comment by Lukas_Gloor · 2013-06-14T22:23:34.143Z · LW(p) · GW(p)

how simple is the simplest animal you're willing to assign moral worth to?

I don't value animals per se, it is their suffering I care about and want to prevent. If it turns out that even the tiniest animals can suffer, I will take this into consideration. I'm already taking insects or nematodes into consideration probabilistically; I think it is highly unlikely that they are sentient, and I think that even if they are sentient, their suffering might not be as intense as that of mammals, but since their numbers are so huge, the well-being of all those small creatures makes up a non-negligible term in my utility function.

If you don't care about organisms simple enough that they don't suffer, does it seem "arbitrary" to you to single out a particular mental behavior as being the mental behavior that signifies moral worth?

No, it seems completely non-arbitrary to me. Only sentient beings have a first-person point of view, only for them can states of the world be good or bad. A stone cannot be harmed in the same way a sentient being can be harmed. Introspectively, my suffering is bad because it is suffering, there is no other reason.

If you calculated that assigning even very small moral worth to a simple but sufficiently numerous organism leads to the conclusion that the moral worth of non-human organisms on Earth strongly outweighs, in aggregate, the moral worth of humans, would you act on it (e.g. by making the world a substantially better place for some bacterium by infecting many other animals, such as humans, with it)?

I don't care about maximizing the amount of morally relevant entities, so this is an unlikely scenario. But I guess the point of your question is whether I am serious about the criteria I'm endorsing. Yes, I am. If my best estimates come out in a way leading to counterintuitive conclusions, and if that remains the case even if I adjust for overconfidence on my part before doing something irreversible, then I would indeed act accordingly.

If you were the only human left on Earth and you couldn't find enough non-meat to survive on, would you kill yourself to avoid having to hunt to survive?

The lives of most wild animals involve a lot of suffering already, and at some point, they are likely going to die painfully anyway. It is unclear whether me killing them (assuming I'd even be skilled enough to get one of them) would be net bad. I don't intrinsically object to beings dying/being killed. But again, if it turns out that some action (e.g. killing myself) is what best fulfills the values I've come up with under reflection, I will do that, or, if I'm not mentally capable of doing it, I'd take a pill that would make me capable.

How do you resolve conflicts among organisms (e.g. predatorial or parasitic relationships)?

I don't know, but I assume that an AI would be able to find a great solution. Maybe through reengineering animals so they become incapable of experiencing suffering, while somehow keeping the function of pain intact. Or maybe simply get rid of Darwinian nature and replace it, if that is deemed necessary, with something artificial and nice.

Replies from: Watercressed, Qiaochu_Yuan
comment by Watercressed · 2013-06-17T05:26:23.752Z · LW(p) · GW(p)

I'm already taking insects or nematodes into consideration probabilistically; I think it is highly unlikely that they are sentient, and I think that even if they are sentient, their suffering might not be as intense as that of mammals, but since their numbers are so huge, the well-being of all those small creatures makes up a non-negligible term in my utility function.

A priori, it seems that the moral weight of insects would either be dominated by their massive numbers or by their tiny capacities. It's a narrow space where the two balance and you get a non-negligible but still-not-overwhelming weight for insects in a utility function. How did you decide that this was right?

Replies from: Jabberslythe, Lukas_Gloor
comment by Jabberslythe · 2013-06-17T06:17:00.302Z · LW(p) · GW(p)

I think there are good arguments for suffering not being weighted by the number of neurons, and if you assign even a 10% chance to that being the case, you end up with insects (and maybe nematodes and zooplankton) dominating the utility function because of their overwhelming numbers.
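
A minimal sketch of that expected-value argument, with every figure an assumed placeholder rather than an established estimate:

```python
# Does a 10% credence in "suffering is not weighted by neuron count"
# let insect numbers dominate? (All numbers are rough placeholders.)
n_insects = 1e18            # assumed global insect count
n_larger_animals = 1e11     # assumed count of wild mammals and birds

p_equal_weighting = 0.10    # credence that per-individual weight is roughly equal
neuron_discount = 1e-6      # per-insect weight if suffering scales with neuron count

expected_insect_weight = n_insects * (
    p_equal_weighting * 1.0 + (1 - p_equal_weighting) * neuron_discount
)
expected_larger_weight = n_larger_animals * 1.0

print(expected_insect_weight / expected_larger_weight)
# ~1e6 under these assumptions: even a 10% credence makes insects dominate.
```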

Having said that, ways of increasing the well-being of these animals may be quite a bit different from increasing it for larger animals. In particular, because so many of them die within the first few days of life, their average life quality seems like it would be terrible. So reducing their populations looks like the current best option.

There may be good instrumental reasons for focusing on less controversial animals and hoping that this promotes the kind of antispeciesism that spills over into concern about insects and does work toward improving similar situations in the future.

Replies from: Pablo_Stafforini, Lukas_Gloor
comment by Pablo (Pablo_Stafforini) · 2013-06-17T17:59:47.832Z · LW(p) · GW(p)

For what it's worth, here are the results of a survey that Vallinder and I circulated recently. 85% of expert respondents, and 89% of LessWrong respondents, believe that there is at least a 1% chance that insects are sentient, and 77% of experts and 69% of LessWrongers believe there is at least a 20% chance that they are sentient.

Replies from: Jabberslythe
comment by Jabberslythe · 2013-06-17T19:23:17.831Z · LW(p) · GW(p)

Very interesting. What were they experts in? And how many people responded?

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-06-17T19:39:27.299Z · LW(p) · GW(p)

They were experts in pain perception and related fields. We sent the survey to about 25 people, of whom 13 responded.

Added (6 November, 2015): If there is interest, I can reconstruct the list of experts we contacted. Just let me know.

comment by Lukas_Gloor · 2013-06-17T15:19:34.935Z · LW(p) · GW(p)

Yes, my current estimate for that is less than 1%, but this is definitely something I should look into more closely. This has been on my to-do list for quite a while already.

Another thing to consider is that insects are a diverse bunch. I'm virtually certain that some of them aren't conscious; see, for instance, this type of behavior. OTOH, cockroaches or bees seem to be much more likely to be sentient.

Replies from: Jabberslythe, TheOtherDave
comment by Jabberslythe · 2013-06-17T18:22:33.486Z · LW(p) · GW(p)

Yes. Bees and cockroaches both have about a million neurons, compared with maybe 100,000 for most insects.

comment by TheOtherDave · 2013-06-17T15:59:36.300Z · LW(p) · GW(p)

Can you summarize the properties you look for when making these kinds of estimates of whether an insect is conscious/sentient/etc.? Or do you make these judgments based on more implicit/instinctive inspection?

Replies from: Jabberslythe, Lukas_Gloor
comment by Jabberslythe · 2013-06-17T19:10:43.992Z · LW(p) · GW(p)

I mostly do it by thinking about what I would accept as evidence of pain in more complex animals and seeing if it is present in insects. Complex pain behavior and evolutionary and functional homology relating to pain are things to look for.

There is quite a bit of research on complex pain behavior in crabs by Robert Elwood. I'd link his site, but it doesn't seem to be up right now. You should be able to find the articles, though. Crabs have about 100,000 neurons, which is around what many insects have.

Here is a PDF of a paper finding that a bunch of common human mind-altering drugs affect crawfish and fruit flies.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-17T19:43:35.249Z · LW(p) · GW(p)

Thanks.

comment by Lukas_Gloor · 2013-06-17T17:15:52.473Z · LW(p) · GW(p)

It is quite implicit/instinctive. The problem is that without having solved the problem of consciousness, there is also uncertainty about what you're even looking for. Nociception seems to be a necessary criterion, but it's not sufficient. In addition, I suspect that consciousness' adaptive role has to do with the weighting of different "possible" behaviors, so there has to be some learning behavior or variety in behavioral subroutines.

I actually give some credence to extreme views like Dennett's (and also Eliezer's if I'm informed correctly), which state that sentience implies self-awareness, but my confidence for that is not higher than 20%. I read a couple of papers on invertebrate sentience and I adjusted the expert estimates downwards somewhat because I have a strong intuition that many biologists are too eager to attribute sentience to whatever they are studying (also, it is a bit confusing because opinions are all over the place). Brian Tomasik lists some interesting quotes and material here.

And regarding the number of neurons thing, there I'm basically just going by intuition, which is unfortunate so I should think about this some more.

Replies from: davidpearce, TheOtherDave
comment by davidpearce · 2013-06-17T17:52:04.348Z · LW(p) · GW(p)

Ice9, perhaps consider uncontrollable panic. Some of the most intense forms of sentience that humans undergo seem to be associated with a breakdown of meta-cognitive capacity. So let's hope that what it's like to be an asphyxiating fish, for example, doesn't remotely resemble what it feels like to be a waterboarded human. I worry that our intuitive dimmer-switch model of consciousness, i.e. more intelligent = more sentient, may turn out to be mistaken.

comment by TheOtherDave · 2013-06-17T17:38:48.727Z · LW(p) · GW(p)

OK, thanks for clarifying.

comment by Lukas_Gloor · 2013-06-17T15:12:27.766Z · LW(p) · GW(p)

Good point; there is reason to expect that I'm just assigning numbers in a way that makes the result come out convenient. Last time I did a very rough estimate, the expected suffering of insects and nematodes (given my subjective probabilities) came out at around half the expected suffering of all decapods/amphibians-and-larger wild animals. And then wild animals outnumber farm animals by around 2-3 orders of magnitude in terms of expected suffering, and farm animals outnumber humans by a large margin too. So if I just cared about current suffering, or suffering on earth only, then "non-negligible" would indeed be an understatement for insect suffering.
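
To make the structure of that estimate concrete, here is a minimal sketch of how such an expected-suffering aggregation might be set up. The figures are assumed placeholders chosen only to reproduce the relative ordering stated above (insects roughly half of larger wild animals; wild animals two to three orders of magnitude above farm animals), not independent estimates.

```python
# Expected suffering ~ population x P(sentience) x intensity weight.
# All figures are placeholders picked to match the stated relative ordering.
categories = {
    # name: (population, p_sentient, intensity_weight)
    "insects_nematodes":   (1e18, 0.05, 1e-4),
    "larger_wild_animals": (1e13, 0.90, 1.0),
    "farm_animals":        (1e10, 0.95, 3.0),
}

expected_suffering = {
    name: pop * p_sentient * intensity
    for name, (pop, p_sentient, intensity) in categories.items()
}

for name, value in expected_suffering.items():
    print(f"{name}: {value:.2e}")
# insects_nematodes:   5.00e+12  (~half of larger wild animals)
# larger_wild_animals: 9.00e+12
# farm_animals:        2.85e+10  (~2-3 orders of magnitude below wild animals)
```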

However, what worries me most is not the suffering that is happening on earth. If space colonization goes wrong, or even just non-optimally, the current amount of suffering could be multiplied by orders of magnitude. And this might happen even if our values improve. Consider the case of farmed animals: humans probably never cared as much about the welfare of animals as they do now, but at the same time, we have never caused as much direct suffering to animals as we do now. If you primarily care about reducing the absolute amount of suffering, then whatever lets the amount of sentience skyrocket is a priori very dangerous.

comment by Qiaochu_Yuan · 2013-06-14T22:40:35.885Z · LW(p) · GW(p)

Only sentient beings have a first-person point of view, only for them can states of the world be good or bad.

Is the blue-minimizing robot suffering if it sees a lot of blue? Would you want to help alleviate that suffering by recoloring blue things so that they are no longer blue?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2013-06-16T14:56:26.977Z · LW(p) · GW(p)

I don't see the relevance of this question, but judging by the upvotes it received, it seems that I'm missing something.

I think suffering is suffering, no matter the substrate it is based on. Whether such a robot would be sentient is an empirical question (in my view anyway, it has recently come to my attention that some people disagree with this). Once we solve the problem of consciousness, it will turn out that such a robot is either conscious or that it isn't. If it is conscious, I will try to reduce its suffering. If the only way to do that would involve doing "weird" things, I would do weird things.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-16T19:13:36.843Z · LW(p) · GW(p)

The relevance is that my moral intuitions suggest that the blue-minimizing robot is morally irrelevant. But if you're willing to bite the bullet here, then at least you're being consistent (although I'm no longer sure that consistency is such a great property of a moral system for humans).

comment by Raemon · 2013-06-14T20:17:36.268Z · LW(p) · GW(p)

1) I am okay with humanely raised farm meat (I found a local butcher shop that sources from farms I consider ethical)

2) If I didn't have access to civilization, I would probably end up hunting to survive, although I'd try to do so as rarely and humanely as was possible given my circumstances. (I'm only like 5% altruist, I just try to direct that altruism as effectively as possible and if push comes to shove I'm a primal animal that needs to eat. I'm skeptical of people who claim otherwise)

3) I'm currently okay with eating insects, mussels, and similar simplish animals, whose lack of sentience I can make pretty good guesses about. (If insects do turn out to have sentience, that's a pretty inconvenient world to have to live in, morally.)

4) I'm approximately average-preference-utilitarian. I value there being more creatures with more complex and interesting capacities for preference satisfaction (this is arbitrary and I'm fine with that). If I had to choose between humans and animals, I'd choose humans. But that's not the choice offered to humans RE vegetarianism - what's at stake is not humanity and complex relationships/art/intellectual-endeavors - it's pretty straightforward pleasure (of a sort that I expect large swaths of the animal kingdom to be capable of experiencing - visceral enjoyment of food almost certainly evolved fairly early. You are not exercising any special human-ness to experience it).

Most people don't need meat (or much of it) to be productive (the amount most people think they need is pretty grossly wrong), and the amount of hedonic satisfaction you're getting from eating meat is vastly dwarfed by the anti-hedons that enabled it.

5) Ultimately, what I actually advocate is making the best decisions you can, given your circumstances. This includes trading off the willpower and energy you spend on vegetarianism vs. other ways you might be reducing suffering or increasing pleasure/joy/complex-beauty. I wouldn't push too hard for an effective altruist to be vegetarian. If you argue that your "give a shit" energy is better spent on fighting poverty or injustice or preventing the destruction of the world by unfriendly AI, I won't argue with you.

But I'd like people to at least have animal suffering on the radar of "things I'd like to give a shit about, if I had the energy, and that if it became much more convenient to care about, I'd make small modifications to my lifestyle." So that when in-vitro meat becomes cheap and tasty, I think people should make the initial effort to switch over. (Possibly even while it's still a bit more expensive). Meanwhile, humanely-raised meat tends to be tastier (it's overall higher quality) so if you have leftover budget for nicer food in the first place, I'd consider that.

I don't know how to resolve things like "the ecosystem is full of terribleness". It is possible that plans that include "destroy all natural ecosystems" will turn out to be correct, but my prior on any given person correctly deciding to do that and executing on it without making lots of things worse is low.

Replies from: Swimmer963, SaidAchmiz, Qiaochu_Yuan, MugaSofer
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-06-14T22:02:14.885Z · LW(p) · GW(p)

But I'd like people to at least have animal suffering on the radar of "things I'd like to give a shit about, if I had the energy, and that if it became much more convenient to care about, I'd make small modifications to my lifestyle." So that when in-vitro meat becomes cheap and tasty, I think people should make the initial effort to switch over. (Possibly even while it's still a bit more expensive).

This is pretty much the case for me. I was vegetarian for a while in high school–oddly enough, less for reducing-suffering ethical reasons than for "it costs fewer resources to produce enough plants to feed the world population than to produce enough meat, as animals have to be fed plants and are a low-efficiency conversion of plant calories, so in order to better use the planet's resources, everyone should eat more plants and less meat." I consistently ended up with low iron and B12. It's possible to get enough iron, B12, and protein as a vegetarian, but you do have to plan your meals a bit more carefully (i.e. always have beans with rice so you get complete protein) and possibly eat foods that you don't like as much. Right now I cook about one dish with meat in it per week, and I haven't had any iron or B12 deficiency problems since graduating high school 4 years ago.

In general, I optimize food for low cost as well as health value and ethics, but if in-vitro meat became available, I think this is valuable enough in the long run that I would be willing to "subsidize" its production and commercialization by paying higher prices.

Replies from: maia
comment by maia · 2013-06-16T19:31:40.897Z · LW(p) · GW(p)

I was vegetarian for a while in high school–oddly enough, less for reducing-suffering ethical reasons than for "it costs fewer resources to produce enough plants to feed the world population than to produce enough meat, as animals have to be fed plants and are a low-efficiency conversion of plant calories, so in order to better use the planet's resources, everyone should eat more plants and less meat."

Oddly, this sentence is more or less exactly true for me as well. Only on LessWrong...

Replies from: wedrifid
comment by wedrifid · 2013-06-16T20:56:38.124Z · LW(p) · GW(p)

Oddly, this sentence is more or less exactly true for me as well. Only on LessWrong...

That reasoning does not seem to be either unique to or particularly prevalent on Less Wrong.

Replies from: maia
comment by maia · 2013-06-16T21:06:20.514Z · LW(p) · GW(p)

Fair enough. I've never encountered it elsewhere, myself.

Replies from: wedrifid
comment by wedrifid · 2013-06-16T21:22:58.853Z · LW(p) · GW(p)

Fair enough. I've never encountered it elsewhere, myself.

(Typically it is expressed as an additional excuse/justification for the political and personal position being taken for unrelated reasons.)

comment by Said Achmiz (SaidAchmiz) · 2013-06-14T20:43:40.844Z · LW(p) · GW(p)

Most people don't need meat (or much of it) to be productive (the amount most people think they need is pretty grossly wrong)

Could you (very briefly) expand on this, or even just give a link with a reasonably accessible explanation? I am curious.

Replies from: MTGandP, MugaSofer, Raemon
comment by MTGandP · 2013-06-15T22:35:25.521Z · LW(p) · GW(p)

From the American Dietetic Association: http://www.ncbi.nlm.nih.gov/pubmed/19562864

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-15T22:56:19.717Z · LW(p) · GW(p)

Interesting, thank you.

comment by MugaSofer · 2013-06-15T21:40:14.596Z · LW(p) · GW(p)

Well, considering the existence of healthy vegetarians, it seems clear that we evolved to be at least capable of surviving in a low-meat environment.

I don't have any sources or anything, and I'm pretty lazy, but I've been vegetarian since childhood, and never had any health problems as a result AFAICT.

Replies from: SaidAchmiz, elharo
comment by Said Achmiz (SaidAchmiz) · 2013-06-15T21:53:00.176Z · LW(p) · GW(p)

I am entirely willing to take your word on this, but you know what they say about "anecdote" and declensions thereof. In this case specifically, one of the few things that seem to be reliably true about nutrition is that "people are different, and what works for some may fail or be outright disastrous for others".

In any case, Raemon seemed to be making a weaker claim than "vegetarianism has no serious health downsides". "Healthy portions of meat amount to far less than the 32 oz steak a day implied by some anti-vegetarian doomsayers" is something I'm completely willing to grant.

Replies from: MugaSofer
comment by MugaSofer · 2013-06-16T15:27:44.309Z · LW(p) · GW(p)

Fair enough.

comment by elharo · 2013-06-16T12:59:07.106Z · LW(p) · GW(p)

Considering the existence of healthy vegetarians, it seems clear that we evolved to be at least capable of surviving in a low-meat environment supported by modern agriculture that produces large quantities of concentrated non-meat protein in the form of tofu, eggs, whey protein, beans, and the like. This may be a happy accident. Are there any vegetarian hunter-gatherer societies?

Replies from: TheOtherDave, Nornagest
comment by TheOtherDave · 2013-06-16T13:56:25.236Z · LW(p) · GW(p)

"Are there any vegetarian hunter-gatherer societies?"

Wouldn't these be "gatherer societies" pretty much definitionally?

Replies from: wedrifid
comment by wedrifid · 2013-06-16T16:12:39.303Z · LW(p) · GW(p)

Wouldn't these be "gatherer societies" pretty much definitionally?

(Unless there are Triffids!)

Replies from: TheOtherDave
comment by Nornagest · 2013-06-17T19:11:52.128Z · LW(p) · GW(p)

I've been having a hell of a time finding trustworthy cites on this, possibly because there are so many groups with identity stakes in the matter -- obesity researchers and advocates, vegetarians, and paleo diet adherents all have somewhat conflicting interests in ancestral nutrition. That said, this survey paper describes relatively modern hunter-gatherer diets ranging from 1% vegetable (the Nunamiut of Alaska) to 74% vegetable (the Gwi of Africa), with a mean somewhere around one third; no entirely vegetarian hunter-gatherers are described. This one describes societies subsisting on up to 90% gathered food (I don't know whether or not this is synonymous with "vegetable"), but once again no exclusively vegetarian cultures and a mean around 30%.

I should mention by way of disclaimer that modern forager cultures tend to live in marginal environments and these numbers might not reflect the true ancestral proportions. And, of course, that this has no bearing either way on the ethical dimensions of the subject.

comment by Raemon · 2013-06-14T21:02:18.086Z · LW(p) · GW(p)

I'm having trouble finding... any kind of dietary information that isn't obviously politicized (in any direction) right now.

But basically, when people think of a "serving" of meat, they imagine a large hunk of steak, when in fact a serving is more like the size of a deck of cards. A healthy diet has enough things going on in it besides meat that removing meat shouldn't feel like it's gutting out your entire source of pleasure from food.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-14T21:33:37.666Z · LW(p) · GW(p)

Ah. Yeah, I don't eat meat in huge chunks or anything. But meat sure is delicious, and comes in a bunch of different formats. Obviously removing meat would not totally turn my diet into a bleak, gray desert of bland gruel; I don't think anyone would claim that. But it would make it meaningfully less enjoyable, on the whole.

comment by Qiaochu_Yuan · 2013-06-14T20:26:24.719Z · LW(p) · GW(p)

This all seems pretty reasonable (except that I don't think the validity of a human preference has much to do with how difficult it is for non-humans to have the same preference).

comment by MugaSofer · 2013-06-15T21:35:54.470Z · LW(p) · GW(p)

Most people don't need meat (or much of it) to be productive (the amount most people think they need is pretty grossly wrong)

This fact seems to outweigh the rest of your comment.

comment by Vaniver · 2013-06-14T19:00:09.434Z · LW(p) · GW(p)

What about living organisms outside of the animal kingdom, like bugs?

Bugs, both true and not, are most definitely part of the animal kingdom.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-14T20:14:15.628Z · LW(p) · GW(p)

Whoops. Edited.

comment by Xodarap · 2013-06-14T20:21:40.603Z · LW(p) · GW(p)

It doesn't seem like you're really criticizing "pro-animal people" - you're just critiquing utilitarianism. (e.g. "Is it arbitrary to state that suffering is bad?" "What if you could help others only at great expense to yourself?")

Supposing one does accept utilitarian principles, is there any reason why we shouldn't care about the suffering of non-humans?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-14T20:23:43.115Z · LW(p) · GW(p)

This is half a criticism and half a reflection of arguments that have been used against my position that I think are problematic. To the extent that you think these arguments are problematic, I probably agree.

is there any reason why we shouldn't care about the suffering of non-humans?

Resources spent on alleviating the suffering of non-humans are resources that aren't spent on alleviating the suffering of humans, which I value a lot more.

Replies from: elharo, Xodarap
comment by elharo · 2013-06-16T12:53:53.983Z · LW(p) · GW(p)

That's a false dichotomy. Resources that stop being spent on alleviating the suffering of non-humans do not automatically translate into resources that are spent on alleviating the suffering of humans. Nor is it the case that there are insufficient resources in the world today to eliminate most human suffering. The issue there is purely one of distribution of wealth, not gross wealth.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-16T19:00:53.209Z · LW(p) · GW(p)

Yes, but they're less available. Maybe I triggered the wrong intuition with the word "resources." I had in mind resources like the time and energy of intelligent people, not resources like money. I think it's plausible to guess that time and energy spent on one altruistic cause really does funge directly against time and energy spent on others, e.g. because of good-deed-for-the-day effects.

comment by Xodarap · 2013-06-14T20:39:20.236Z · LW(p) · GW(p)

Which I value a lot more

Why?

(Keeping in mind that we have agreed the basic tenets of utilitarianism are correct: pain is bad etc.)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-14T20:48:56.036Z · LW(p) · GW(p)

(Keeping in mind that we have agreed the basic tenets of utilitarianism are correct: pain is bad etc.)

Oh. No. Human pain is bad. The pain of sufficiently intelligent animals might also be bad. Fish-level pain and below is irrelevant.

Replies from: Pablo_Stafforini, Xodarap
comment by Pablo (Pablo_Stafforini) · 2013-06-14T21:04:59.669Z · LW(p) · GW(p)

There is nothing inconsistent about valuing the pain of some animals, but not of others. That said, I find the view hard to believe. When I reflect on why I think pain is bad, it seems clear that my belief is grounded in the phenomenology of pain itself, rather than in any biological or cognitive property of the organism undergoing the painful experience.

Pain is bad because it feels bad. That's why I think pain should be alleviated irrespective of the species in which it occurs.

Replies from: Qiaochu_Yuan, Nornagest
comment by Qiaochu_Yuan · 2013-06-14T21:54:34.099Z · LW(p) · GW(p)

I don't share these intuitions. Pain is bad if it happens to something I care about. I don't care about fish.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-06-14T21:55:51.576Z · LW(p) · GW(p)

I don't care about fish either. I care about pain. It just so happens that fish can experience pain.

comment by Nornagest · 2013-06-15T23:02:28.815Z · LW(p) · GW(p)

Truthfully, I'm not even sure I believe pain is bad in the relevant sense. It's certainly something I'd prefer to avoid under most circumstances, but when I think about it in detail there always ends up being a "because" in there: because it monopolizes attention, because in sufficient quantity it can thoroughly screw up your motivational and emotional machinery, because it's often attached to particular actions in a way that limits my ability to do things. It doesn't feel like a root-level aversion to my reasoning self: when I've torn a ligament and can't flex my foot in a certain way without intense stabbing agony, I'm much more annoyed by the things it prevents me from doing than by the pain it gives me, and indeed I remember the former much better than the latter.

I haven't thought this through rigorously, but if I had to take a stab at it right now I'd say that pain is bad in roughly the same way that pleasure is good: in other words, it works reasonably well as a rough experiential pointer to the things I actually want to avoid, and it does place certain constraints on the kind of life I'd want to live, but I'd expect trying to ground an entire moral system in it to give me some pretty insane results once I started looking at corner cases.

comment by Xodarap · 2013-06-14T21:24:56.979Z · LW(p) · GW(p)

You probably don't want to draw the line at fish.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-14T21:52:27.979Z · LW(p) · GW(p)

What point are you trying to make with that link?

Replies from: Swimmer963, Xodarap
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-06-14T22:08:41.451Z · LW(p) · GW(p)

Probably that fish don't seem to be hugely different from amphibians/reptiles, birds, and mammals in terms of the six substitute-indicators-for-feeling-pain, and so it's hard to say whether their pain experience is different.

I would agree that fish pain is less relevant than human pain (they have a central nervous system, yes, but less of one, and a huge part of what makes human pain bad is the psychological suffering associated with it).

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-14T22:41:41.932Z · LW(p) · GW(p)

My claim was that I don't care about fish pain, not that fish pain is too different from human pain to matter. Rather, fish are too different from humans to matter.

Replies from: MugaSofer, Swimmer963, Xodarap
comment by MugaSofer · 2013-06-15T21:34:21.517Z · LW(p) · GW(p)

Rather, fish are too different from humans to matter.

Could you expand on this idea?

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-06-16T02:17:56.521Z · LW(p) · GW(p)

Fair enough. I think "too X to matter" is a complex concept, though.

comment by Xodarap · 2013-06-14T23:06:01.968Z · LW(p) · GW(p)

How is the statement "fish and humans feel pain approximately equally" different from the statement "we should care about fish and human pain approximately equally?"

Replies from: shminux, Qiaochu_Yuan
comment by Shmi (shminux) · 2013-06-14T23:14:42.292Z · LW(p) · GW(p)

You and I feel pain approximately equally, but I care about mine a lot more than about yours.

Replies from: MugaSofer, Xodarap
comment by MugaSofer · 2013-06-15T23:08:55.931Z · LW(p) · GW(p)

Do you consider this part of morality?

I mean, I personally experience selfish emotions, but I usually, y'know, try to override them?

Replies from: Nornagest, shminux
comment by Nornagest · 2013-06-15T23:20:10.256Z · LW(p) · GW(p)

Most people probably wouldn't consider that moral as such (though they'd likely be okay with it on pragmatic grounds), but the more general idea of treating some people's pain as more significant than others' is certainly consistent with a lot of moral systems. Common privileged categories: friends, relatives, children, the weak or helpless, people not considered evil.

comment by Shmi (shminux) · 2013-06-16T00:09:26.214Z · LW(p) · GW(p)

It's perfectly moral for me to be selfish to some degree, yes. I cannot care about others if I don't care about myself. You might work differently, but utter unselfishness seems like an anomaly.

Replies from: wedrifid
comment by wedrifid · 2013-06-16T06:48:43.137Z · LW(p) · GW(p)

You might work differently, but utter unselfishness seems like an anomaly.

It also seems like a lie (to the self or to others).

comment by Xodarap · 2013-06-15T22:40:49.629Z · LW(p) · GW(p)

Fair enough. To restate but with different emphasis: "we should care about fish and human pain approximately equally?"

comment by Qiaochu_Yuan · 2013-06-14T23:10:59.721Z · LW(p) · GW(p)

"I care about X's pain" is mostly a statement about X, not a statement about pain. I don't care about fish and I care about humans. You may not share this moral preference, but are you claiming that you don't even understand it?

Replies from: Xodarap
comment by Xodarap · 2013-06-15T22:50:26.095Z · LW(p) · GW(p)

No, I have a lot of biases like this: the halo effect makes me think that humans' ability to do math makes our suffering more important, "what you see is all there is" allows me to believe that slaughterhouses which operate far away must be morally acceptable, and so forth.

Anyway, fish suffering isn't a make-or-break decision. People very frequently have the opportunity to choose a bean burrito over a chicken one (or even a beef burrito over a chicken one), and from what Peter has presented here it seems like this is an extremely effective way to reduce suffering.

comment by Xodarap · 2013-06-14T22:06:25.677Z · LW(p) · GW(p)

I may be misunderstanding you, but I thought you were suggesting that there is a non-arbitrary set of physiological features that vertebrates share but fish don't. I was pointing out that this doesn't seem to be the case.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-14T22:43:09.061Z · LW(p) · GW(p)

No, I'm suggesting that I don't care about fish.

comment by MugaSofer · 2013-06-15T21:31:49.705Z · LW(p) · GW(p)

how simple is the simplest animal you're willing to assign moral worth to?

Can't speak for all vegetarians/pro-animal-rights types, but I personally discount based on complexity (or intelligence or whatever).

That's not the same as discounting simpler creatures altogether - at least not when we're discussing, say, pigs.

(At what point do you draw the line to start valuing creatures, by the way? Chimpanzees? Children? Superintelligent gods? Just curious, this isn't a reductio.)

Replies from: Qiaochu_Yuan, Eugine_Nier
comment by Qiaochu_Yuan · 2013-06-15T21:33:20.468Z · LW(p) · GW(p)

Right, but what's the discount rate? What does your discount rate imply is the net moral worth of all mosquitoes on the planet? All bacteria?

I'm not sure where my line is either. It's hovering around pigs and dolphins at the moment.

Replies from: MugaSofer
comment by MugaSofer · 2013-06-16T15:30:22.768Z · LW(p) · GW(p)

I'm not sure what the discount rate is, which is largely why I asked if you were sure about where the line was. I mostly go off intuition for determining how much various species are worth, so if you throw scope insensitivity into the mix...

comment by Eugine_Nier · 2013-06-16T06:47:31.470Z · LW(p) · GW(p)

but I personally discount based on complexity (or intelligence or whatever).

Would you apply said discount rate intraspecies in addition to interspecies?

By the way. One question I always wanted to ask a pro-animal-rights type: would you support a program for the extinction/reductions of the population of predatory animals on the grounds that they cause large amounts of unnecessary suffering to their prey?

Replies from: Lukas_Gloor, davidpearce, elharo, KatieHartman, army1987
comment by Lukas_Gloor · 2013-06-16T13:43:36.528Z · LW(p) · GW(p)

By the way. One question I always wanted to ask a pro-animal-rights type: would you support a program for the extinction/reductions of the population of predatory animals on the grounds that they cause large amounts of unnecessary suffering to their prey?

Yes. Assuming that prey populations are kept from skyrocketing (e.g. through the use of immunocontraception) since that too would result in large amounts of unnecessary suffering.

comment by davidpearce · 2013-06-16T09:49:53.421Z · LW(p) · GW(p)

Eugine, in answer to your question: yes. If we are committed to the well-being of all sentience in our forward light-cone, then we can't simultaneously conserve predators in their existing guise. (cf. http://www.abolitionist.com/reprogramming/index.html) Humans are not obligate carnivores; and the in vitro meat revolution may shortly make this debate redundant; but it's questionable whether posthuman superintelligence committed to the well-being of all sentience could conserve humans in their existing guise either.

comment by elharo · 2013-06-16T12:23:41.549Z · LW(p) · GW(p)

This is, sadly, not a hypothetical question. This is an issue wildlife managers face regularly. For example, do you control the population of Brown-headed Cowbirds in order to maintain or increase the population of Bell's Vireo or Kirtland's Warbler? The answer is not especially controversial. The only questions are which methods of predator control are most effective, and what unintended side effects might occur. However, these are practical, instrumental questions, not moral ones.

Where this comes into play for the public is in the conflict between house cats and birds. In particular, the establishment of feral cat colonies causes conflicts between people who favor non-native, vicious but furry and cute predators and people who favor native, avian, non-pet species. Indeed, this is one of the problems I have with many animal rights groups such as the Humane Society. They're not pro-animal rights, just pro-pet-species rights.

A true concern for animals needs to treat animals as animals, not as furry baby human substitutes. We need to value the species as a whole, not just the individual members; and we need to value their inherent nature as predators and prey. A Capuchin Monkey living in a zoo safe from the threat of Harpy Eagles leads a life as limited and restricted as a human living in Robert Nozick's Experience Machine. While zoos have their place, we should not seek to move all wild creatures into safe, sterile environments with no predators, pain, or danger any more than we would move all humans into isolated, AI-created virtual environments with no true interaction with reality.

Replies from: davidpearce, KatieHartman
comment by davidpearce · 2013-06-16T13:12:48.230Z · LW(p) · GW(p)

Elharo, I take your point, but surely we do want humans to enjoy healthy lives free from hunger and disease and safe from parasites and predators? Utopian technology promises similar blessings to nonhuman sentients too. Human and nonhuman animals alike typically flourish best when free-living but not "wild".

Replies from: elharo
comment by elharo · 2013-06-16T13:16:53.274Z · LW(p) · GW(p)

I'm not quite sure what you're saying here. Could you elaborate or rephrase?

comment by KatieHartman · 2013-06-16T13:23:43.453Z · LW(p) · GW(p)

We need to value the species as a whole, not just the individual members; and we need to value their inherent nature as predators and prey.

Why?

While zoos have their place, we should not seek to move all wild creatures into safe, sterile environments with no predators, pain, or danger any more than we would move all humans into isolated, AI-created virtual environments with no true interaction with reality.

Assuming that these environments are (or would be) on the whole substantially better on the measures that matter to the individual living in them, why shouldn't we?

Replies from: elharo
comment by elharo · 2013-06-16T22:47:26.986Z · LW(p) · GW(p)

We're treading close to terminal values here. I will express some aesthetic preference for nature qua nature. However I also recognize a libertarian attitude that we should allow other individuals to live the lives they choose in the environments they find themselves to the extent reasonably possible, and I see no justification for anthropocentric limits on such a preference.

Absent strong reasons otherwise, "do no harm" and "careful, limited action" should be the default position. The best we can do for animals that don't have several millennia of adaptation to human companionship (i.e. not dogs, cats, and horses) is to leave them alone and not destroy their natural habitat. Where we have destroyed it, attempt to restore it as best we can, or protect what remains. Focus on the species, not the individual. We have neither the knowledge nor the will to protect individual, non-pet animals.

When you ask, "Assuming that these environments are (or would be) on the whole substantially better on the measures that matter to the individual living in them, why shouldn't we?" it's not clear to me whether you're referring to why we shouldn't move humans into virtual boxes or why we shouldn't move animals into virtual boxes, or both. If you're talking about humans, the answer is because we don't get to make that choice for other humans. I for one have no desire to live my life in a Nozick box, and will oppose anyone who tries to put me in one while I'm still capable of living a normal life. If you're referring to animals, the argument is similar though more indirect. Ultimately, humans should not take it upon themselves to decide how another species lives. The burden of proof rests on those who wish to tamper with nature, not those who wish to leave it alone.

Replies from: KatieHartman
comment by KatieHartman · 2013-06-17T00:26:35.405Z · LW(p) · GW(p)

We're treading close to terminal values here. I will express some aesthetic preference for nature qua nature.

That strikes me as inconsistent, assuming that preventing suffering/minimizing disutility is also a terminal value. In those terms, nature is bad. Really, really bad.

I also recognize a libertarian attitude that we should allow other individuals to live the lives they choose in the environments they find themselves to the extent reasonably possible.

It seems arbitrary to exclude the environment from the cluster of factors that go into living "the lives they choose." I choose to not live in a hostile environment where things much larger than me are trying to flay me alive, and I don't think it's too much of a stretch to assume that most other conscious beings would choose the same if they knew they had the option.

Absent strong reasons otherwise, "do no harm" and "careful, limited action" should be the default position. The best we can do for animals that don't have several millennia of adaptation to human companionship (i.e. not dogs, cats, and horses) is to leave them alone and not destroy their natural habitat.

Taken with this...

We need to value the species as a whole, not just the individual members; and we need to value their inherent nature as predators and prey.

...it seems like you don't really have a problem with animal suffering, as long as human beings aren't the ones causing it. But the gazelle doesn't really care whether she's being chased down by a bowhunter or a lion, although she might arguably prefer that the human kill her if she knew what was in store for her from the lion.

I still don't know why you think we ought to value predators' "inherent nature" as predators or treat entire species as more important than their constituent individuals. My follow-up questions would be:

(1) If there were a species of animal who fed on the chemicals produced from intense, prolonged suffering and fear, would we be right to value its "inherent nature" as a torturer? Would it not be justifiable to either destroy it or alter it sufficiently that it didn't need to torture other creatures to eat?

(2) What is the value in keeping any given species in existence, assuming that its disappearance would have an immense positive effect on the other conscious beings in its environment? Why is having n species necessarily better than having n-1? Presumably, you wouldn't want to add the torture-predators in the question above to our ecosystem - but if they were already here, would you want them to continue existing? Are worlds in which they exist somehow better than ours?

We have neither the knowledge nor the will to protect individual, non-pet animals.

We certainly know enough to be able to cure their most common ailments, ease their physical pain, and prevent them from dying from the sort of injuries and illnesses that would finish them off in their natural environments. Our knowledge isn't perfect, but it's a stretch to say we don't have "the knowledge to protect" them. I suspect that our will to do so is constrained by the scope of the problem. "Fixing nature" is too big a task to wrap our heads around - for now. That might not always be the case.

When you ask, "Assuming that these environments are (or would be) on the whole substantially better on the measures that matter to the individual living in them, why shouldn't we?" it's not clear to me whether you're referring to why we shouldn't move humans into virtual boxes or why we shouldn't move animals into virtual boxes, or both.

Both.

If you're talking about humans, the answer is because we don't get to make that choice for other humans. I for one have no desire to live my life in a Nozick box, and will oppose anyone who tries to put me in one while I'm still capable of living a normal life.

Then that environment wouldn't be better on the measures that matter to you, although I suspect that there is some plausible virtual box sufficiently better on the other measures that you would prefer it to the box you live in now. I have a hard time understanding what is so unappealing about a virtual world versus the "real one."

If you're referring to animals, the argument is similar though more indirect. Ultimately humans should not take it upon themselves to decide how another species lives.

This suggests to me that you haven't really internalized exactly how bad it is to be chased down by something that wants to pin you down and eat parts of you away until you finally die.

The burden of proof rests on those who wish to tamper with nature, not those who wish to leave it alone.

To prove what?

Replies from: army1987, elharo, elharo
comment by A1987dM (army1987) · 2013-06-17T12:26:21.568Z · LW(p) · GW(p)

That strikes me as inconsistent, assuming that preventing suffering/minimizing disutility is also a terminal value.

Two values being in conflict isn't necessarily inconsistent, it just mean that you have to make trade-offs.

comment by elharo · 2013-07-02T22:23:54.093Z · LW(p) · GW(p)

An example of the importance of predators I happened across recently:

Mounting evidence indicates that there are cascading ecological effects when top-level predators decline. A recent investigation looked at four reef systems in the Pacific Islands, ranging from hosting a robust shark population to having few, if any, because of overfishing. Where sharks were abundant, other fish and coral thrived. When they were absent, algae choked the reef nearly to death and biodiversity plummeted.

Overfishing sharks, such as the bull, great white, and hammerhead, along the Atlantic Coast has led to an explosion of the rays, skates, and small sharks they eat, another study found. Some of these creatures, in turn, are devouring shellfish and possibly tearing up seagrass while they forage, destroying feeding grounds for birds and nurseries for fish.

"To have healthy populations of healthy seabirds and shorebirds, we need a healthy marine environment," says Mike Sutton, Audubon California executive director and a Shark-Friendly Marina Initiative board member. "We're not going to have that without sharks."

"Safer Waters", Alisa Opar, Audubon, July-August 2013, p. 52

This is just one example of the importance of top-level predators for everything in the ecosystem. Nature is complex and interconnected. If you eliminate some species because you think they're mean, you're going to damage a lot more.

Replies from: nshepperd, KatieHartman
comment by nshepperd · 2013-07-04T16:18:50.636Z · LW(p) · GW(p)

This is an excellent example of how it's a bad idea to mess with ecosystems without really knowing what you're doing. Ideally, any intervention should be tested on some trustworthy (i.e., more-or-less complete and experimentally verified) ecological simulations to make sure it won't have any catastrophic effects down the chain.

But of course it would be a mistake to conclude from this that keeping things as they are is inherently good.

comment by KatieHartman · 2013-07-04T08:16:57.648Z · LW(p) · GW(p)

If you eliminate some species because you think they're mean, you're going to damage a lot more.

I'd just like to point out that (a) "mean" is a very poor descriptor of predation (neither its severity nor its connotations re: motivation do justice to reality), and (b) this use of "damage" relies on the use of "healthy" to describe a population of beings routinely devoured alive well before the end of their natural lifespans. If we "damaged" a previously "healthy" system wherein the same sorts of things were happening to humans, we would almost certainly consider it a good thing.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-04T09:03:07.997Z · LW(p) · GW(p)

(b) this use of "damage" relies on the use of "healthy" to describe a population of beings routinely devoured alive well before the end of their natural lifespans.

If "natural lifespans" means what they would have if they weren't eaten, it's a tautology. If not, what does it mean? The shark's "natural" lifespan requires that it eats other creatures. Their "natural" lifespan requires that it does not.

Replies from: KatieHartman
comment by KatieHartman · 2013-07-04T16:01:01.276Z · LW(p) · GW(p)

Yes, I'm using "natural lifespan" here as a placeholder for "the typical lifespan assuming nothing is actively trying to kill you." It's not great language, but I don't think it's obviously tautological.

The shark's "natural" lifespan requires that it eats other creatures. Their "natural" lifespan requires that it does not.

Yes. My question is whether that's a system that works for us.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-04T16:12:15.027Z · LW(p) · GW(p)

We can say, "Evil sharks!" but I don't feel any need either to exterminate all predators from the world or to modify them to graze on kelp. Yes, there's a monumental amount of animal suffering in the ordinary course of things, even apart from humans. Maybe there wouldn't be in a system designed by far future humans from scratch. But radically changing the one we live in when we hardly know how it all works -- witness the quoted results of overfishing sharks -- strikes me as quixotic folly.

Replies from: KatieHartman
comment by KatieHartman · 2013-07-04T16:32:52.270Z · LW(p) · GW(p)

It strikes me as folly, too. But "Let's go kill the sharks, then!" does not necessarily follow from "Predation is not anywhere close to optimal." Nowhere have I (or anyone else here, unless I'm mistaken) argued that we should play with massive ecosystems now.

I'm very curious why you don't feel any need to exterminate or modify predators, assuming it's likely to be something we can do in the future with some degree of caution and precision.

Replies from: Richard_Kennaway, SaidAchmiz
comment by Richard_Kennaway · 2013-07-04T17:36:04.648Z · LW(p) · GW(p)

I'm very curious why you don't feel any need to exterminate or modify predators, assuming it's likely to be something we can do in the future with some degree of caution and precision.

That sort of intervention is too far in the future for me to consider it worth thinking about. People of the future can take care of it then. That applies even if I'm one of those people of the far future (not that I expect to be). Future-me can deal with it, present-me doesn't care or need to care what future-me decides.

In contrast, smallpox, tuberculosis, cholera, and the like are worth exterminating now, because (a) unlike the beautiful big fierce animals, they're no loss in themselves, (b) it doesn't appear that their loss will disrupt any ecosystems we want to keep, and (c) we actually can do it here and now.

comment by Said Achmiz (SaidAchmiz) · 2013-07-04T18:42:54.784Z · LW(p) · GW(p)

There's something about this sort of philosophy that I've wondered about for a while.

Do you think that deriving utility from the suffering of others (or, less directly, from activities that necessarily involve the suffering of others) is a valid value? Or is it intrinsically invalid?

That is, if we were in a position to reshape all of reality according to our whim, and decided to satisfy the values of all morally relevant beings, would we also want to satisfy the values of beings that derive pleasure/utility from the suffering of others, assuming we could do so without actually inflicting disutility/pain on any other beings?

And more concretely: in a "we are now omnipotent gods" scenario where we could, if we wanted to, create for sharks an environment where they could eat fish to their hearts' content (and these would of course be artificial fish without any actual capacity for suffering, unbeknownst to the sharks) — would we do so?

Or would we judge the sharks' pleasure from eating fish to be an invalid value, and simply modify them to not be predators?

The shark question is perhaps a bit esoteric; but if we substitute "psychopaths" or "serial killers" for "sharks", it might well become relevant at some future date.

Replies from: KatieHartman
comment by KatieHartman · 2013-07-08T04:10:04.600Z · LW(p) · GW(p)

I'm not sure what you mean by "valid" here - could you clarify? I will say that I think a world where beings are deriving utility from the perception of causing suffering without actually causing suffering isn't inferior to a world where beings are deriving the same amount of utility from some other activity that doesn't affect other beings, all else held equal. However, it seems like it might be difficult to maintain enough control over the system to ensure that the pro-suffering beings don't do anything that actually causes suffering.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-08T05:10:03.260Z · LW(p) · GW(p)

I'm not sure what you mean by "valid" here - could you clarify?

Sure. By "valid" I mean something like "worth preserving", or "to be endorsed as a part of the complex set of values that make up human-values-in-general".

In other words, in the scenario where we're effectively omnipotent (for this purpose, at least), and have decided that we're going to go ahead and satisfy the values of all morally relevant beings — are we going to exclude some values? Or exclude some beings on the basis of their values? For example: should we, in such a scenario, say: "we'll satisfy the values of all the humans, except the psychopaths/sharks/whoever; we don't find their values to be worth satisfying, so they're going to be excluded from this"?

I would guess, for instance, that few people here would say: yeah, along with satisfying the values of all humans, let's also satisfy the values of all the paperclip maximizers. We don't find paperclip maximization to be a valid value, in that sense.

So my question to you is where you stand on all of that. Are there invalid values? Would you, in fact, try to satisfy Clippy's values as well as those of humans? If not, how about sharks? Psychopaths? Etc.?

I will say that I think a world where beings are deriving utility from the perception of causing suffering without actually causing suffering isn't inferior to a world where beings are deriving the same amount of utility from some other activity that doesn't affect other beings, all else held equal.

Ok. Actually, I could take that as an answer to at least some of my above questions, but if you want to expand a bit on what I ask in this post, that would be cool.

However, it seems like it might be difficult to maintain enough control over the system to ensure that the pro-suffering beings don't do anything that actually causes suffering.

Well, sure. But let's keep this in the least convenient possible world, where such non-fundamental issues are somehow dealt with.

comment by elharo · 2013-06-17T10:56:28.505Z · LW(p) · GW(p)

There's a lot here, and I will try to address some specific points later. For now, I will say that personally I do not espouse utilitarianism for several reasons, so if you find me inconsistent with utilitarianism, no surprise there. Nor do I accept the complete elimination of all suffering and maximization of pleasure as a terminal value. I do not want to live, and don't think most other people want to live, in a matrix world where we're all drugged to our gills with maximal levels of L-dopamine and fed through tubes.

Eliminating torture, starvation, deprivation, deadly disease, and extreme poverty is good; but that's not the same thing as saying we should never stub our toe, feel some hunger pangs before lunch, play a rough game of hockey, or take a risk climbing a mountain. The world of pure pleasure and no pain, struggle, or effort is a dystopia, not a utopia, at least in my view.

I suspect that giving any one single principle exclusive value is likely a path to a boring world tiled in paperclips. It is precisely the interaction among conflicting values and competing entities that makes the world interesting, fun, and worth living in. There is no single principle, not even maximizing pleasure and minimizing pain, that does not lead to dystopia when it is taken to its logical extreme and all other competing principles are thrown out. We are complicated and contradictory beings, and we need to embrace that complexity; not attempt to smooth it out.

Replies from: davidpearce
comment by davidpearce · 2013-06-17T16:54:32.062Z · LW(p) · GW(p)

Elharo, which is more interesting? Wireheading - or "the interaction among conflicting values and competing entities that makes the world interesting, fun, and worth living"? Yes, I agree, the latter certainly sounds more exciting; but "from the inside", quite the reverse. Wireheading is always enthralling, whereas everyday life is often humdrum. Likewise with so-called utilitronium. To humans, utilitronium sounds unimaginably dull and monotonous, but "from the inside" it presumably feels sublime.

However, we don't need to choose between aiming for a utilitronium shockwave and conserving the status quo. The point of recalibrating our hedonic treadmill is that life can be fabulously richer - in principle orders of magnitude richer - for everyone without being any less diverse, and without forcing us to give up our existing values and preference architectures. (cf. "The catechol-O-methyl transferase Val158Met polymorphism and experience of reward in the flow of daily life.": http://www.ncbi.nlm.nih.gov/pubmed/17687265) In principle, there is nothing to stop benign (super)intelligence from spreading such reward pathway enhancements across the phylogenetic tree.

comment by KatieHartman · 2013-06-16T11:59:52.352Z · LW(p) · GW(p)

By the way. One question I always wanted to ask a pro-animal-rights type: would you support a program for the extinction/reduction of populations of predatory animals on the grounds that they cause large amounts of unnecessary suffering to their prey?

I've heard this posed as a "gotcha" question for vegetarians/vegans. The socially acceptable answer is the one that caters to two widespread and largely unexamined assumptions: that extinction is just bad, always, and that nature is just generally good. If the person questioned responds in any other way, he or she can be written off right there. Who the hell thinks nature is a bad thing and genocide is a good thing?

But once you get past the idea that nature is somehow inherently good and that ending any particular species is inherently bad, there's not really any way to justify allowing the natural world to exist the way it does if you can do something about it.

Replies from: Jiro
comment by Jiro · 2013-07-01T16:09:52.977Z · LW(p) · GW(p)

It's a "gotcha" question for vegetarians because vegetarians in the real world are seldom vegetarians in a vacuum; their vegetarianism is typically associated and based on a cloud of other ideas that include respect for nature. In other words, it's not a "gotcha" because you would write off the vegetarian who believes it, it's because believing it would undermine his own core, but illogical and unstated, motives.

comment by A1987dM (army1987) · 2013-06-16T16:59:03.965Z · LW(p) · GW(p)

Would you apply said discount rate intraspecies in addition to interspecies?

The former effect would generally be a heckuva lot smaller than the latter.

comment by Shmi (shminux) · 2013-06-14T19:27:22.748Z · LW(p) · GW(p)

I'm parsing this as follows: I don't have a good intuition on whose suffering matters, and unbounded utilitarianism is vulnerable to the Repugnant Conclusion, so I will pick an obvious threshold (humans) and decide not to care about other animals until and unless a reason to care arises.

EDIT: the Schelling point for the caring threshold seems to be shifting toward progressively less intelligent (but still cute and harmless) species as time passes

Replies from: Qiaochu_Yuan, MugaSofer, TheOtherDave
comment by Qiaochu_Yuan · 2013-06-14T20:08:08.616Z · LW(p) · GW(p)

EDIT: the Schelling point for the caring threshold seems to be shifting toward progressively less intelligent (but still cute and harmless) species as time passes

Have you read The Narrowing Circle?

Replies from: shminux
comment by Shmi (shminux) · 2013-06-14T21:03:52.125Z · LW(p) · GW(p)

Have you read The Narrowing Circle?

I tried. But it's written in extreme Gwernian: well researched, but long, rambling and without a decent summary upfront. I skipped to the (also poorly written) conclusion, missing most of the arguments, and decided that it's not worth my time. The essay would be right at home as a chapter in some dissertation, though.

Leaving aside the dynamics of the Schelling point, did the rest of my reply miss the mark?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-14T21:51:12.610Z · LW(p) · GW(p)

What I mostly got out of it is that there are two big ways in which the circle of things with moral worth has shrunk rather than grown throughout history: it shrunk to exclude gods, and it shrunk to exclude dead people.

Leaving aside the dynamics of the Schelling point, did the rest of my reply miss the mark?

I'm not sure what your comment was intended to be, but if it was intended to be a summary of the point I was implicitly trying to make, then it's close enough.

comment by MugaSofer · 2013-06-15T21:41:41.795Z · LW(p) · GW(p)

... are you including chimpanzees there, by any chance?

comment by TheOtherDave · 2013-06-14T20:10:56.870Z · LW(p) · GW(p)

the Schelling point for the caring threshold seems to be shifting toward progressively less intelligent (but still cute and harmless) species as time passes

"Cute" I'll give you.
"Harmless" I'm not sure about.

That is, it's not in the least bit clear to me that I can reliably predict, from species S being harmful and cute, that the Schelling point you describe won't shift (or hasn't shifted) so as to include S on the cared-about side.

For clarity: I make no moral claims here about any of this, and am uninterested in the associated moral claims, I'm just disagreeing with the bare empirical claim.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-15T08:37:52.016Z · LW(p) · GW(p)

I think it's simply a case of more animals moving into the harmless category as our technology improves.

comment by elharo · 2013-06-16T12:49:12.366Z · LW(p) · GW(p)

The value of a species is not merely the sum of the values of the individual members of the species. I feel a moral obligation to protect and not excessively harm the environment without necessarily feeling a moral obligation to prevent each gazelle from being eaten by a lion. There is value in nature that includes the predator-prey cycle. The moral obligation to animals comes from their worth as animals, not from a utilitarian calculation to maximize pleasure and minimize pain. Animals living as animals in the wild (which is very different from animals living on a farm or as pets) will experience pleasure and pain; but even the ones too low on the complexity scale to feel pleasure and pain have value and should have a place to exist. I don't know whether an Orange Roughy feels pain or pleasure; but either way it doesn't change my belief that we should stop eating them to avoid the extinction of the species.

The non-hypothetical, practical issue at hand is not whether we make the world a better place for some particular species, but whether we stop making it a worse one. Is it worth extinguishing a species so a few people can have a marginally tastier or higher-status dinner? (whales, sharks, Patagonian Toothfish, etc.) Is it worth destroying a few dozen acres of forest containing the last habitat of a microscopic species we've never noticed so a few humans can play golf a little more frequently? I answer No, it isn't. It is possible for the costs of an action to non-human species to outweigh the benefits gained by humans from taking that action.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-16T19:03:57.447Z · LW(p) · GW(p)

I feel a moral obligation to protect and not excessively harm the environment

Why?

The moral obligation to animals comes from their worth as animals

What worth?

it doesn't change my belief that we should stop eating them to avoid the extinction of the species.

Where does this belief come from?

comment by Qiaochu_Yuan · 2013-06-12T21:53:59.543Z · LW(p) · GW(p)

I asked this before but don't remember if I got any good answers: I am still not convinced that I should care about animal suffering. Human suffering seems orders of magnitude more important. Also, meat is delicious and contains protein. What are the strongest arguments you can offer me in favor of caring about animal suffering to the point that I would be willing to incur the costs involved in becoming more vegetarian? Alternatively, how much would you be willing to pay me to stop eating meat?

Replies from: Kaj_Sotala, RobbBB, peter_hurford, Vaniver, army1987, Pablo_Stafforini, selylindi, Raemon
comment by Kaj_Sotala · 2013-06-13T10:43:52.136Z · LW(p) · GW(p)

What are the strongest arguments you can offer me in favor of caring about animal suffering to the point that I would be willing to incur the costs involved in becoming more vegetarian?

Huh. I'm drawing a similar blank as if someone asked me to provide an argument for why the suffering of red-haired people should count equally to the suffering of black-haired people. Why would the suffering of one species be more important than the suffering of another? Yes, it is plausible that once your nervous system becomes simple enough, you no longer experience anything that we would classify as suffering, but then you said "human suffering is more important", not "there are some classes of animals that suffer less". I'm not sure I can offer a good argument against "human suffering is more important", because it strikes me as so completely arbitrary and unjustified that I'm not sure what the arguments for it would be.

Replies from: Qiaochu_Yuan, army1987
comment by Qiaochu_Yuan · 2013-06-13T19:17:31.364Z · LW(p) · GW(p)

Why would the suffering of one species be more important than the suffering of another?

Because one of those species is mine?

I'm not sure I can offer a good argument against "human suffering is more important", because it strikes me as so completely arbitrary and unjustified that I'm not sure what the arguments for it would be.

Historically, most humans have viewed a much smaller set of (living, mortal) organisms as being the set of (living, mortal) organisms whose suffering matters, e.g. human members of their own tribe. How would you classify these humans? Would you say that their morality is arbitrary and unjustified? If so, I wonder why they're so similar. If I were to imagine a collection of arbitrary moralities, I'd expect it to look much more diverse than this. Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now? If so, have you read gwern's The Narrowing Circle (which is the reason for the living and mortal qualifiers above)?

There is something in human nature that cares about things similar to itself. Even if we're currently infected with memes suggesting that this something should be rejected insofar as it distinguishes between different humans (and I think we should be honest with ourselves about the extent to which this is a contingent fact about current moral fashions rather than a deep moral truth), trying to reject it as much as we can is forgetting that we're rebelling within nature.

I care about humans because I think that in principle I'm capable of having a meaningful interaction with any human: in principle, I could talk to them, laugh with them, cry with them, sing with them, dance with them... I can't do any of these things with, say, a fish. When I ask my brain in what category it places fish, it responds "natural resources." And natural resources should be conserved, of course (for the sake of future humans), but I don't assign them moral value.

Replies from: Zack_M_Davis, Lukas_Gloor, Kaj_Sotala
comment by Zack_M_Davis · 2013-06-14T20:03:14.472Z · LW(p) · GW(p)

Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now?

Yes! We know stuff that our ancestors didn't know; we have capabilities that they didn't have. If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere. Yes, given bounded resources, I'm going to protect me and my friends and other humans before worrying about other creatures, but that's not because nonhumans don't matter, but because in this horribly, monstrously unfair universe, we are forced to make tradeoffs. We do what we must, but that doesn't make it okay.

Replies from: Qiaochu_Yuan, SaidAchmiz
comment by Qiaochu_Yuan · 2013-06-14T20:10:56.246Z · LW(p) · GW(p)

We know stuff that our ancestors didn't know; we have capabilities that they didn't have.

I'm more than willing to agree that our ancestors were factually confused, but I think it's important to distinguish between moral and factual confusion. Consider the following quote from C.S. Lewis:

I have met people who exaggerate the differences [between the morality of different cultures], because they have not distinguished between differences of morality and differences of belief about facts. For example, one man said to me, Three hundred years ago people in England were putting witches to death. Was that what you call the Rule of Human Nature or Right Conduct? But surely the reason we do not execute witches is that we do not believe there are such things. If we did-if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbors or drive them mad or bring bad weather, surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did. There is no difference of moral principle here: the difference is simply about matter of fact. It may be a great advance in knowledge not to believe in witches: there is no moral advance in not executing them when you do not think they are there. You would not call a man humane for ceasing to set mousetraps if he did so because he believed there were no mice in the house.

I think our ancestors were primarily factually, rather than morally, confused. I don't see strong reasons to believe that humans over time have made moral, as opposed to factual, progress, and I think attempts to convince me and people like me that we should care about animals should rest primarily on factual, rather than moral, arguments (e.g. claims that smarter animals like pigs are more psychologically similar to humans than I think they are).

If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere.

If I write a computer program with a variable called isSuffering that I set to true, is it suffering?

Yes, given bounded resources, I'm going to protect me and my friends and other humans before worrying about other creatures

Cool. Then we're in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead), which is fine with me.

Replies from: Zack_M_Davis, RobbBB, SaidAchmiz
comment by Zack_M_Davis · 2013-06-14T20:38:22.132Z · LW(p) · GW(p)

If I write a computer program with a variable called isSuffering that I set to true, is it suffering?

(I have no idea how consciousness works, so in general, I can't answer these sorts of questions, but) in this case I feel extremely confident saying No, because the variable names in the source code of present-day computer programs can't affect what the program is actually doing.
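A minimal sketch of that point (the function and variable names below are purely illustrative, not anyone's actual code): renaming a boolean changes nothing about what the program computes, so the label alone can't make it a program that suffers.

```python
# Two functions that differ only in the name of one variable.
# They compute exactly the same thing; the label is causally inert.

def labeled_suffering():
    is_suffering = True              # evocative name
    return 1 if is_suffering else 0

def labeled_arbitrarily():
    flag_xyz = True                  # arbitrary name
    return 1 if flag_xyz else 0

# Identical behavior despite the different labels.
assert labeled_suffering() == labeled_arbitrarily()
```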

humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead

That doesn't follow if it turns out that preventing animal suffering is sufficiently cheap.

comment by Rob Bensinger (RobbBB) · 2013-06-15T11:53:50.319Z · LW(p) · GW(p)

I'm not sure moral intuitions divide as cleanly into factual and nonfactual components as this suggests. Learning new facts can change our motivations in ways that are in no way logically or empirically required of us, because our motivational and doxastic mechanisms aren't wholly independent. (For instance, knowing a certain fact may involve visualizing certain circumstances more concretely, and vivid visualizations can certainly change one's affective state.) If this motivational component isn't what you had in mind as the 'moral', nonfactual component of our judgments, then I don't know what you do have in mind.

If I write a computer program with a variable called isSuffering that I set to true, is it suffering?

I don't think this is specifically relevant. I upvoted your 'blue robot' comment because this is an important issue to worry about, but 'that's a black box' can't be used as a universal bludgeon. (Particularly given that it defeats appeals to 'isHuman' even more thoroughly than it defeats appeals to 'isSuffering'.)

Cool. Then we're in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead)

I assume you're being tongue-in-cheek here, but be careful not to mislead spectators. 'Human life isn't perfect, ergo we are under no moral obligation to eschew torturing non-humans' obviously isn't sufficient here, so you need to provide more details showing that the threats to humanity warrant (provisionally?) ignoring non-humans' welfare. White slave-owners had plenty of white-person-specific problems to deal with, but that didn't exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-15T19:06:53.777Z · LW(p) · GW(p)

If this motivational component isn't what you had in mind as the 'moral', nonfactual component of our judgments, then I don't know what you do have in mind.

I think of moral confusion as a failure to understand your actual current or extrapolated moral preferences (introspection being unreliable and so forth).

I assume you're being tongue-in-cheek here

Nope.

White slave-owners had plenty of white-person-specific problems to deal with, but that didn't exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.

I don't think this analogy holds water. White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can't do.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-06-15T21:21:58.555Z · LW(p) · GW(p)

White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can't do.

Sure. And humans are aware that fish are capable of all sorts of things that rocks and sea hydras can't do. I don't see a relevant disanalogy. (Other than the question-begging one 'fish aren't human'.)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-15T21:36:27.125Z · LW(p) · GW(p)

I guess that should've ended "...that fish can't do and that are important parts of how they interact with other white people." Black people are capable of participating in human society in a way that fish aren't.

A "reversed stupidity is not intelligence" warning also seems appropriate here: I don't think the correct response to disagreeing with racism and sexism is to stop discriminating altogether in the sense of not trying to make distinctions between things.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-06-15T21:59:26.922Z · LW(p) · GW(p)

I don't think we should stop making distinctions altogether either; I'm just trying not to repeat the mistakes of the past, or analogous mistakes. The straw-man version of this historical focus is to take 'the expanding circle' as a universal or inevitable historical progression; the more interesting version is to try to spot a pattern in our past intellectual and moral advances and use it to hack the system, taking a shortcut to a moral code that's improved far beyond contemporary society's hodgepodge of standards.

I think the main lesson from 'expanding circle' events is that we should be relatively cautious about assuming that something isn't a moral patient, unless we can come up with an extremely principled and clear example of a necessary condition for moral consideration that it lacks. 'Black people don't have moral standing because they're less intelligent than us' fails that criterion, because white children can be unintelligent and yet deserve to be treated well. Likewise, 'fish can't participate in human society' fails, because extremely pathologically antisocial or socially inept people (of the sort that can't function in society at all) still shouldn't be tortured.

(Plus many fish can participate in their own societies. If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them? Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn't give either civilization the right to oppress the other.)

On the other hand, 'rocks aren't conscious' does seem to draw on a good and principled necessary condition -- anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum. So excluding completely unconscious things has the shape of a good policy. (Sure, it's a bit of an explanatory IOU until we know exactly what the neural basis of 'consciousness' is, but 'intelligent' and 'able to participate in human society' are IOUs in the same sense.) Likewise for gods and dead bodies -- the former don't exist, and the latter again fail very general criteria like 'is it conscious?' and 'can it suffer?' and 'can it desire?'. These are fully general criteria, not ad-hoc or parochial ones, so they're a lot less likely to fall into the racism trap.

Possibly they fall into a new and different trap, though? Even so, I feel more comfortable placing most of the burden of proof on those who want to narrow our circle, rather than those who want to broaden it. The chances of our engineering (or encountering in the stars) new species that blur the lines between our concepts of psychological 'humanity' and 'inhumanity' are significant, and that makes it dangerous to adopt a policy of 'assume everything with a weird appearance or behavior has no moral rights until we've conclusively proved that its difference from us is only skin-deep'.

Replies from: Eugine_Nier, Qiaochu_Yuan
comment by Eugine_Nier · 2013-06-16T06:40:02.151Z · LW(p) · GW(p)

On the other hand, 'rocks aren't conscious' does seem to draw on a good and principled necessary condition -- anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum.

What about unconscious people?

Even so, I feel more comfortable placing most of the burden of proof on those who want to narrow our circle, rather than those who want to broaden it.

So what's your position on abortion?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-06-16T08:46:31.429Z · LW(p) · GW(p)

I don't know why you got a down-vote; these are good questions.

What about unconscious people?

I'm not sure there are unconscious people. By 'unconscious' I meant 'not having any experiences'. There's also another sense of 'unconscious' in which people are obviously sometimes unconscious — whether they're awake, aware of their surroundings, etc. Being conscious in that sense may be sufficient for 'bare consciousness', but it's not necessary, since people can experience dreams while 'unconscious'.

Supposing people do sometimes become truly and fully unconscious, I think this is morally equivalent to dying. So it might be that in a loose sense you die every night, as your consciousness truly 'switches off' — or, equivalently, we could say that certain forms of death (like death accompanying high-fidelity cryonic preservation) are in a loose sense a kind of sleep. You say /pəˈteɪtəʊ/, I say /pəˈteɪtoʊ/. The moral rights of dead or otherwise unconscious people would then depend on questions like 'Do we have a responsibility to make conscious beings come into existence?' and 'Do we have a responsibility to fulfill people's wishes after they die?'. I'd lean toward 'yes' on the former, 'no but it's generally useful to act as though we do' on the latter.

So what's your position on abortion?

Complicated. At some stages the embryo is obviously unconscious, for the same reason some species are obviously unconscious. It's conceivable that there's no true consciousness at all until after birth — analogously, it's possible all non-humans are zombies — but at this point I find it unlikely. So I think mid-to-late-stage fetuses do have some moral standing — perhaps not enough for painlessly killing them to be bad, but at least enough for causing them intense pain to be bad. (My view of chickens is similar; suffering is the main worry rather than death.) The two cases are also analogous in that some people have important health reasons for aborting or for eating meat.

comment by Qiaochu_Yuan · 2013-06-15T23:44:07.537Z · LW(p) · GW(p)

Likewise, 'fish can't participate in human society' fails, because extremely pathologically antisocial or socially inept people (of the sort that can't function in society at all) still shouldn't be tortured.

The original statement of my heuristic for deciding moral worth contained the phrase "in principle" which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they'd still be capable of participating in human society. But even in possible worlds fairly different from this one, fish still aren't so capable.

I also think the reasoning in this example is bad for general reasons, namely moral heuristics don't behave like scientific theories: falsifying a moral hypothesis doesn't mean it's not worth considering. Heuristics that sometimes fail can still be useful, and in general I am skeptical of people who claim to have useful moral heuristics that don't fail on weird edge cases (sufficiently powerful such heuristics should constitute a solution to friendly AI).

Plus many fish can participate in their own societies.

I'm skeptical of the claim that any fish have societies in a meaningful sense. Citation?

If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them?

If they're intelligent enough we can still trade with them, and that's fine.

Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn't give either civilization the right to oppress the other

I don't think this is analogous to the above case. The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other.

These are fully general criteria, not ad-hoc or parochial ones, so they're a lot less likely to fall into the racism trap. Possibly they fall into a new and different trap, though?

Yes: not capturing complexity of value. Again, morality doesn't behave like science. Looking for general laws is not obviously a good methodology, and in fact I'm pretty sure it's a bad methodology.

Replies from: RobbBB, army1987, RobbBB
comment by Rob Bensinger (RobbBB) · 2013-06-16T01:13:55.097Z · LW(p) · GW(p)

Yes: not capturing complexity of value.

'Your theory isn't complex enough' isn't a reasonable objection, in itself, to a moral theory. Rather, 'value is complex' is a universal reason to be less confident about all theories. (No theory, no matter how complex, is immune to this problem, because value might always turn out to be even more complex than the theory suggests.) To suggest that your moral theory is more likely to be correct than a simpler alternative merely because it's more complicated is obviously wrong, because knowing that value is complex tells us nothing about how it is complex.

In fact, even though we know that value is complex, a complicated theory that accounts for the evidence will almost always get more wrong than a simple theory that accounts for the same evidence -- a more detailed map can be wrong about the territory in more ways.

Again, morality doesn't behave like science.

Interestingly, in all the above respects human morality does behave like any other empirical phenomenon. The reasons to think morality is complex, and the best methods for figuring out exactly how it is complex, are the same as for any complex natural entity. "Looking for general laws" is a good idea here for the same reason it's a good idea in any scientific endeavor; we start by ruling out the simplest explanations, then move toward increasing complexity as the data demands. That way we know we're not complicating our theory in arbitrary or unnecessary ways.

Knowing at the outset that storms are complex doesn't mean that we shouldn't try to construct very simple predictive and descriptive models of weather systems, and see how close our simulation comes to getting it right. Once we have a basically right model, we can then work on incrementally increasing its precision. As for storms, so for norms. The analogy is particularly appropriate because in both cases we seek an approximation not only as a first step in a truth-seeking research program, but also as a behavior-guiding heuristic for making real-life decisions under uncertainty.

Replies from: wedrifid, Qiaochu_Yuan
comment by wedrifid · 2013-06-16T08:03:54.609Z · LW(p) · GW(p)

'Your theory isn't complex enough' isn't a reasonable objection, in itself, to a moral theory. Rather, 'value is complex' is a universal reason to be less confident about all theories.

If I am sure that value is complex and I am given two theories, one of which is complex and the other simple, then I can be sure that the simple one is wrong. The other one is merely probably wrong (as most such theories are). "Too simple" is a valid objection if the premise "Not simple" is implied.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-06-16T08:56:56.058Z · LW(p) · GW(p)

That's assuming the two theories are being treated as perfected Grand Unified Theories Of The Phenomenon. If that's the case, then yes, you can simply dismiss a purported Finished Product that is too simple, without even bothering to check on how accurate it is first. But we're talking about preliminary hypotheses and approximate models here. If your first guess adds arbitrary complications just to try to look more like you think the Final Theory will someday appear, you won't learn as much from the areas where your map fails. 'Value is complex' is compatible with the utility of starting with simple models, particularly since we don't yet know in what respects it is complex.

comment by Qiaochu_Yuan · 2013-06-16T02:01:54.808Z · LW(p) · GW(p)

To suggest that your moral theory is more likely to be correct than a simpler alternative merely because it's more complicated is obviously wrong

Obviously that's not what I'm suggesting. What I'm suggesting is that it's both more complicated and that this complication is justified from my perspective because it captures my moral intuitions better.

the data

What data?

comment by A1987dM (army1987) · 2013-06-15T23:53:10.118Z · LW(p) · GW(p)

I also think the reasoning in this example is bad for general reasons, namely moral heuristics don't behave like scientific theories: falsifying a moral hypothesis doesn't mean it's not worth considering.

Then again, the same applies to scientific theories, so long as the old now-falsified theory is a good approximation to the new currently accepted theory within certain ranges of conditions (e.g. classical Newtonian physics if you're much bigger than an atom and much slower than light).

comment by Rob Bensinger (RobbBB) · 2013-06-16T01:11:32.748Z · LW(p) · GW(p)

The original statement of my heuristic for deciding moral worth contained the phrase "in principle" which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they'd still be capable of participating in human society.

Isn't a quasi-Aristotelian notion of the accidental/essential or contingent/necessary properties of different species a rather metaphysically fragile foundation for you to base your entire ethical system on? We don't know whether the unconscious / conscious distinction will end up being problematized by future research, but we do already know that the distinctions between taxonomical groupings can be very fuzzy -- and are likely to become far fuzzier as we take more control of our genetic future. We also know that what's normal for a certain species can vary wildly over historical time. 'In principle' we could provide fish with a neural prosthesis that makes them capable of socializing productively with humans, but because our prototype of a fish is dumb, while our prototype of a human is smart, we think of smart fish and dumb humans as aberrant deviations from the telos (proper function) of the species.

It seems damningly arbitrary to me. Why should torturing sentient beings be OK in contexts where the technology for improvement is (or 'feels'?) distant, yet completely intolerable in contexts where this external technology is more 'near' on some metric, even if in both cases there is never any realistic prospect of the technology being deployed here?

I don't find it implausible that we currently use prototypes as a quick-and-dirty approximation, but I do find it implausible that on reflection, our more educated and careful selves would continue to found the human enterprise on essentialism of this particular sort.

I also think the reasoning in this example is bad for general reasons, namely moral heuristics don't behave like scientific theories: falsifying a moral hypothesis doesn't mean it's not worth considering.

Actually, now that you bring it up, I'm surprised by how similar the two are. 'Heuristics' by their very nature are approximations; if we compare them to scientific models that likewise approximate a phenomenon, we see in both cases that an occasional error is permissible. My objection to the 'only things that can intelligently socialize with humans matter' heuristic isn't that it gets things wrong occasionally; it's that it almost always yields the intuitively wrong answer, and when it gets the right answer it seems to do so for overdetermined reasons. E.g., it gets the right answer in cases of ordinary human suffering and preference satisfaction.

in general I am skeptical of people who claim to have useful moral heuristics that don't fail on weird edge cases

I agree that someone who claims an unrealistic level of confidence in a moral claim as an individual deserves less trust. But that's different from claiming that it's an advantage of a moral claim that it gets the right answer less often.

I'm skeptical of the claim that any fish have societies in a meaningful sense.

I just meant a stable, cooperative social group. Is there something specific about human societies that you think is the source of their unique moral status?

If they're intelligent enough we can still trade with them, and that's fine.

If we can't trade with them for some reason, it's still not OK to torture them.

The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other.

'The psychological unity of mankind' is question-begging here. It's just a catchphrase; it's not as though there's some scientific law that all and only biologically human minds form a natural kind. If we're having a battle of catchphrases, vegetarians can simply appeal to the 'psychological unity of sentient beings'.

Sure, they're less unified, but how do we decide how unified a unity has to be? While you dismiss the psychological unity of sentient beings as too generic to be morally relevant, the parochialist can step up to dismiss the psychological unity of mankind as too generic to be morally relevant, preferring instead to favor only the humans with a certain personality type, or a certain cultural background, or a certain ideology. What I'm looking for is a reason to favor the one unity over an infinite number of rival unities.

I should also reiterate that it's not an advantage of your theory that it requires two independent principles ('being biologically human', 'being able to (be modified without too much difficulty into something that can) socialize with biological humans') to explain phenomena that other models can handle with only a single generalization. Noting that value is complex is enough to show that your model is possible, but it's not enough to elevate it to a large probability.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-16T01:41:49.267Z · LW(p) · GW(p)

'In principle' we could provide fish with a neural prosthesis that makes them capable of socializing productively with humans

I don't think most fish have complicated enough minds for this to be true. (By contrast, I think dolphins might, and this might be a reason to care about dolphins.)

It seems damningly arbitrary to me.

You're still using a methodology that I think is suspect here. I don't think there's good reasons to expect "everything that feels pain has moral value, period" to be a better moral heuristic than "some complicated set of conditions singles out the things that have moral value" if, upon reflection, those conditions seem to be in agreement with what my System 1 is telling me I actually care about (namely, as far as I can tell, my System 1 cares about humans in comas but not fish). My System 2 can try to explain what my System 1 cares about, but if those explanations are bad because your System 2 can find implications they have which are bad, then oh well: at the end of the day, as far as I can tell, System 1 is where my moral intuitions come from, not System 2.

My objection to the 'only things that can intelligently socialize with humans matter' heuristic isn't that it gets things wrong occasionally; it's that it almost always yields the intuitively wrong answer

Your intuition, not mine.

I should also reiterate that it's not an advantage of your theory that it requires two independent principles ('being biologically human', 'being able to (be modified without too much difficulty into something that can) socialize with biological humans') to explain phenomena

System 1 doesn't know what a biological human is. I'm not using "human" to mean "biological human." I'm using "human" to mean "potential friend." Posthumans and sufficiently intelligent AI could also fall in this category, but I'm still pretty sure that fish don't. I actually only care about the second principle.

that other models can handle with only a single generalization.

While getting what I regard to be the wrong answers with respect to most animals. A huge difference between morality and science is that the results of properly done scientific experiments can be relatively clear: it can be clear to all observers that the experiment provides evidence for or against some theory. Morality lacks an analogous notion of moral experiment. (We wouldn't be having this conversation if there were such a thing as a moral experiment; I'd be happy to defer to the evidence in that case, the same as I would in any scientific field where I'm not a domain expert.)

Replies from: RobbBB, ialdabaoth, RobbBB
comment by Rob Bensinger (RobbBB) · 2013-06-17T20:00:49.214Z · LW(p) · GW(p)

Thanks for fleshing out your view more! It's likely that previously I was being a bit too finicky with how you were formulating your view; I wanted to hear you come out and express the intuition more generally so I could see exactly where you thought the discontinuity lay, and I think you've done a good job of that now. Any more precision would probably be misleading, since the intuition itself is a bit amorphous: A lot of people think of their pets as friends and companions in various ways, and it's likely that no simple well-defined list of traits would provide a crisp criterion for what 'friendship' or 'potential friendship' means to you. It's just a vague sense that morality is contingent on membership in a class of (rough) social equals, partners, etc. There is no room in morality for a hierarchy of interests — everything either deserves (roughly) all the rights, or none of them at all.

The reliance on especially poorly-defined and essentializing categories bothers me, but I'll mostly set that aside. I think the deeper issue here is that our intuitions do allow for hierarchies, and for a more fine-grained distribution of rights based on the different faculties of organisms. It's not all-or-nothing.

Allowing that it's not all-or-nothing lets us escape most of your view's problems with essentialism and ad-hoc groupings — we can allow that there is a continuum of different moral statuses across individual humans for the same reasons, and in the same ways, that there is a continuum across species. For instance, if it were an essential fact that our species divided into castes, one of which just couldn't be a 'friend' or socialize with the other — a caste with permanent infant-like minds, for instance — we wouldn't be forced into saying that this caste either has 100% of our moral standing, or 0%. Thinking in terms of a graded scale of moral responsibility gives us the flexibility needed to adapt to an unpredictable environment that frequently lacks sharp lines between biological kinds. And once we admit such a scale, it becomes much harder to believe that the scale ends completely at a level above the most intelligent and suffering-prone non-human, as opposed to slowly trailing off into the phylogenetic distance.

On your view, the reason — at a deep level, the only reason — that it's the least bit wrong to torture infants or invalids arbitrarily intensely for arbitrarily long periods of time, is that when we think of infants and invalids, we imagine them as (near) potential or (near) would-be 'friends' of ours. That is, I see a baby injured and in pain and I feel morally relevant sympathy (as opposed to, say, the confused, morally delusive sympathy some people feel toward dogs or dolphins) only because I imagine that the baby might someday grow up and, say, sell staplers to me, share secrets with me, or get me a job as a server at Olive Garden. Even if the child has some congenital disease that makes its premature death a physical certainty, still the shadow of its capacities to socialize with me, cast from nearby possible worlds (or just from the ease with which we could imagine the baby changing to carry on business transactions with me), confers moral significance upon its plight.

I just don't think that's so. Our sympathy toward infants doesn't depend on a folk theory of human development. We'd feel the same sympathy, or at least something relevantly similar, even if we'd been raised in a bubble and never been told that infants are members of the human species at all prior to encountering one. If you knew all the physiological and psychological facts about infants except ones that showed they tend to develop into intelligent socializers, you'd already have plenty of good reason not to torture infants. Learning that infants develop into adult humans might add new (perhaps even better) reasons to not torture them, but it wouldn't constitute the only such reasons. But in that case the sympathy we feel for animals with infant-like neural and psychological traits, but that never develop into active participants in complex language-using societies, would be morally relevant for the same reasons.

We've been speaking as though my view of morality were the simple one, yours relatively complex and rich. But in fact I think your reluctance to say 'it's bad when chickens suffer' results from an attempt to oversimplify and streamline your own ethical reasoning, privileging one set of moral intuitions without providing a positive argument for the incoherence or illegitimacy of the others. I think it's entirely possible that some of our moral intuitions are specific to beings like adult humans (e.g., intuitions of reciprocity, fairness, and authority), while others generalize to varying degrees to non-humans (e.g., intuitions of suffering and gain). Because different moral thoughts and emotions can be triggered by radically different stimuli, we shouldn't be surprised that some of those stimuli require an assumption of intelligence (hence, if applied to chickens, plausibly depend on excessive anthopomorphization), while others require very different assumptions. In a slogan: I think that in addition to moral relationships of friendship, kinship, alliance and collaboration, there are partly autonomous relationships of caregiving that presuppose quite different faculties.

Maybe, evolutionarily, we feel sympathy for non-humans only as a side-effect of traits selected because they help us cooperate with other humans. But that then is just a historical fact; it has no bearing on the legitimacy, or the reality, of our sympathy for suffering non-humans. What's really needed to justify carnivory is a proof that this sympathy presupposes an ascription of properties to non-humans that they in fact lack, e.g., sentience. Our intuitions about the moral relevance of chickens would then go the way of moral intuitions about ghosts and gods. But I haven't seen a case like this made yet; the claim has not been that animals don't suffer (or that their 'suffering' is so radically unlike human suffering that it loses its moral valence). Rather, the claim has been that if brutes do suffer, that doesn't matter (except insofar as it also causes human suffering).

We both think moral value is extremely complicated; but I think it's relatively disjunctive, whereas you think it's relatively conjunctive. I'd be interested to hear arguments for why we should consider it conjunctive, but I still think it's important that to the extent we're likely to be in error at all, we err on the side of privileging disjunctivity.

Replies from: Qiaochu_Yuan, None
comment by Qiaochu_Yuan · 2013-06-17T20:17:48.008Z · LW(p) · GW(p)

In a slogan: I think that in addition to moral relationships of friendship, kinship, alliance and collaboration, there are partly autonomous relationships of caregiving that presuppose quite different faculties.

This is a good point. I'll have to think about this.

comment by [deleted] · 2013-06-17T21:36:57.759Z · LW(p) · GW(p)

This is quite a good post, thanks for taking the time to write it. You've said before that you think vegetarianism is the morally superior option. While you've done a good job here of defending the coherence or possibility of the moral significance of animal suffering, would you be willing to go so far as to defend such moral significance simpliciter?

I ask in part because I don't think that erring on the side of disjunctivity as I think you construe it (where this involves something like a proportional distribution of moral worth on the basis of a variety of different merits and relationships) is morally safer than operating as if there were a hard and flat moral floor. Operating on your basis we might be less likely to exclude from moral consideration those that ought to be included, but we will be more likely to distribute moral value unevenly where it should be evenly distributed. We've historically had both problems, and I don't know that one or the other is necessarily the more disastrous. Exclusion has led to some real moral abominations (the Holocaust, I guess), but uneven distribution where even distribution is called for has led to some long-standing and terribly unjust political traditions (feudalism, say).

EDIT: I should add, and not at all by way of criticism, that for all the pejoratives aimed at Aristotelian thinking in this last exchange, your conclusion (excluding the safety bit) is strikingly Aristotelian.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-06-17T21:53:05.422Z · LW(p) · GW(p)

Thanks, hen! My primary argument is indeed that if animals suffer, that is morally significant — not that this thesis is coherent or possible, but that it's true. My claim is that although humans are capable both of suffering and of socializing, and both of these have ethical import, the import of suffering is not completely dependent on the import of socializing, but has some valence in its own right. This allows us to generalize the undesirability of suffering both to sapient nonsocial sentient beings and to nonsapient nonsocial sentient beings, independent of whether they would be easy, hard, or impossible to modify into a social being.

Operating on your basis we might be less likely to exclude from moral consideration those that ought to be included, but we will be more likely to distribute moral value unevenly where it should be evenly distributed.

It's hard to talk about this in the abstract, so maybe you should say more about what you're worried about, and (ideally) about some alternative that avoids the problem. It sounds like you're suggesting that if we assert that humans have a richer set of rights than non-humans — if we allow value to admit of many degrees and multiple kinds — then we may end up saying that some groups of humans intrinsically deserve more rights than others, in a non-meritocratic way. Is that your worry?

Replies from: None
comment by [deleted] · 2013-06-17T22:12:48.754Z · LW(p) · GW(p)

the import of suffering is not completely dependent on the import of socializing,

Thanks for filling that out. Could I ask you to continue with a defense of this premise in particular? (You may have done this already, and I may have missed it. If so, I'd be happy to be pointed in the right direction).

Then we may end up saying that some groups of humans deserve more rights than others, in a non-meritocratic way. Is that your worry?

My worry is with both meritocratic and non-meritocratic unevenness. You said earlier that Qiaochu's motivation for excluding animals from moral consideration was based on a desire for simplicity. I think this is right, but could use a more positive formulation: I think on the whole people want this simplicity because they want to defend the extremely potent modern intuition that moral hierarchy is unqualifiedly wrong. At least part of this idea is to leave our moral view fully determined by our understanding of humanity: we owe to every human (or relevantly human-like thing) the moral consideration we take ourselves to be owed. Most vegetarians, I would think, deploy such a flat moral floor (at sentience) for defending the rights of animals.

So one view Qiaochu was attacking (I think) by talking about the complexity of value is the view that something so basic as sentience could be the foundation for our moral floor. Your response was not to argue for sentience as such a basis, but to deny the moral floor in favor of a moral stairway, thereby eliminating the absurdity of regarding chickens as full-fledged people.

The reason this might be worrying is that our understanding of what it is to be human, or of what kinds of things are morally valuable, now fails to determine our ascription of moral worth. So we admit the possibility of distributing moral worth according to intelligence, strength, military power, wealth, health, beauty, etc., and thereby denying to many people who fall short in these ways the moral significance we generally think they're owed. It was a view very much along these lines that led Aristotle to posit that some human beings, incapable of serious moral achievement for social or biological reasons, were natural slaves. He did not say they were morally insignificant, mind, just that given their capacities slavery was the best they could do.

I'm not saying you're committed to any kind of moral oligarchy, only that because this kind of disjunctive strategy eschews the direct and solitary link between humanity and moral value, it cannot be called the safer option without further ado. A society in error could do as much damage proceeding by your disjunctive rule (by messing up the distributions) as they could proceeding with a conjunctive rule (by messing up who counts as 'human').

An alternative might be to say that there is moral value proper, which every human being (or relevantly human-like thing) has, and then there are a variety of defective or subordinate forms of moral significance depending on how something is related to that moral core. This way, you'd keep the hard moral floor, but you'd be able to argue for the (non-intrinsic) moral value of non-humans. (Unfortunately, this alternative also deploys an Aristotelian idea: core-dependent homonymy.)

comment by ialdabaoth · 2013-06-17T20:15:43.803Z · LW(p) · GW(p)

I'm not using "human" to mean "biological human." I'm using "human" to mean "potential friend."

The term you are looking for here is 'person'. The debate you are currently having is about what creatures are persons.

The following definitions aid clarity in this discussion:

  • Animal - a particular form of life that has evolved on earth; most animals are mobile, multicellular, and respond to their environment (but this is not universally necessary or sufficient).
  • Human - a member of the species Homo sapiens, a particular type of hairless ape
  • Person - A being which has recognized agency, and (in many moral systems) specific rights.

Note that separating 'person' from 'human' allows you to recognize the possibility that all humans are not necessarily persons in all moral systems (i.e.: apartheid regimes and ethnic cleansing schemas certainly treat many humans as non-persons; certain cultures treat certain genders as effectively non-persons, etc.). If this is uncomfortable for you, explore the edges of it until your morality restabilizes (example: brain-dead humans are still human, but are they persons?).

comment by Rob Bensinger (RobbBB) · 2013-06-17T20:03:42.842Z · LW(p) · GW(p)

I don't think most fish have complicated enough minds for this to be true.

Just keep adding complexity until you get an intelligent socializer. If an AI can be built, and prosthetics can be built, then a prosthetic that confers intelligence upon another system can be built. At worst, the fish brain would just play an especially small or especially indirect causal role in the rest of the brain's functioning.

Morality lacks an analogous notion of moral experiment. (We wouldn't be having this conversation if there were such a thing as a moral experiment; I'd be happy to defer to the evidence in that case, the same as I would in any scientific field where I'm not a domain expert.)

You are deferring to evidence; I just haven't given you good evidence yet that you do indeed feel sympathy for non-human animals (e.g., I haven't bombarded you with videos of tormented non-humans; I can do so if you wish), nor that you're some sort of exotic fish-sociopath in this regard. If you thought evidence had no bearing on your current moral sentiments, then you wouldn't be asking me for arguments at all. However, because we're primarily trying to figure out our own psychological states, a lot of the initial evidence is introspective -- we're experimenting on our own judgments, testing out different frameworks and seeing how close they come to our actual values. (Cf. A Priori.)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-17T20:31:08.391Z · LW(p) · GW(p)

If an AI can be built, and prosthetics can be built, then a prosthetic that confers intelligence upon another system can be built.

But in that case I would be tempted to ascribe moral value to the prosthetic, not the fish.

However, because we're primarily trying to figure out our own psychological states, a lot of the initial evidence is introspective

Agreed, but this is why I think the analogy to science is inappropriate.

Replies from: RobbBB, TheOtherDave
comment by Rob Bensinger (RobbBB) · 2013-06-17T20:39:33.953Z · LW(p) · GW(p)

But in that case I would be tempted to ascribe moral value to the prosthetic, not the fish.

I doubt there will always be a fact of the matter about where an organism ends and its prosthesis begins. My original point here was that we can imagine a graded scale of increasingly human-socialization-capable organisms, and it seems unlikely that Nature will be so kind as to provide us with a sharp line between the Easy-To-Make-Social and the Hard-To-Make-Social. We can make that point by positing prosthetic enhancements of increasing complexity, or genetic modifications to fish brain development, or whatever you please.

this is why I think the analogy to science is inappropriate.

Fair enough! I don't have a settled view on how much moral evidence should be introspective v. intersubjective, as long as we agree that it's broadly empirical.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-17T20:46:02.553Z · LW(p) · GW(p)

With respect to this human-socialization-as-arbiter-of-moral-weight idea, are you endorsing the threshold which human socialization currently demonstrates as the important threshold, or the threshold which human socialization demonstrates at any given moment?

For example, suppose species X is on the wrong side of that line (however fuzzy the line might be). If instead of altering Xes so they were better able to socialize with unaltered humans and thereby had, on this view, increased moral weight, I had the ability to increase my own ability to socialize with X, would that amount to the same thing?

comment by TheOtherDave · 2013-06-17T21:29:58.731Z · LW(p) · GW(p)

I would be tempted to ascribe moral value to the prosthetic, not the fish.

Thinking about this... while I sympathize with the temptation, it does seem to me that the same mindset that leads me in this direction also leads me to ascribe moral values to human societies, rather than to individual humans.

I'm not yet sure what I want to do with that.

Replies from: None
comment by [deleted] · 2013-06-17T21:45:04.174Z · LW(p) · GW(p)

I'm not yet sure what I want to do with that.

It might be worth distinguishing a genetic condition on X from a constituting condition on X. So human society is certainly necessary to bring about the sapience and social capacities of human beings, but if you remove the human from the society once they've been brought up in the relevant way, they're no less capable of social and sapient behavior.

On the other hand, the fish-prosthetic is part of what constitutes the fish's capacity for social and sapient behavior. If the fish were removed from it, it would lose those capacities.

I think you could plausibly say that the prosthetic should be considered part of the basis for the moral worth of the fish (at the expense of the fish on its own), but refuse to say this about human societies (at the expense of the individual human) in light of this distinction.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-18T17:59:58.525Z · LW(p) · GW(p)

Hm.
Well, I agree with considering the prosthetic part of the basis of the worth of the prosthetically augmented fish, as you suggest.
And while I think we underestimate the importance of a continuing social framework for humans to be what we are, even as adults, I will agree that there's some kind of meaningful threshold to be identified such that I can be removed from human society without immediately dropping below that threshold, and there's an important difference (if perhaps not strictly a qualitative one) between me and the fish in this respect.

So, yeah, drawing this distinction allows me to ascribe moral value to individual adult humans (though not to very young children, I suppose), rather than entirely to their societies, even while embracing the general principle here.

Fair enough.

comment by Said Achmiz (SaidAchmiz) · 2013-06-14T20:25:34.826Z · LW(p) · GW(p)

I've seen that C.S. Lewis quote before, and it seems to me quite mistaken. In this part:

But surely the reason we do not execute witches is that we do not believe there are such things. If we did -- if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbors or drive them mad or bring bad weather, surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did.

Lewis seems to suggest that executing a witch, per se, is what we consider bad. But that's wrong. What was bad about witch hunts was:

  1. People were executed without anything resembling solid evidence of their guilt — which of course could not possibly have been obtained, seeing as how they were not guilty and the crimes they were accused of were imaginary; but my point is that the "trial" process was horrifically unjust and monstrously inhumane (torture to extract confessions, etc.). If witches existed today, and if we believed witches existed today, we would still (one should hope!) give them fair trials, convict only on the strength of proof beyond a reasonable doubt, accord the accused all the requisite rights, etc.

  2. Punishments were terribly inhumane — burning alive? Come now. Even if we thought witches existed today, and even if we thought the death penalty was an appropriate punishment, we'd carry it out in a more humane manner, and certainly not as a form of public entertainment (again, one would hope; at least, our moral standards today dictate thus).

So differences of factual belief are not the main issue here. The fact that, when you apply rigorous standards of evidence and fair prosecution practices to the witch issue, witchcraft disappears as a crime, is instructive (i.e. it indicates that there's no such crime in the first place), but we shouldn't therefore conclude that not believing in witches is the relevant difference between us and the Inquisition.

Replies from: MugaSofer, Qiaochu_Yuan
comment by MugaSofer · 2013-06-15T22:33:05.680Z · LW(p) · GW(p)

Considering people seemed to think that this was the best way to find witches, 1 still seems like a factual confusion.

2 was based on a Bible quote, I think. The state hanged witches.

comment by Qiaochu_Yuan · 2013-06-14T20:40:40.909Z · LW(p) · GW(p)

If witches existed today, and if we believed witches existed today, we would still (one should hope!) give them fair trials, convict only on the strength of proof beyond a reasonable doubt, accord the accused all the requisite rights, etc.

We would? That seems incredibly dangerous. Who knows what kind of things a real witch could do to a jury?

If you think humanity as a whole has made substantial moral progress throughout history, what's driven this moral progress? I can tell a story about what drives factual progress (the scientific method, improved technology) but I don't have an analogous story about moral progress. How do you distinguish the current state of affairs from "moral fashion is a random walk, so of course any given era thinks that past eras were terribly immoral"?

Replies from: army1987, SaidAchmiz
comment by A1987dM (army1987) · 2013-06-14T21:30:32.923Z · LW(p) · GW(p)

Who knows what kind of things a real witch could do to a jury?

Who knows what kind of things a real witch could do to an executioner, for that matter?

comment by Said Achmiz (SaidAchmiz) · 2013-06-14T21:17:01.446Z · LW(p) · GW(p)

We would? That seems incredibly dangerous. Who knows what kind of things a real witch could do to a jury?

There is a difference between "we should take precautions to make sure the witch doesn't blanket the courtroom with fireballs or charm the jury and all officers of the court; but otherwise human rights apply as usual" and "let's just burn anyone that anyone has claimed to be a witch, without making any attempt to verify those claims, confirm guilt, etc." Regardless of what you think would happen in practice (fear makes people do all sorts of things), it's clear that our current moral standards dictate behavior much closer to the former end of that spectrum. At the absolute least, we would want to be sure that we are executing the actual witches (because every accused person could be innocent and the real witches could be escaping justice), and, for that matter, that we're not imagining the whole witchcraft thing to begin with! That sort of certainty requires proper investigative and trial procedures.

If you think humanity as a whole has made substantial moral progress throughout history, what's driven this moral progress? I can tell a story about what drives factual progress (the scientific method, improved technology) but I don't have an analogous story about moral progress. How do you distinguish the current state of affairs from "moral fashion is a random walk, so of course any given era thinks that past eras were terribly immoral"?

That's two questions ("what drives moral progress" and "how can you distinguish moral progress from a random walk"). They're both interesting, but the former is not particularly relevant to the current discussion. (It's an interesting question, however, and Yvain makes some convincing arguments at his blog [sorry, don't have link to specific posts atm] that it's technological advancement that drives what we think of as "moral progress".)

As for how I can distinguish it from a random walk — that's harder. However, my objection was to Lewis's assessment of what constitutes the substantive difference between our moral standards and those of medieval witch hunters, which I think is totally mistaken. I do not need even to claim that we've made moral progress per se to make my objection.

comment by Said Achmiz (SaidAchmiz) · 2013-06-14T20:15:34.598Z · LW(p) · GW(p)

If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere.

No they don't. Are you saying it's not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things? In that case, I can respond that the neural correlates of human pain and human suffering might not be bad when implemented in brains that differ from human brains in certain relevant ways (Edit: and would therefore not actually qualify as pain and suffering under your new definition).

Replies from: Raemon, Lukas_Gloor
comment by Raemon · 2013-06-14T20:21:35.297Z · LW(p) · GW(p)

There's a difference between "it's possible to construct a mind" and "other particular minds are likely to be constructed a certain way." Our minds were built by the same forces that built other minds we know of. We should expect there to be similarities.

(I also would define it, not in terms of "pain and suffering" but "preference satisfaction and dissatisfaction". I think I might consider "suffering" as dissatisfaction, by definition, although "pain" is more specific and might be valuable for some minds.)

Replies from: army1987, SaidAchmiz
comment by A1987dM (army1987) · 2013-06-14T21:24:28.558Z · LW(p) · GW(p)

although "pain" is more specific and might be valuable for some minds

Such as human masochists.

comment by Said Achmiz (SaidAchmiz) · 2013-06-14T20:37:44.460Z · LW(p) · GW(p)

I agree that expecting similarities is reasonable (although which similarities, and to what extent, is the key followup question). I was objecting to the assertion of (logical?) necessity, especially since we don't even have so much as a strong certainty.

I don't know that I'm comfortable with identifying "suffering" with "preference dissatisfaction" (btw, do you mean by this "failure to satisfy preferences" or "antisatisfaction of negative preferences"? i.e. if I like playing video games and I don't get to play video games, am I suffering? Or am I only suffering if I am having experiences which I explicitly dislike, rather than simply an absence of experiences I like? Or do you claim those are the same thing?).

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-14T20:57:50.499Z · LW(p) · GW(p)

I can't speak for Raemon, but I would certainly say that the condition described by "I like playing video games and am prohibited from playing video games" is a trivial but valid instance of the category /suffering/.

Is the difficulty that there's a different word you'd prefer to use to refer to the category I'm nodding in the direction of, or that you think the category itself is meaningless, or that you don't understand what the category is (reasonably enough; I haven't provided nearly enough information to identify it if the word "suffering" doesn't reliably do so), or something else?

I'm usually indifferent to semantics, so if you'd prefer a different word, I'm happy to use whatever word you like when discussing the category with you.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-14T21:28:45.782Z · LW(p) · GW(p)

... or that you don't understand what the category is (reasonably enough; I haven't provided nearly enough information to identify it if the word "suffering" doesn't reliably do so)

That one. Also, what term we should use for what categories of things and whether I know what you're talking about is dependent on what claims are being made... I was objecting to Zack_M_Davis's claim, which I take to be something either like:

"We humans have categories of experiences called 'pain' and 'suffering', which we consider to be bad. These things are implemented in our brains somehow. If we take that implementation and put it in another kind of brain (alternatively: if we find some other kind of brain where the same or similar implementation is present), then this brain is also necessarily having the same experiences, and we should consider them to be bad also."

or...

"We humans have categories of experiences called 'pain' and 'suffering', which we consider to be bad. These things are implemented in our brains somehow. We can sensibly define these phenomena in an implementation-independent way, then if any other kind of brain implements these phenomena in some way that fits our defined category, we should consider them to be bad also."

I don't think either of those claims are justified. Do you think they are? If you do, I guess we'll have to work out what you're referring to when you say "suffering", and whether that category is relevant to the above issue. (For the record, I, too, am less interested in semantics than in figuring out what we're referring to.)

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-15T02:44:54.910Z · LW(p) · GW(p)

I don't think either of those claims are justified. Do you think they are?

There are a lot of ill-defined terms in those claims, and depending on how I define them I either do or don't. So let me back up a little.

Suppose I prefer that brain B1 not be in state S1.
Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1.
The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.

So if you mean taking the implementation of pain and suffering (S1) from our brains (B1) and putting/finding them or similar (C is high) implementations (S2) in other brains (B2), then yes, I think that if (S1) pain and suffering are bad (I antiprefer them) for us (B1), that's strong but not overwhelming evidence that (S2) pain and suffering are bad (I antiprefer them) for others (B2).

I don't actually think understanding more clearly what we mean by pain and suffering (either S1 or S2) is particularly important here. I think the important term is C.

As long as C is high -- that is, as long as we really are confident that the other brain has a "same or similar implementation", as you say, along salient dimensions (such as manifesting similar subjective experience) -- then I'm pretty comfortable saying I prefer the other brain not experience pain and suffering. And if (S2,B2) is "completely identical" to (S1,B1), I'm "certain" I prefer B2 not be in S2.

But I'm not sure that's actually what you mean when you say "same or similar implementation." You might, for example, mean that they have anatomical points of correspondence, but you aren't confident that they manifest similar experience, or something else along those lines. In which case C gets lower, and I become uncertain about my preferences with respect to (B2,S2).
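
To make the role of C concrete, here is a minimal sketch (an editorial illustration under my own assumptions, not a formalism anyone in this thread proposed): treat C as a number between 0 and 1 and scale the derived anti-preference toward (B2, S2) by it.

```python
# Minimal sketch (editorial illustration, not a proposal from this thread):
# scale how strongly I anti-prefer brain B2 being in state S2 by C, my
# confidence that (B2, S2) is relevantly similar to (B1, S1).

def derived_antipreference(base_antipreference: float, c_similarity: float) -> float:
    """Confidence-weighted anti-preference toward (B2, S2).

    base_antipreference -- how strongly I prefer B1 not be in S1 (0 to 1)
    c_similarity        -- C, confidence that (B2, S2) is relevantly similar (0 to 1)
    """
    return base_antipreference * c_similarity

# If I strongly anti-prefer my own pain (0.9), a high C (0.95) yields a strong
# derived anti-preference, while a low C (0.2) leaves the preference weak.
print(derived_antipreference(0.9, 0.95))  # 0.855
print(derived_antipreference(0.9, 0.20))  # 0.18
```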

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-15T03:33:32.400Z · LW(p) · GW(p)

Suppose I prefer that brain B1 not be in state S1.
Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1.
The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.

Is brain B1 your brain in this scenario? Or just... some brain? I ask because I think the relevant question is whether the person whose brain it is prefers that brain Bx be or not be in state Sx, and we need to first answer that, and only then move on to what our preferences are w.r.t. other beings' brain states.

Anyway, it seemed to me like the claim that Zack_M_Davis was making was about the case where certain neural correlates (or other sorts of implementation details) of what we experience as "pain" and "suffering" (which, for us, might usefully be operationalized as "brain states we prefer not to be in") are found in other life-forms, and we thus conclude that a) these beings are therefore also experiencing "pain" and "suffering" (i.e. are having the same subjective experiences), and b) that these beings, also, have antipreferences about those brain states...

Those conclusions are not entailed by the premises. We might expect them to be true for evolutionarily related life-forms, but my objection was to the claim of necessity.

Or, he could have been making the claim that we can usefully describe the category of "pain" and/or "suffering" in ways that do not depend on neural correlates or other implementation details (perhaps this would be a functional description of some sort, or a phenomenological one; I don't know), and that if we then discover phenomena matching that category in other life-forms, we should conclude that they are bad.

I don't think that conclusion is justified either... or rather, I don't think it's instructive. For instance, Alien Species X might have brain states that they prefer not to be in, but their subjective experience associated with those brain states bears no resemblance in any way to anything that we humans experience as pain or suffering: not phenomenologically, not culturally, not neurally, etc. The only justification for referring to these brain states as "suffering" is by definition. And we all know that arguing "by definition" makes a def out of I and... wait... hm... well, it's bad.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-15T03:48:46.273Z · LW(p) · GW(p)

My brain is certainly an example of a brain that I prefer not be in pain, though not the only example.

My confidence that brain B manifests a mind that experiences pain and suffering given certain implementation (or functional, or phenomenological, or whatever) details depends a lot on those details. As does my confidence that B's mind antiprefers the experiential correlates of those details. I agree that there's no strict entailment here, though, "merely" evidence.

That said, mere evidence can get us pretty far. I am not inclined to dismiss it.

comment by Lukas_Gloor · 2013-06-14T22:39:02.365Z · LW(p) · GW(p)

No they don't. Are you saying it's not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things?

I'd do it that way. It doesn't strike me as morally urgent to prevent people with pain asymbolia from experiencing the sensation of "pain". (Subjects report that they notice the sensation of pain, but they claim it doesn't bother them.) I'd define suffering as wanting to get out of the state you're in. If you're fine with the state you're in, it is not what I consider to be suffering.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-15T00:08:52.842Z · LW(p) · GW(p)

Ok, that seems workable to a first approximation.

So, a question for anyone who both agrees with that formulation and thinks that "we should care about the suffering of animals" (or some similar view):

Do you think that animals can "want to get out of the state they're in"?

Replies from: Raemon
comment by Raemon · 2013-06-15T00:43:58.186Z · LW(p) · GW(p)

Yes?

This varies from animal to animal. There's a fair amount of research/examination into which animals appear to do so, some of which is linked to elsewhere in this discussion. (At least some examination was linked to in response to a statement about fish)

comment by Lukas_Gloor · 2013-06-14T13:13:52.386Z · LW(p) · GW(p)

On why the suffering of one species would be more important than the suffering of another:

Because one of those species is mine?

Does that also apply to race and gender? If not, why not? Assuming a line-up of ancestors, always mother and daughter, from Homo sapiens back to the common ancestor of humans and chickens and forward in time again to modern chickens, where would you draw the line? A common definition for species in biology is that two groups of organisms belong to different species if they cannot have fertile offspring. Is that really a morally relevant criterion that justifies treating a daughter differently from her mother? Is that really the criterion you want to use for making your decisions? And does it at all bother you that racists or sexists can use an analogous line of defense?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-14T18:12:47.734Z · LW(p) · GW(p)

Does that also apply to race and gender? If not, why not?

I feel psychologically similar to humans of different races and genders but I don't feel psychologically similar to members of most different species.

A common definition for species in biology is that two groups of organisms belong to different species if they cannot have fertile offspring. Is that really a morally relevant criterion that justifies treating a daughter differently from her mother?

Uh, no. System 1 doesn't know what a species is; that's just a word System 2 is using to approximately communicate an underlying feeling System 1 has. But System 1 knows what a friend is. Other humans can be my friends, at least in principle. Probably various kinds of posthumans and AIs can as well. As far as I can tell, a fish can't, not really.

This general argument of "the algorithm you claim to be using to make moral decisions might fail on some edge cases, therefore it is bad" strikes me as disingenuous. Do you have an algorithm you use to make moral decisions that doesn't have this property?

And does it at all bother you that racists or sexists can use an analogous line of defense?

Also no. I think current moral fashion is prejudiced against prejudice. Racism and sexism are not crazy or evil points of view; historically, they were points of view held by many sane humans who would have been regarded by their peers as morally upstanding. Have you read What You Can't Say?

Replies from: TheOtherDave, Lukas_Gloor
comment by TheOtherDave · 2013-06-14T18:18:44.026Z · LW(p) · GW(p)

I should add to this that even if I endorse what you call "prejudice against prejudice" here -- that is, even if I agree with current moral fashion that racism and sexism are not as good as their absence -- it doesn't follow that because racists or sexists can use a particular argument A as a line of defense, there's therefore something wrong with A.

There are all sorts of positions which I endorse and which racists and sexists (and Babyeaters and Nazis and Sith Lords and...) might also endorse.

comment by Lukas_Gloor · 2013-06-14T20:32:46.557Z · LW(p) · GW(p)

This general argument of "the algorithm you claim to be using to make moral decisions might fail on some edge cases, therefore it is bad" strikes me as disingenuous. Do you have an algorithm you use to make moral decisions that doesn't have this property?

Actually, I do. I try to rely on System 1 as little as possible when it comes to figuring out my terminal value(s). One reason for that, I guess, is that at some point I started out with the premise that I don't want to be the sort of person that would have been racist or sexist in previous centuries. If you don't share that premise, there is no way for me to show that you're being inconsistent -- I acknowledge that.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-14T20:43:53.439Z · LW(p) · GW(p)

Actually, I do.

Wow! So you've solved friendly AI? Eliezer will be happy to hear that.

Replies from: MugaSofer
comment by MugaSofer · 2013-06-15T22:35:40.614Z · LW(p) · GW(p)

I'm pretty sure Eliezer already knew our brains contained the basis of morality.

comment by Kaj_Sotala · 2013-06-16T09:15:40.900Z · LW(p) · GW(p)

Would you say that their morality is arbitrary and unjustified? If so, I wonder why they're so similar. If I were to imagine a collection of arbitrary moralities, I'd expect it to look much more diverse than this. Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now?

I should probably clarify - when I said that valuing humans over animals strikes me as arbitrary, I'm saying that it's arbitrary within the context of my personal moral framework, which contains no axioms from which such a distinction could be derived. All morality is ultimately arbitrary and unjustified, so that's not really an argument for or against any moral system. Internal inconsistencies could be arguments, if you value consistency, but your system does seem internally consistent. My original comment was meant more of an explanation of my initial reaction to your question rather than anything that would be convincing on logical grounds, though I did also assign some probability to it possibly being convincing on non-logical grounds. (Our moral axioms are influenced by what other people think, and somebody expressing their disagreement with a moral position has some chance of weakening another person's belief in that position, regardless of whether that effect is "logical".)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-16T09:30:29.927Z · LW(p) · GW(p)

I've been meaning to write a post about how I think it's a really, really bad idea to think about morality in terms of axioms. This seems to be a surprisingly (to me) common habit among LW types, especially since I would have thought it was a habit the metaethics sequence would have stomped out.

(You shouldn't regard it as a strength of your moral framework that it can't distinguish humans from non-human animals. That's evidence that it isn't capable of capturing complexity of value.)

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-06-16T10:10:25.964Z · LW(p) · GW(p)

I agree that thinking about morality exclusively in terms of axioms in a classical logical system is likely to be a rather bad idea, since that makes one underestimate the complexity of morality and the strength of non-logical influences, and overestimate the extent to which it resembles a system of classical logic in general. But I'm not sure it's that problematic as long as you keep in mind that "axioms" is really just shorthand for something like "moral subprograms" or "moral dynamics".

I did always read the metaethics sequence as establishing the existence of something similar-enough-to-axioms-that-we-might-as-well-use-the-term-axioms-as-shorthand-for-them, with e.g. No Universally Compelling Arguments and Created Already In Motion arguing that you cannot convince a mind about the correctness of some action unless its mind contains a dynamic which reacts to your argument in the way you wish - in other words, unless your argument builds on things that the mind's decision-making system already cares about, and which could be described as axioms when composing a (static) summary of the mind's preferences.

You shouldn't regard it as a strength of your moral framework that it can't distinguish humans from non-human animals. That's evidence that it isn't capable of capturing complexity of value.

I'm not really sure of what you mean here. For one, I didn't say that my moral framework can't distinguish humans and non-humans - I do e.g. take a much more negative stance on killing humans than animals, because killing humans would have a destabilizing effect on society and people's feelings of safety, which would contribute to the creation of much more suffering than killing animals would.

Also, whether or not my personal moral framework can capture complexity of value seems irrelevant - CoV is just the empirical thesis that people in general tend to care about a lot of complex things. My personal consciously-held morals are what I currently want to consciously focus on, not a description of what others want, nor something that I'd program into an AI.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-06-16T14:12:59.335Z · LW(p) · GW(p)

Also, whether or not my personal moral framework can capture complexity of value seems irrelevant - CoV is just the empirical thesis that people in general tend to care about a lot of complex things. My personal consciously-held morals are what I currently want to consciously focus on [...]

Well, I don't think I should care what I care about. The important thing is what's right, and my emotions are only relevant to the extent that they communicate facts about what's right. What's right is too complex, both in definition and consequentialist implications, and neither my emotions nor my reasoned decisions are capable of accurately capturing it. Any consciously-held morals are only a vague map of morality, not morality itself, and so shouldn't hold too much import, on pain of moral wireheading/acceptance of a fake utility function.

(Listening to moral intuitions, possibly distilled as moral principles, might give the best moral advice that's available in practice, but that doesn't mean that the advice is any good. Observing this advice might fail to give an adequate picture of the subject matter.)

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-06-16T22:24:12.108Z · LW(p) · GW(p)

I must be misunderstanding this comment somehow? One still needs to decide what actions to take during every waking moment of their lives, and "in deciding what to do, don't pay attention to what you want" isn't very useful advice. (It also makes any kind of instrumental rationality impossible.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-06-16T23:08:52.982Z · LW(p) · GW(p)

What you want provides some information about what is right, so you do pay attention. When making decisions, you can further make use of moral principles not based on what you want at a particular moment. In both cases, making use of these signals doesn't mean that you expect them to be accurate, they are just the best you have available in practice.

Estimate of the accuracy of the moral intuitions/principles translates into an estimate of value of information about morality. Overestimation of accuracy would lead to excessive exploitation, while an expectation of inaccuracy argues for valuing research about morality comparatively more than pursuit of moral-in-current-estimation actions.
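
As a rough editorial illustration of that last point (my own toy numbers, not anything Vladimir_Nesov proposed), compare acting on the current moral estimate with paying a small cost to improve it first; the less accurate we think our intuitions are, the more the "research first" branch dominates.

```python
# Toy sketch (editorial illustration with made-up numbers): value of
# information about morality as a function of how accurate we think our
# current moral intuitions are.

def ev_act_now(p_correct: float, gain: float = 1.0, loss: float = -1.0) -> float:
    """Expected value of exploiting the current moral estimate."""
    return p_correct * gain + (1 - p_correct) * loss

def ev_research_first(p_correct_after: float, research_cost: float = 0.1,
                      gain: float = 1.0, loss: float = -1.0) -> float:
    """Expected value of improving the estimate before acting."""
    return ev_act_now(p_correct_after, gain, loss) - research_cost

# If intuitions are thought to be 95% accurate, acting now beats researching
# first; if they are thought to be only 60% accurate, research clearly wins.
for p_now, p_after in [(0.95, 0.99), (0.60, 0.80)]:
    print(round(ev_act_now(p_now), 2), round(ev_research_first(p_after), 2))
```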

Replies from: Osiris, Kaj_Sotala
comment by Osiris · 2013-06-19T10:14:18.950Z · LW(p) · GW(p)

I'm not a very well educated person in this field, but if I may:

I see my various squishy feelings (desires and what-is-right intuitions are in this list) as loyal pets. Sometimes, they must be disciplined and treated with suspicion, but for the most part, they are there to please you in their own dumb way. They're no more enemies than one's preference for foods. In my care for them, I train and reward them, not try to destroy or ignore them. Without them, I have no need to DO better among other people, because I would not be human--that is, some things are important only because I'm a barely intelligent ape-man, and they should STAY important as long as I remain a barely intelligent ape-man. Ignoring something going on in one's mind, even when one KNOWS it is wrong, can be a source of pain, I've found--hypocrisy and indecision are not my friends.

Hope I didn't make a mess of things with this comment.

comment by Kaj_Sotala · 2013-06-19T07:03:23.434Z · LW(p) · GW(p)

I'm roughly in agreement, though I would caution that the exploration/exploitation model is a problematic one to use in this context, for two reasons:

1) It implies a relatively clear map/territory split: there are our real values, and our conscious model of them, and errors in our conscious model do not influence the actual values. But to some extent, our conscious models of our values do shape our unconscious values in that direction - if someone switches to an exploitation phase "too early", then over time, their values may actually shift over to what the person thought they were.

2) Exploration/exploitation also assumes that our true values correspond to something akin to an external reward function: if our model is mistaken, then the objectively correct thing to do would be to correct it. In other words, if we realize that our conscious values don't match our unconscious ones, we should revise our conscious values. And sometimes this does happen. But on other occasions, what happens is that our conscious model has become installed as a separate and contradictory set of values, and we need to choose which of the values to endorse (in which situations). This happening is a bad thing if you tend to primarily endorse your unconscious values or a lack of internal conflict, but arguably a good thing if you tend to primarily endorse your conscious values.

The process of arriving at our ultimate values seems to be both an act of discovering them and an act of creating them, and we probably shouldn't use terminology like exploration/exploitation that implies that it would be just one of those.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-06-22T12:26:16.439Z · LW(p) · GW(p)

But to some extent, our conscious models of our values do shape our unconscious values in that direction

This is value drift. At any given time, you should fix (i.e. notice, as a concept) the implicit idealized values at that time and pursue them even if your hardware later changes and starts implying different values (in the sense where your dog or your computer or an alien also should (normatively) pursue them forever, they are just (descriptively) unlikely to, but you should plot to make that more likely, all else equal). As an analogy, if you are interested in solving different puzzles on different days, then the fact that you are no longer interested in solving yesterday's puzzle doesn't address the problem of solving yesterday's puzzle. And idealized values don't describe valuation of you, the abstract personal identity, of your actions and behavior and desires. They describe valuation of the whole world, including future you with value drift as a particular case that is not fundamentally special. The problem doesn't change, even if the tendency to be interested in a particular problem does. The problem doesn't get solved because you are no longer interested in it. Solving a new, different problem does not address the original problem.

Exploration/exploitation also assumes that our true values correspond to something akin to an external reward function: if our model is mistaken, then the objectively correct thing to do would be to correct it

The nature of idealized values is irrelevant to this point: whatever they are, they are that thing that they are, so that any "correction" discards the original problem statement and replaces it with a new one. What you can and should correct are intermediate conclusions. (Alternatively, we are arguing about definitions, and you read in my use of the term "values" what I would call intermediate conclusions, but then again I'm interested in you noticing the particular idea that I refer to with this term.)

if we realize that our conscious values don't match our unconscious ones

I don't think "unconscious values" is a good proxy for abstract implicit valuation of the universe, consciously-inaccessible processes in the brain are at a vastly different level of abstraction compared to the idealization I'm talking about.

The process of arriving at our ultimate values seems to be both an act of discovering them and an act of creating them

This might be true in the sense that humans probably underdetermine the valuation of the world, so that there are some situations that our implicit preferences can't compare even in principle. The choice between such situations is arbitrary according to our values. Or our values might just recursively determine the correct choice in every single definable distinction. Any other kind of "creation" will contradict the implicit answer, and so even if it is the correct thing to do given the information available at the time, later reflection can show it to be suboptimal.

(More constructively, the proper place for creativity is in solving problems, not in choosing a supergoal. The intuition is confused on this point, because humans never saw a supergoal, all sane goals that we formulate for ourselves are in one way or another motivated by other considerations, they are themselves solutions to different problems. Thus, creativity is helpful in solving those different problems in order to recognize which new goals are motivated. But this is experience about subgoals, not idealized supergoals.)

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-06-26T18:34:26.673Z · LW(p) · GW(p)

I think that the concept of idealized value is obviously important in an FAI context, since we need some way of formalizing "what we want" in order to have any way of ensuring that an AI will further the things we want. I do not understand why the concept would be relevant to our personal lives, however.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-07-06T14:07:32.693Z · LW(p) · GW(p)

I think that the concept of idealized value is obviously important in an FAI context, since we need some way of formalizing "what we want" in order to have any way of ensuring that an AI will further the things we want.

The question of what is normatively the right thing to do (given the resources available) is the same for a FAI and in our personal lives. My understanding is that "implicit idealized value" is the shape of the correct answer to it, not just a tool restricted to the context of FAI. It might be hard for a human to proceed from this concept to concrete decisions, but this is a practical difficulty, not a restriction on the scope of applicability of the idea. (And to see how much of a practical difficulty it is, it is necessary to actually attempt to resolve it.)

I do not understand why the concept would be relevant to our personal lives, however.

If idealized value indicates the correct shape of normativity, the question should instead be, How are our personal lives relevant to idealized value? One way was discussed a couple of steps above in this conversation: exploitation/exploration tradeoff. In pursuit of idealized values, if in our personal lives we can't get much information about them, a salient action is to perform/support research into idealized values (or relevant subproblems, such as preventing/evading global catastrophes).

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-07-09T14:31:16.421Z · LW(p) · GW(p)

what is normatively the right thing to do (given the resources available)

What does this mean? It sounds like you're talking about some kind of objective morality?

comment by A1987dM (army1987) · 2013-06-13T16:33:22.048Z · LW(p) · GW(p)

why the suffering of red-haired people should count equally to the suffering of black-haired people

I've interacted with enough red-haired people and enough black-haired people that (assuming the anti-zombie principle) I'm somewhat confident that there's no big difference on average between the ways they suffer. I'm nowhere near as confident about fish.

Replies from: Kaj_Sotala, shminux
comment by Kaj_Sotala · 2013-06-13T19:16:57.805Z · LW(p) · GW(p)

I already addressed that uncertainty in my comment:

Yes, it is plausible that once your nervous system becomes simple enough, you no longer experience anything that we would classify as suffering, but then you said "human suffering is more important", not "there are some classes of animals that suffer less".

To elaborate: it's perfectly reasonable to discount the suffering of e.g. fish by some factor because one thinks that fish probably suffer less. But as I read it, someone who says "human suffering is more important" isn't saying that: they're saying that they wouldn't care about animal suffering even if it was certain that animals suffered just as much as humans, or even if it was certain that animals suffered more than humans. It's saying that no matter the intensity or nature of the suffering, only suffering that comes from humans counts.

comment by Shmi (shminux) · 2013-06-13T16:44:57.204Z · LW(p) · GW(p)

I'm nowhere near as confident about fish.

Even less so about silverfish, despite its complex mating rituals.

comment by Rob Bensinger (RobbBB) · 2013-06-13T08:33:25.552Z · LW(p) · GW(p)

Human suffering might be orders of magnitude more important. (Though: what reason do you have in mind for this?) But non-human animal suffering is likely to be orders of magnitude more common. Some non-human animals are probably capable of suffering, and we care a great deal about suffering in the case of humans (as, presumably, we would in the case of intelligent aliens). So it seems arbitrary to exclude non-human animal suffering from our concerns completely. Moreover, if you're uncertain about whether animals suffer, you should err on the side of assuming that they do because this is the safer assumption. Mistakenly killing thousands of suffering moral patients over your lifetime is plausibly a much bigger worry than mistakenly sparing thousands of unconscious zombies and missing out on some mouth-pleasures.

I'm not a vegetarian myself, but I do think vegetarianism is a morally superior option. I also think vegetarians should adopt a general policy of not paying people to become vegetarians (except perhaps as a short-term experiment, to incentivize trying out the lifestyle).

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-13T19:03:31.028Z · LW(p) · GW(p)

Human suffering might be orders of magnitude more important. (Though: what reason do you have in mind for this?)

I'm a human and I care about humans. Animals only matter insofar as they affect the lives of humans. Is this really such a difficult concept?

But non-human animal suffering is likely to be orders of magnitude more common.

I don't mean per organism, I mean in aggregate. In aggregate, I think the totality of animal suffering is orders of magnitude less important than the totality of human suffering.

Moreover, if you're uncertain about whether animals suffer, you should err on the side of assuming that they do because this is the safer assumption.

I'm not disagreeing that animals suffer. I'm telling you that I don't care whether they suffer.

Replies from: Pablo_Stafforini, RobbBB, shminux, Eliezer_Yudkowsky
comment by Pablo (Pablo_Stafforini) · 2013-06-13T19:45:09.840Z · LW(p) · GW(p)

I'm a human and I care about humans.

You are many things: a physical object, a living being, a mammal, a member of the species Homo sapiens, an East Asian (I believe), etc. What's so special about the particular category you picked?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-13T19:51:03.804Z · LW(p) · GW(p)

The psychological unity of humankind. See also this comment.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-06-13T20:08:08.669Z · LW(p) · GW(p)

Presumably mammals also exhibit more psychological similarity than non-mammals, and the same is probably true about East Asians relative to members of other races. What makes the psychological unity of mankind special?

Moreover, it seems that insofar as you care about humans because they have certain psychological traits, you should care about any creature that has those traits. Since many animals have many of the traits that humans have, and some animals have those traits to a greater degree than some humans do, it seems you should care about at least some nonhuman animals.

Replies from: Qiaochu_Yuan, army1987, Nornagest, SaidAchmiz
comment by Qiaochu_Yuan · 2013-06-13T21:42:20.019Z · LW(p) · GW(p)

it seems you should care about at least some nonhuman animals.

I'm willing to entertain this possibility. I've recently been convinced that I should consider caring about dolphins and other similarly intelligent animals, possibly including pigs (so I might be willing to give up pork). I still don't care about fish or chickens. I don't think I can have a meaningful relationship with a fish or a chicken even in principle.

comment by A1987dM (army1987) · 2013-06-15T19:10:13.253Z · LW(p) · GW(p)

Presumably mammals also exhibit more psychological similarity than non-mammals, and the same is probably true about East Asians relative to members of other races. What makes the psychological unity of mankind special?

I suspect that if you plotted all living beings by psychological similarity with Qiaochu_Yuan, there would be a much bigger gap between the -- [reminds himself about small children, people with advanced-stage Alzheimer's, etc.] never mind.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-06-15T19:12:25.005Z · LW(p) · GW(p)

:-)

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-16T16:54:34.493Z · LW(p) · GW(p)

(I could steelman my yesterday self by noticing that even though small children aren't similar to QY they can easily become so in the future, and by replacing “gap” with “sparsely populated region”.)

comment by Nornagest · 2013-06-14T07:58:54.461Z · LW(p) · GW(p)

Moreover, it seems that insofar as you care about humans because they have certain psychological traits, you should care about any creature that has those traits. Since many animals have many of the traits that humans have, and some animals have those traits to a greater degree than some humans do, it seems you should care about at least some nonhuman animals.

Doesn't follow. If we imagine a personhood metric for animals evaluated over some reasonably large number of features, it might end up separating (most) humans from all nonhuman animals even if for each particular feature there exist some nonhuman animals that beat humans on it. There's no law of ethics saying that the parameter space has to be small.

It's not likely to be a clean separation, and there are almost certainly some exceptional specimens of H. sapiens that wouldn't stand up to such a metric, but -- although I can't speak for Qiaochu -- that's a bullet I'm willing to bite.
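
To make the point concrete, here is a toy numerical sketch (all feature names and scores are invented purely for illustration, not drawn from any real metric): each nonhuman beats the human on at least one feature, yet an aggregate over all the features still separates them.

```python
# Toy personhood metric: an unweighted sum over several made-up features.
# All names and numbers below are hypothetical, chosen only to illustrate
# that aggregate separation is possible without per-feature separation.

features = ["memory", "empathy", "tool_use", "self_recognition", "communication"]

typical_human = {"memory": 6, "empathy": 7, "tool_use": 7,
                 "self_recognition": 8, "communication": 9}

animals = {
    "chimp":    {"memory": 8, "empathy": 5, "tool_use": 5,
                 "self_recognition": 6, "communication": 3},
    "elephant": {"memory": 7, "empathy": 8, "tool_use": 3,
                 "self_recognition": 5, "communication": 4},
    "crow":     {"memory": 5, "empathy": 2, "tool_use": 8,
                 "self_recognition": 4, "communication": 3},
}

def personhood(scores):
    # Any monotone aggregate behaves similarly; a plain sum keeps it simple.
    return sum(scores[f] for f in features)

print("human:", personhood(typical_human))  # 37
for name, scores in animals.items():
    beats = [f for f in features if scores[f] > typical_human[f]]
    print(name, personhood(scores), "beats the human on:", beats)
# Every animal beats the human on some feature, but every aggregate score
# (27, 27, 22) is below the human's 37 -- separation in the aggregate only.
```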

comment by Said Achmiz (SaidAchmiz) · 2013-06-13T20:12:12.050Z · LW(p) · GW(p)

Since many animals have many of the traits that humans have, and some animals have those traits to a greater degree than some humans do, it seems you should care about at least some animals.

Does not follow, since an equally valid conclusion is that Qiaochu_Yuan should not-care about some humans (those that exhibit relevant traits less than some nonhuman animals). One person's modus ponens is etc.

comment by Rob Bensinger (RobbBB) · 2013-06-13T19:48:30.409Z · LW(p) · GW(p)

I'm a human and I care about humans. Animals only matter insofar as they affect the lives of humans.

Every human I know cares at least somewhat about animal suffering. We don't like seeing chickens endlessly and horrifically tortured -- and when we become vividly acquainted with such torture, our not-liking-it generally manifests as a desire for the torture to stop, not just as a desire to become ignorant that this is going on so it won't disturb our peace of mind. I'll need more information to see where the disanalogy is supposed to be between compassion for other species and compassion for other humans.

I'm not disagreeing that animals suffer. I'm telling you that I don't care whether they suffer.

Are you certain you don't care?

Are you certain that you won't end up viewing this dispassion as a bias on your part, analogous to people in history who genuinely didn't care at all about black people (but would regret and abandon this apathy if they knew all the facts)?

If you feel there's any realistic chance you might discover that you do care in the future, you should again err strongly on the side of vegetarianism. Feeling a bit silly 20 years from now because you avoided torturing beings it turns out you don't care about is a much smaller cost than learning 20 years from now you're the Hitler of cows. Vegetarianism accommodates meta-uncertainty about ethical systems better than its rivals do.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-13T21:47:57.089Z · LW(p) · GW(p)

I'll need more information to see where the disanalogy is supposed to be between compassion for other species and compassion for other humans.

I don't feel psychologically similar to a chicken in the same way that I feel psychologically similar to other humans.

Are you certain you don't care?

No, or else I wouldn't be asking for arguments.

If you feel there's any chance you might discover that you do care in the future, you should again err strongly on the side of vegetarianism. Feeling a bit silly 20 years from now because you avoided torturing beings it turns out you don't care about is a much smaller cost than learning 20 years from now you're the Hitler of cows. Vegetarianism accommodates meta-uncertainty about ethical systems better than its rivals do.

This is a good point.

Replies from: RobbBB, Jiro
comment by Rob Bensinger (RobbBB) · 2013-06-15T08:28:12.025Z · LW(p) · GW(p)

I don't feel psychologically similar to a chicken in the same way that I feel psychologically similar to other humans.

I don't either, but unless I can come up with a sharp and universal criterion for distinguishing all chickens from all humans, chickens' psychological alienness to me will seem a difference of degree more than of kind. It's a lot easier to argue that chicken suffering matters less than human suffering (or to argue that chickens are zombies) than to argue that chicken suffering is completely morally irrelevant.

Some chickens may very well have more psychologically in common with me than I have in common with certain human infants or with certain brain-damaged humans; but I still find myself able to feel that sentient infants and disabled sentient humans oughtn't be tortured. (And not just because I don't want their cries to disturb my own peace of mind. Nor just because they could potentially become highly intelligent, through development or medical intervention. Those might enhance the moral standing of any of these organisms, but they don't appear to exhaust it.)

comment by Jiro · 2013-06-14T19:53:16.077Z · LW(p) · GW(p)

That's not a good point; that's a variety of Pascal's Mugging: you're suggesting that the fact that the possible consequence is large ("I tortured beings" is a really negative thing) means that even if the chance is small, you should act on that basis.

Replies from: BerryPick6
comment by BerryPick6 · 2013-06-14T23:39:56.989Z · LW(p) · GW(p)

It's not a variant of Pascal's Mugging, because the chances aren't vanishingly small and the payoff isn't nearly infinite.

comment by Shmi (shminux) · 2013-06-13T19:35:05.360Z · LW(p) · GW(p)

I'm telling you that I don't care whether they suffer.

I don't believe you. If you see someone torturing a cat, a dolphin or a monkey, would you feel nothing? (Suppose that they are not likely to switch to torturing humans, to avoid "gateway torture" complications.)

Replies from: TheOtherDave, Qiaochu_Yuan, SaidAchmiz
comment by TheOtherDave · 2013-06-13T21:29:33.733Z · LW(p) · GW(p)

My problem with this question is that if I see video of someone torturing a cat when I am confident there was no actual cat-torturing involved in creating those images (e.g., I am confident it was all photoshopped), what I feel is pretty much indistinguishable from what I feel if I see video of someone torturing a cat when I am confident there was actual cat-torturing.

So I'm reluctant to treat what I feel in either case as expressing much of an opinion about suffering, since I feel it roughly equally when I believe suffering is present and when I don't.

Replies from: Kawoomba
comment by Kawoomba · 2013-06-13T21:40:37.021Z · LW(p) · GW(p)

So if you can factor-out, so to speak, the actual animal suffering: If you had to choose between "watch that video, no animal was harmed" versus "watch that video, an animal was harmed, also you get a biscuit (not the food, the 100 squid (not the animals, the pounds (not the weight unit, the monetary unit)))", which would you choose? (Your feelings would be the same, as you say, your decision probably wouldn't be. Just checking.)

Replies from: Qiaochu_Yuan, ciphergoth, TheOtherDave
comment by Qiaochu_Yuan · 2013-06-13T22:57:59.965Z · LW(p) · GW(p)

you get a biscuit (not the food, the 100 squid (not the animals, the pounds (not the weight unit, the monetary unit)))

What?

Replies from: Eliezer_Yudkowsky, Vaniver
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T23:51:05.342Z · LW(p) · GW(p)

A biscuit provides the same number of calories as 100 SQUID, which stands for Superconducting Quantum Interference Device, which weigh a pound apiece, which masses 453.6 grams, which converts to 4 * 10^16 joules, which can be converted into 1.13 * 10^10 kilowatt-hours, which are worth 12 cents per kW-hr, so around 136 billion dollars or so.

Replies from: TheOtherDave, Kawoomba
comment by TheOtherDave · 2013-06-14T00:23:49.851Z · LW(p) · GW(p)

...plus a constant.

comment by Kawoomba · 2013-06-14T06:08:55.522Z · LW(p) · GW(p)

Reminds me of ... Note the name of the website. She doesn't look happy! "I am altering the deal. Pray I don't alter it any further."

Edit: Also, 1.13 * 10^10 kilowatt-hours at 12 cents each yields 1.36 billion dollars, not 136 billion dollars! An honest mistake (cents, not dollars per kWh), or a scam? And as soon as Dmitry is less active ...
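
As a sketch of the arithmetic (the only inputs are the standard mass-energy relation, 1 lb = 0.4536 kg, and the 12 cents/kWh price quoted above): the quoted intermediate figures correspond to a single one-pound SQUID and price out to roughly $1.36 billion, while taking all 100 SQUID gives roughly $136 billion.

```python
# Back-of-the-envelope check of the mass-energy figures quoted above.
# Assumptions: 1 lb = 0.4536 kg, c = 2.998e8 m/s, electricity at the
# quoted price of $0.12 per kWh.

C = 2.998e8            # speed of light, m/s
KG_PER_LB = 0.4536
JOULES_PER_KWH = 3.6e6
PRICE_PER_KWH = 0.12   # dollars

def mass_energy_dollars(kg):
    """E = m c^2, converted to kWh and priced at $0.12/kWh."""
    joules = kg * C ** 2
    return joules / JOULES_PER_KWH * PRICE_PER_KWH

print(mass_energy_dollars(KG_PER_LB))        # one 1-lb SQUID: ~1.36e9  (about $1.4 billion)
print(mass_energy_dollars(100 * KG_PER_LB))  # all 100 SQUID:  ~1.36e11 (about $136 billion)
```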

comment by Vaniver · 2013-06-14T00:24:17.446Z · LW(p) · GW(p)

"squid" is slang for a GBP, i.e. Pound Sterling, although I'm more used to hearing the similar "quid." One hundred of them can be referred to as a "biscuit," apparently because of casino chips, similar to how people in America will sometimes refer to a hundred dollars as a "benjamin."

That is, what are TheOtherDave's preferences between watching an unsettling movie that does not correspond to reality and watching an unsettling movie that does correspond to reality, but they're paid some cash.

Replies from: ciphergoth, TheOtherDave, Qiaochu_Yuan
comment by Paul Crowley (ciphergoth) · 2013-06-14T16:56:47.524Z · LW(p) · GW(p)

"Quid" is slang, "squid" is a commonly used jokey soundalike. There's a joke that ends "here's that sick squid I owe you".

EDIT: also, never heard "biscuit" = £100 before; that's a "ton".

Replies from: Vaniver
comment by Vaniver · 2013-06-14T18:12:06.331Z · LW(p) · GW(p)

"squid" is a commonly used jokey soundalike.

Does Cockney rhyming slang not count as slang?

Replies from: wedrifid
comment by wedrifid · 2013-06-14T19:46:49.000Z · LW(p) · GW(p)

Does Cockney rhyming slang not count as slang?

In this case it seems to. It's the first time I recall encountering it but I'm not British and my parsing of unfamiliar and 'rough' accents is such that if I happened to have heard someone say 'squid' I may have parsed it as 'quid', and discarded the 's' as noise from people saying a familiar term in a weird way rather than a different term.

comment by TheOtherDave · 2013-06-14T00:42:26.717Z · LW(p) · GW(p)

It amuses me that despite making neither head nor tail of the unpacking, I answered the right question.
Well, to the extent that my noncommittal response can be considered an answer to any question at all.

comment by Qiaochu_Yuan · 2013-06-14T00:26:11.205Z · LW(p) · GW(p)

Well, I figured that much out from googling, but I was more reacting to what seems like a deliberate act of obfuscation on Kawoomba's part that serves no real purpose.

Replies from: Vaniver, Kawoomba
comment by Vaniver · 2013-06-14T00:27:49.725Z · LW(p) · GW(p)

Nested parentheses are their own reward, perhaps?

comment by Kawoomba · 2013-06-14T07:46:31.830Z · LW(p) · GW(p)

In an interesting twist, in many social circles (not here) your use of the word "obfuscation" would be obfuscatin' in itself.

To be very clear though: "Eschew obfuscation, espouse elucidation."

comment by Paul Crowley (ciphergoth) · 2013-06-14T17:00:11.031Z · LW(p) · GW(p)

So to be clear - you do some Googling and find two videos, one has realistic CGI animal harm, the other real animal harm; assume the CGI etc is so good that I wouldn't be able to tell which was which if you hadn't told me. You don't pay for the animal harm video, or in any way give anyone an incentive to harm an animal in fetching it; just pick up a pre-existing one. I have a choice between watching the fake-harm video (and knowing it's fake) or watching the real-harm video and receiving £100.

If the reward is £100, I'll take the £100; if it's an actual biscuit, I prefer to watch the fake-harm video.

comment by TheOtherDave · 2013-06-13T22:16:28.891Z · LW(p) · GW(p)

I'm genuinely unsure, not least because of your perplexing unpacking of "biscuit".

Both examples are unpleasant; I don't have a reliable intuition as to which is more so if indeed either is.

I have some vague notion that if I watch the real-harm video that might somehow be interpreted as endorsing real-harm more strongly than if I watch the fake-harm video, like through ratings or download monitoring or something, which inclines me to the fake-harm video. Though whether I'm motivated by the vague belief that such differential endorsement might cause more harm to animals, or by the vague belief that it might cause more harm to my status, I'm again genuinely unsure of. In the real world I usually assume that when I'm not sure it's the latter, but this is such a contrived scenario that I'm not confident of that either.

If I assume the biscuit is a reward of some sort, then maybe that reward is enough to offset the differential endorsement above, and maybe it isn't.

comment by Qiaochu_Yuan · 2013-06-13T19:47:26.542Z · LW(p) · GW(p)

I don't want to see animals get tortured because that would be an unpleasant thing to see, but there are lots of things I think are unpleasant things to see that don't have moral valence (in another comment I gave the example of seeing corpses get raped).

I might also be willing to assign dolphins and monkeys moral value (I haven't made up my mind about this), but not most animals.

Replies from: CoffeeStain
comment by CoffeeStain · 2013-06-13T20:25:10.699Z · LW(p) · GW(p)

Do you have another example besides the assault of corpses? I can easily see real moral repugnance from the effect it has on the offenders, who are victims of their own actions. If you find it unpleasant only when you see it, would they not find it horrific when they perform it?

Also in these situations, repugnance can leak due to uncertainty of other real moral outcomes, such as the (however small) likelihood of family members of the deceased learning of the activity, for whom these corpses have real moral value.

Replies from: army1987, Qiaochu_Yuan
comment by A1987dM (army1987) · 2013-06-14T21:01:25.133Z · LW(p) · GW(p)

Do you have another example besides the assault of corpses?

Two Girls One Cup?

comment by Qiaochu_Yuan · 2013-06-13T21:49:29.451Z · LW(p) · GW(p)

Seeing humans perform certain kinds of body modifications would also be deeply unpleasant to me, but it's also not an act I assign moral valence to (I think people should be allowed to modify their bodies more or less arbitrarily).

comment by Said Achmiz (SaidAchmiz) · 2013-06-13T20:16:54.733Z · LW(p) · GW(p)

I'll chime in to comment that QiaochuYuan's[1] views as expressed in this entire thread are quite similar to my own (with the caveat that for his "human" I would substitute something like "sapient, self-aware beings of approximately human-level intelligence and above" and possibly certain other qualifiers having to do with shared values, to account for Yoda/Spock/AIs/whatever; it seems like QiaochuYuan uses "approximately human" to mean roughly this).

So, please reconsider your disbelief.

[1] Sorry, the board software is doing weird things when I put in underscores...

Replies from: shminux, KatieHartman
comment by Shmi (shminux) · 2013-06-13T20:24:20.801Z · LW(p) · GW(p)

So, presumably you don't keep a pet, and if you did, you would not care for its well-being?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-13T20:38:44.102Z · LW(p) · GW(p)

Indeed, I have no pets.

If I did have a pet, it is possible that I would not care for it (assuming animal cruelty laws did not exist), although it is more likely that I would develop an attachment to it, and would come to care about its well-being. That is how humans work, in my experience. I don't think this necessarily has any implications w.r.t. the moral status of nonhuman animals.

comment by KatieHartman · 2013-06-17T02:52:17.223Z · LW(p) · GW(p)

Do you consider young children and very low-intelligence people to be morally-relevant?

(If - in the case of children - you consider potential for later development to be a key factor, we can instead discuss only children who have terminal illnesses.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-17T05:21:41.478Z · LW(p) · GW(p)

Good question. Short answer: no.

Long answer: When I read Peter Singer, what I took away was not, as many people here apparently did, that we should value animals; what I took away is that we should not value fetuses, newborns, and infants (to a certain age, somewhere between 0 and 2 years [1]). That is, I think the cutoff for moral relevance is somewhere above, say, cats, dogs, newborns... where exactly? I'm not sure.

Humans who have a general intelligence so low that they are incapable of thinking about themselves as conscious individuals are also, in my view, not morally relevant. I don't know whether such humans exist (most people with Down syndrome don't quite seem to fit that criterion, for instance).

There are many caveats and edge cases, for instance: what if the low-intelligence condition is temporary, and will repair itself with time? Then I think we should consider the wishes of the self that the person was before the impairment, and the rights of their future, non-impaired, selves. But what if the impairment can be repaired using medical technology? Same deal. What if it can't? Then I would consider this person morally irrelevant. What if the person was of extremely low intelligence, and had always been so, but we could apply some medical intervention to raise their intelligence to at least normal human level? I would consider that act morally equivalent to creating a new sapient being (whether that's good or bad is a separate question).

So: it's complicated. But to answer practical questions: I don't consider infanticide the moral equivalent of murder (although it's reasonable to outlaw it anyway, as birth is a good Schelling point, but the penalty should surely be nowhere near as harsh as for killing an adult or older child). The rights of low-intelligence people are a harder issue, partly because there are no obvious cutoffs or metrics.

I hope that answers your question; if not, I'll be happy to elaborate further.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T19:21:45.972Z · LW(p) · GW(p)

Ethical generalizations check: Do you care about Babyeaters? Would you eat Yoda?

Replies from: wedrifid, Qiaochu_Yuan, Jiro
comment by wedrifid · 2013-06-14T05:05:07.762Z · LW(p) · GW(p)

Would you eat Yoda?

Would that allow absorbing some of his midichlorians? Black magic! Well, I might try (since he died of natural causes anyway). But Yoda dies without leaving a corpse. It would be difficult. The only viable strategy would seem to be to have Yoda anesthetize himself a minute before he ghosts ("becomes one with the force"). Then the flesh would remain corporeal for consumption.

The real ethical test would be: would I freeze Yoda's head in carbonite, acquire brain scanning technology, and upload him into a robot body? Yoda may have religious objections to the practice, so I may honour his preferences while being severely disappointed. I suspect I'd choose the Dark Side of the Force myself. The Sith philosophy seems much more compatible with life extension by whatever means necessary.

Replies from: CCC, Kawoomba, nshepperd
comment by CCC · 2013-06-14T08:51:07.224Z · LW(p) · GW(p)

It should be noted that Yoda has an observable afterlife. Obi-wan had already appeared after his body had died, apparently in full possession of his memories and his reasoning abilities; Yoda proposes to follow in Obi-wan's footsteps, and has good reason to believe that he will be able to do so.

comment by Kawoomba · 2013-06-14T07:54:48.631Z · LW(p) · GW(p)

Sith philosophy, for reference:

Peace is a lie, there is only passion.

Through passion, I gain strength.

Through strength, I gain power.

Through power, I gain victory.

Through victory, my chains are broken.

The Force shall free me.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-14T14:05:04.681Z · LW(p) · GW(p)

Actual use of Sith techniques seems to turn people evil at ridiculously accelerated rates. At least in-universe it seems that sensible people would write off this attractive-sounding philosophy as window dressing on an extremely damaging set of psychic techniques.

comment by nshepperd · 2013-06-14T15:27:03.881Z · LW(p) · GW(p)

If you're lucky, it might grant intrinsic telepathy, as long as the corpse is relatively fresh.

comment by Qiaochu_Yuan · 2013-06-13T19:30:24.865Z · LW(p) · GW(p)

Nope (can't parse them as approximately human without revulsion). Nope (approximately human).

comment by Jiro · 2013-06-14T19:35:04.725Z · LW(p) · GW(p)

I wouldn't eat flies or squids either. But I know that that's a cultural construct.

Let's ask another question: would I care if someone else eats Yoda?

Well, I might, but only because eating Yoda is, in practice, correlated with lots of other things I might find undesirable. If I could be assured that such was not the case (for instance, if there was another culture which ate the dead to honor them, that's why he ate Yoda, and Yoda's will granted permission for this), then no, I wouldn't care if someone else eats Yoda.

Replies from: wedrifid
comment by wedrifid · 2013-06-14T19:58:08.551Z · LW(p) · GW(p)

Well, I might, but only because eating Yoda is, in practice, correlated with lots of other things I might find undesirable.

In practice? In common Yoda-eating practice? Something about down-to-earth 'in practice' empirical observations about things that cannot possibly have ever occurred strikes me as broken. Perhaps "would be, presumably, correlated with".

If I could be assured that such was not the case (for instance, if there was another culture which ate the dead to honor them, that's why he ate Yoda, and Yoda's will granted permission for this), then no, I wouldn't care if someone else eats Yoda.

In Yoda's case he could even have just asked for permission from Yoda's force ghost. Jedi add a whole new level of meaning to "Living Will".

Replies from: Jiro
comment by Jiro · 2013-06-14T20:29:17.891Z · LW(p) · GW(p)

In practice? In common Yoda-eating practice?

"In practice" doesn't mean "this is practiced", it means "given that this is done, what things are, with high probability, associated with it in real-life situations" (or in this case, real-life-+-Yoda situations). "In practice" can apply to rare or unique events.

Replies from: Qiaochu_Yuan, wedrifid
comment by Qiaochu_Yuan · 2013-06-14T20:47:38.797Z · LW(p) · GW(p)

I really don't think statements of the form "X is, in practice, correlated with Y" should apply to situations where X has literally never occurred. You might want to say "I expect that X would, in practice, be correlated with Y" instead.

Replies from: Jiro
comment by Jiro · 2013-06-14T22:02:04.831Z · LW(p) · GW(p)

All events have never occurred if you describe them with enough specificity; I've never eaten this exact sandwich on this exact day.

While nobody has eaten Yoda before, there have been instances where people have eaten beings that could talk intelligently.

comment by wedrifid · 2013-06-15T15:06:38.818Z · LW(p) · GW(p)

"In practice" doesn't mean "this is practiced", it means "given that this is done, what things are, with high probability, associated with it in real-life situations" (or in this case, real-life-+-Yoda situations). "In practice" can apply to rare or unique events.

I share Qiaochu's reasoning.

comment by Peter Wildeford (peter_hurford) · 2013-06-13T01:54:11.320Z · LW(p) · GW(p)

What are the strongest arguments you can offer me in favor of caring about animal suffering to the point that I would be willing to incur the costs involved in becoming more vegetarian?

I am a moral anti-realist, so I don't think there's any argument I could give you to persuade you to change your values. To me, it feels very inconsistent to not value animals -- it sounds to me exactly like someone who wants to know an argument for why they ought to care about foreigners.

Also, do you really not value animals? I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction. Though maybe you wouldn't, or you would think the reaction irrational? I don't know.

However, if you really do care about humans and humans alone, the environmental argument still has weight, though certainly less.

~

Also, meat is delicious and contains protein.

One can get both protein and deliciousness from non-meat sources.

~

Alternatively, how much would you be willing to pay me to stop eating meat?

I'm not sure. I don't think there's a way I could make that transaction work.

Replies from: Vaniver, Qiaochu_Yuan, Vladimir_Nesov, army1987, Larks
comment by Vaniver · 2013-06-13T02:23:25.189Z · LW(p) · GW(p)

Also, do you really not value animals? I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction.

Some interesting things about this example:

  1. Distance seems to have a huge impact when it comes to the bystander effect, and it's not clear that it's irrational. If you are the person who is clearly best situated to save a puppy from torture, that seems different from the fact that dogs are routinely farmed for meat in other parts of the world, by armies of people you could not hope to personally defeat or control.

  2. Someone who is willing to be sadistic to animals might be sadistic towards humans as well, and so they may be a poor choice to associate with (and possibly a good choice to anti-associate with).

  3. Many first world countries have some sort of law against bestiality. (In the US, this varies by state.) However, any justification for these laws based on the rights of the animals would also rule out related behavior in agribusiness, which is generally legal. There seems to be a difference between what people are allowed to do for fun and what they're allowed to do for profit; this makes sense in light of viewing the laws as not against actions, but kinds of people.

comment by Qiaochu_Yuan · 2013-06-13T06:09:51.044Z · LW(p) · GW(p)

To me, it feels very inconsistent to not value animals -- it sounds to me exactly like someone who wants to know an argument for why they ought to care about foreigners.

Well, and what would you say to someone who thought that?

Also, do you really not value animals?

I don't know. It doesn't feel like I do. You could try to convince me that I do even if you're a moral anti-realist. It's plausible I just haven't spent enough time around animals.

I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction.

Probably. I mean, all else being equal I would prefer that an animal not be tortured, but in the case of farming and so forth all else is not equal. Also, like Vaniver said, any negative reaction I have directed at the person is based on inferences I would make about that person's character, not based on any moral weight I directly assign to what they did. I would also have some sort of negative reaction to someone raping a corpse, but it's not because I value corpses.

One can get both protein and deliciousness from non-meat sources.

My favorite non-meat dish is substantially less delicious than my favorite meat dish. I do currently get a decent amount of protein from non-meat sources, but asking someone who gets their protein primarily from meat to give up meat means asking them to incur a cost in finding and purchasing other sources of protein, and that cost needs to be justified somehow.

I'm not sure. I don't think there's a way I could make that transaction work.

Really? This can't be that hard a problem to solve. We could use a service like Fiverr, with you paying me $5 not to eat meat for some period of time.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-13T06:37:11.069Z · LW(p) · GW(p)

And what would you say to someone who thought that [we shouldn't value the lives of foreigners]?

Right now, I don't know. I feel like it would be playing a losing game. What would you say?

You could try to convince me that I do [value nonhuman animals] even if you're a moral anti-realist.

I'm not sure how I would do that. Would you kick a puppy? If not, why not?

We could use a service like Fiverr, with you paying me $5 not to eat meat for some period of time.

How could I verify that you actually refrain from eating meat?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-13T07:25:32.043Z · LW(p) · GW(p)

Right now, I don't know. I feel like it would be playing a losing game. What would you say?

I would probably say something like "you just haven't spent enough time around them. They're less different from you than you think. Get to know them, and you might come to see them as not much different from the people you're more familiar with." In other words, I would bet on the psychological unity of mankind. Some of this argument applies to my relationship with the smarter animals (e.g. maybe pigs) but not to the dumber ones (e.g. fish). Although I'm not sure how I would go about getting to know a pig.

I'm not sure how I would do that. Would you kick a puppy? If not, why not?

No. Again, all else being equal, I would prefer that animals not suffer, but in the context of reducing animal suffering coming from human activity like farming, all else is not equal. I wouldn't chop down a tree either, but it's not because I think trees have moral value, and I don't plan to take any action against the logging industry as a result.

How could I verify that you actually refrain from eating meat?

Oh, that's what you were concerned about. It would be beneath my dignity to lie for $5, but if that isn't convincing, then I dunno. (On further thought, this seems like a big problem for measuring the actual impact of any proposed vegetarian proselytizing. How can you verify that anyone actually refrains from eating meat?)

Replies from: DavidAgain
comment by DavidAgain · 2013-06-13T14:35:19.912Z · LW(p) · GW(p)

"No. Again, all else being equal, I would prefer that animals not suffer, but in the context of reducing animal suffering coming from human activity like farming, all else is not equal. I wouldn't chop down a tree either, but it's not because I think trees have moral value, and I don't plan to take any action against the logging industry as a result."

All else is never precisely equal. If I offered you £100 to do one of these of your choice, would you rather (a) give up meat for a month or (b) beat a puppy to death?

I suspect that the vast majority of people who eat battery chicken to save a few dollars would require much more money to directly cause the same sort of suffering to a chicken. Whereas when it came to chopping down trees it would be more a matter of whether the cash was worth the effort. Of course, it could very easily be that the problem here is not with Person A (detached, callous eater of battery chicken) but with Person B (overempathic, anthropomorphic person who doesn't like to see chickens suffering), but the contrast is quite telling.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-13T15:13:42.325Z · LW(p) · GW(p)

For what it's worth, I also wouldn't treat painlessly and humanely slaughtering a chicken who has lived a happy and fulfilled life with my own hands as equivalent to paying someone else to do so where I don't have to watch. There's quite a contrast there, as well, but it seems to have little to do with suffering.

That said, I would almost undoubtedly prefer watching a chicken be slaughtered painlessly and humanely to watching it suffer while being slaughtered.
Probably also to watching it suffer while not being slaughtered.

Mostly, I conclude that my preferences about what I want to do, what I want to watch, and what I want to have done on my behalf, are not well calibrated to one another.

Replies from: DavidAgain
comment by DavidAgain · 2013-06-13T15:27:59.875Z · LW(p) · GW(p)

Yeah, that's the only clear conclusion. The general approach of moral argument is to try to say that one of your intuitions (whether the not caring about it being killed offstage or not enjoying throttling it) is the true/valid one and the others should be overruled. Honestly not sure where I stand on this.

Replies from: SaidAchmiz, TheOtherDave
comment by Said Achmiz (SaidAchmiz) · 2013-06-13T15:54:09.448Z · LW(p) · GW(p)

I don't think that "not enjoying killing a chicken" should be described as an "intuition". Moral intuitions generally take the form of "it seems to me that / I strongly feel that so-and-so is the right thing to do / the wrong thing to do / bad / good / etc." What you do or do not enjoy doing is a preference, like enjoying chocolate ice cream, not enjoying ice skating, being attracted to blondes, etc. Preferences can't be "true" or "false", they're just facts about your mental makeup. (It may make sense to describe a preference as "invalid" in certain senses, however, but not obviously any senses relevant to this current discussion.)

So for instance "I think killing a chicken is morally ok" (a moral intuition) and "I don't like killing chickens" (a preference) do not conflict with each other any more than "I think homosexuality is ok" and "I am heterosexual" conflict with each other, or "Being a plumber is ok (and in fact plumbers are necessary members of society)" and "I don't like looking inside my plumbing".

Now, if you wanted to take this discussion to a slightly more subtle level, you might say: "This is different! Killing chickens causes in me a kind of psychic distress usually associated with witnessing or performing acts that I also consider to be immoral! Surely this is evidence that this, too, is immoral?" To that I can respond only that yes, this may be evidence in the strict Bayesian sense, but the signals your brain generates may be flawed. We should evaluate the ethical status of the act in question explicitly; yes, we should take moral intuitions into account, but my intuition, at least, is that chicken-killing is fine, despite having no desire to do it myself. This screens off the "agh I don't want to do/watch this!" signal.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-13T16:30:44.148Z · LW(p) · GW(p)

The dividing lines between the kinds of cognitive states I'm inclined to call "moral intuitions" and the kinds of cognitive states I'm inclined to call "preferences" and the kinds of cognitive states I'm inclined to call "psychic distress" are not nearly as sharp, in my experience, as you seem to imply here. There's a lot of overlap, and in particular the states I enter surrounding activities like killing animals (especially cute animals with big eyes) don't fall crisply into just one category.

But, sure, if we restrict the discussion to activities where those categories are crisply separated, those distinctions are very useful.

comment by TheOtherDave · 2013-06-13T16:48:56.373Z · LW(p) · GW(p)

The general approach of moral argument is to try to say that one of your intuitions (whether the not caring about it being killed offstage or not enjoying throttling it) is the true/valid one and the others should be overruled.

Mm. If you mean to suggest that the outcome of moral reasoning is necessarily that one of my intuitions gets endorsed, then I disagree; I would expect worthwhile moral reasoning to sometimes endorse claims that my intuition didn't provide in the first place, as well as claims that my intuitions consistently reject.

In particular, when my moral intuitions conflict (or, as SaidAchmiz suggests, when the various states that I have a hard time cleanly distinguishing from my moral intuitions despite not actually being any such thing conflict), I usually try to envision patterning the world in different ways that map in some fashion to some weighting of those states, ask myself what the expected end result of that patterning is, see whether I have clear preferences among those expected endpoints, work backwards from my preferred endpoint to the associated state-weighting, and endorse that state-weighting.

The result of that process is sometimes distressingly counter-moral-intuitive.

Replies from: DavidAgain
comment by DavidAgain · 2013-06-14T07:09:47.644Z · LW(p) · GW(p)

Sorry, I was unclear: I meant moral (and political) arguments from other people - moral rhetoric if you like - often takes that form.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-14T16:40:52.157Z · LW(p) · GW(p)

Ah, gotcha. Yeah, that's true.

comment by Vladimir_Nesov · 2013-06-13T07:09:10.292Z · LW(p) · GW(p)

I am a moral anti-realist, so I don't think there's any argument I could give you to persuade you to change your values.

The relevant sense of changing values is change of someone else's purposeful behavior. The philosophical classification of your views doesn't seem like useful evidence about that possibility.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-13T12:42:46.917Z · LW(p) · GW(p)

I don't understand what that means for my situation, though. How am I supposed to argue him out of his current values?

I mean, it's certainly possible to change someone's values through anti-realist argumentation. My values were changed in that way several times. But I don't know how to do it.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-06-13T14:24:22.811Z · LW(p) · GW(p)

How am I supposed to argue him out of his current values?

This is a separate question. I was objecting to the relevance of invoking anti-realism in connection with this question, not to the bottom line where that argument pointed.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-17T04:32:10.186Z · LW(p) · GW(p)

If moral realism were true, there would be a very obvious path to arguing someone out of their values -- argue for the correct values. In my experience, when people want an argument to change their values, they want an argument for what the correct value is, assuming moral realism.

Moral anti-realism certainly complicates things.

comment by A1987dM (army1987) · 2013-06-15T19:04:21.428Z · LW(p) · GW(p)

I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction.

That doesn't necessarily mean that I have animals being tortured as a negative terminal value: I might only dislike that because it generates negative warm fuzzies.

Replies from: MugaSofer
comment by MugaSofer · 2013-06-15T22:58:40.071Z · LW(p) · GW(p)

This also applies to foreigners, though.

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-15T23:35:32.060Z · LW(p) · GW(p)

Well, it also applies to blood relatives, for that matter.

comment by Larks · 2013-06-13T08:52:46.021Z · LW(p) · GW(p)

To me, it feels very inconsistent to not value animals -- it sounds to me exactly like someone who wants to know an argument for why they ought to care about foreigners.

Unfortunately, the typical argument in favour of caring about foreigners, people of other races, etc., is that they are human too.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-06-13T09:03:51.040Z · LW(p) · GW(p)

If distinct races were instead distinct human subspecies or closely-related species, would the moral case for treating these groups equally ipso facto collapse?

If not, then 'they're human too' must be a stand-in for some other feature that's really doing the pushing and pulling of our moral intuitions. At the very least, we need to taboo 'human' to figure out what the actual relevant concept is, since it's not the standard contemporary biological definition.

Replies from: CCC, Larks, None
comment by CCC · 2013-06-13T09:56:01.741Z · LW(p) · GW(p)

In my case, I think that the relevant concept is human-level (or higher) intelligence. Of all the known species on Earth, humanity is the only one that I know to possess human-level or higher intelligence.

One potentially suitable test for human-level intelligence is the Turing test; due to their voice-mimic abilities, a parrot or a mynah bird may sound human at first, but it will not in general pass a Turing test.

Biological engineering on an almost-sufficiently-intelligent species (such as a dolphin) may lead to another suitably intelligent species with very little relation to a human.

Replies from: RobbBB, MugaSofer
comment by Rob Bensinger (RobbBB) · 2013-06-13T10:04:28.699Z · LW(p) · GW(p)

That different races have effectively the same intellectual capacities is surely an important part of why we treat them as moral equals. But this doesn't seem to me to be entirely necessary — young children and the mentally handicapped may deserve most (though not all) moral rights, while having a substantially lower level of intelligence. Intelligence might also turn out not to be sufficient; if a lot of why we care about other humans is that they can experience suffering and pleasure, and if intelligent behavior is possible without affective and evaluative states like those, then we might be able to build an AI that rivaled our intelligence but did not qualify as a moral patient, or did not qualify as one to the same extent as less-intelligent-but-more-suffering-prone entities.

comment by MugaSofer · 2013-06-15T23:11:23.056Z · LW(p) · GW(p)

Clearly, below-human-average intelligence is still worth something ... so is there a cutoff point or what?

(I think you're onto something with "intelligence", but since intelligence varies, shouldn't how much we care vary too? Shouldn't there be some sort of sliding scale?)

Replies from: CCC
comment by CCC · 2013-06-17T09:18:29.271Z · LW(p) · GW(p)

That's a very good question.

I don't know.

Thinking through my mental landscape, I find that in most cases I value children (slightly) above adults. I think that this is more a matter of potential than anything else. I also put some value on an unborn human child, which could reasonably be said to have no intelligence at all (especially early on).

So, given that, I think that I put some fairly significant value on potential future intelligence as well as on present intelligence.

But, as you point out, below-human intelligence is still worth something.

...

I don't think there's really a firm cutoff point, such that one side is "worthless" and the other side is "worthy". It's a bit like a painting.

At one time, there's a blank canvas, a paintbrush, and a pile of tubes of paint. At this point, it is not a painting. At a later time, there's a painting. But there isn't one particular moment, one particular stroke of the brush, when it goes from "not-a-painting" to "painting". Similarly for intelligence; there isn't any particular moment when it switches automatically from "worthless" to "worthy".

If I'm going to eat meat, I have to find the point at which I'm willing to eat it by some other means than administering I.Q. tests (especially as, when I'm in the supermarket deciding whether or not to purchase a steak, it's a bit late to administer any tests to the cow). Therefore, I have to use some sort of proxy measurement with correlation to intelligence instead. For the moment, i.e. until some other species is proven to have human-level or near-human intelligence, I'm going to continue to use 'species' as my proxy measurement.

comment by Larks · 2013-06-14T09:44:14.834Z · LW(p) · GW(p)

See Arneson's What, if anything, renders all humans morally Equal?

edit: can't get the syntax to work, but here's the link: www.philosophyfaculty.ucsd.edu/faculty/rarneson/singer.pdf

comment by [deleted] · 2013-06-13T14:20:48.967Z · LW(p) · GW(p)

So what do you think of 'sapient' as a taboo for 'human'? Necessary conditions on sapience will, I suppose, be things like language use and sensation. As for those mentally handicapped enough to fall below sapience, I'm willing to bite the bullet on that so long as we're willing to discuss indirect reasons for according something moral respect. Something along the lines of Kant's claim that cruelty to animals is wrong not because of the rights of the animal (who has none) but because wantonly harming a living thing damages the moral faculties of the agent.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-06-13T19:40:37.153Z · LW(p) · GW(p)

How confident are you that beings capable of immense suffering, but who haven't learned any language, all have absolutely no moral significance? That we could (as long as it didn't damage our empathy) brutally torture an arbitrarily large number of languageless beings for their entire lifetimes and never even cause as much evil as would one momentary dust speck to a language-user (who meets the other sapience conditions as well)?

I don't see any particular reason for this to be the case, and again the risks of assuming it and being wrong seem much greater than the risks of assuming its negation and being wrong.

Replies from: None
comment by [deleted] · 2013-06-13T22:23:57.940Z · LW(p) · GW(p)

That we could (as long as it didn't damage our empathy) brutally torture an arbitrarily large number of languageless beings for their entire lifetimes and never even cause as much evil as would one momentary dust speck to a language-user (who meets the other sapience conditions as well)?

I'm not committed to this, or anything close. What I'm committed to is the ground of moral respect being sapience, and whatever story we tell about the moral respect accorded to non-sapient (but, say, sentient) beings is going to relate back to the basic moral respect we have for sapience. This is entirely compatible with regarding sentient non-language-users as worthy of protection, etc. In other words, I didn't intend my suggestion about a taboo replacement to settle the moral-vegetarian question. It would be illicit to expect a rephrasing of the problem to do that.

So to answer your question:

How confident are you that beings capable of immense suffering, but who haven't learned any language, all have absolutely no moral significance?

I donno, I didn't claim that they had no moral significance. I am pretty sure that if the universe consisted only of sentient but no sapient beings, I would be at a loss as to how we should discuss moral significance.

Replies from: elharo, RobbBB, Eugine_Nier
comment by elharo · 2013-06-15T12:37:01.907Z · LW(p) · GW(p)

"Sapience" is not a crisp category. Humans are more sapient than chimpanzees, crows, and dogs. Chimpanzees, crows, and dogs are more sapient than house cats and fish. Some humans are more or less sapient than other humans.

Suppose one day we encounter a non-human intelligent species that is to us as we are to chimpanzees. Would such a species be justified in considering us as non-sapient and unworthy of moral respect?

I don't think sapience and/or sentience is necessarily a bad place to start. However I am very skeptical of attempts to draw hard lines that place all humans in one set, and everything else on Earth in another.

Replies from: None
comment by [deleted] · 2013-06-15T15:31:20.010Z · LW(p) · GW(p)

"Sapience" is not a crisp category.

Well, I was suggesting a way of making it pretty crisp: it requires language use. None of those other animals can really do that. But to the extent that they might be trained to do so, I'm happy to call those animals sapient. What's clear is that, for example, dogs, cows, or chickens are not at all sapient by this standard.

Would such a species be justified in considering us as non-sapient and unworthy of moral respect?

No, but I think the situation you describe is impossible. That intelligent species (assuming they understood us well enough to make this judgement) would recognize that we're language-users. Chimps aren't.

Replies from: elharo, MugaSofer
comment by elharo · 2013-06-15T16:23:27.868Z · LW(p) · GW(p)

Sorry, still not crisp. If you're using sapience as a synonym for language, language is not a crisp category either. Crows and elephants have demonstrated abilities to communicate with other members of their own species. Chimpanzees can be taught enough language to communicate bidirectionally with humans. Exactly what this means for animal cognition and intelligence is a matter of much dispute among scientists, as is whether animals can really be said to use language or not; but the fact that it is disputed should make it apparent that the answer is not obvious or self-evident. It's a matter of degree.

Ultimately this just seems like a veiled way to specially privilege humans, though not all of them. Is a stroke victim with receptive aphasia nonsapient? You might equally well pick the use of tools to make other tools, or some other characteristic to draw the line where you've predetermined it will be drawn; but it would be more honest to simply state that you privilege Homo sapiens sapiens, and leave it at that.

Replies from: None
comment by [deleted] · 2013-06-16T00:52:09.099Z · LW(p) · GW(p)

If you're using sapience as a synonym for language, language is not a crisp category either.

Not a synonym. Language use is a necessary condition. And by 'language use' I don't mean 'ability to communicate'. I mean more strictly something able to work with things like syntax and semantics and concepts and stuff. We've trained animals to do some pretty amazing things, but I don't think any, or at least not more than a couple, are really language users. I'm happy to recognize the moral worth of any there are, and I'm happy to recognize a gradient of worth on the basis of a gradient of sapience. I don't think anything we've encountered comes close to human beings on such a gradient, but that might just be my ignorance talking.

Ultimately this just seems like a veiled way to specially privilege humans,

It's not veiled! I think humans are privileged, special, better, more significant, etc. And I'm not picking an arbitrary part of what it means to be human. I think this is the very part that, were we to find it in a computer or an alien or an animal, would immediately lead us to conclude that this being had moral worth.

comment by MugaSofer · 2013-06-15T23:03:18.534Z · LW(p) · GW(p)

Are you seriously suggesting that the difference between someone you can understand and someone you can't matters just as much as the difference between me and a rock? Do you think your own moral worth would vanish if you were unable to communicate with me?

Replies from: None
comment by [deleted] · 2013-06-16T00:44:17.090Z · LW(p) · GW(p)

Yes, I'm suggesting both, on a certain reading of 'can' and 'unable'. If I were, in principle, incapable of communicating with anyone (in the way worms are) then my moral worth, or anyway the moral worth accorded to sapient beings on the basis of their being sapient on my view, would disappear. I might have moral worth for other reasons, though I suspect these will come back to my holding some important relationship to sapient beings (like formerly being one).

If you are asking whether my moral worth would disappear if I, a language user, were by some twist of fate made unable to communicate, then my moral worth would not disappear (since I am still a language user).

comment by Rob Bensinger (RobbBB) · 2013-06-15T08:19:15.413Z · LW(p) · GW(p)

The goal of defining 'human' (and/or 'sapient') here is to steel-man (or at least better understand) the claim that only human suffering matters, so we can evaluate it. If "language use and sensation" end up only being necessary or sufficient for concepts of 'human' that aren't plausible candidates for the original 'non-humans aren't moral patients' claim, then they aren't relevant. The goal here isn't to come up with the one true definition of 'human', just to find one that helps with the immediate task of cashing out anthropocentric ethical systems.

I am pretty sure that if the universe consisted only of sentient but no sapient beings, I would be at a loss as to how we should discuss moral significance.

Well, you'd be at a loss because you either wouldn't exist or wouldn't be able to linguistically express anything. But we can still adopt an outsider's perspective and claim that universes with sentience but no sapience are better when they have a higher ratio of joy to suffering, or of preference satisfaction to preference frustration.

Replies from: None
comment by [deleted] · 2013-06-15T15:22:15.329Z · LW(p) · GW(p)

The goal here isn't to come up with the one true definition of 'human', just to find one that helps with the immediate task of cashing out anthropocentric ethical systems.

Right, exactly. Doing so, and defending an anthropocentric ethical system, does not entail that it's perfectly okay to subject sentient non-language users to infinite torture. It does probably entail that our reasons for protecting sentient non-language users (if we discover it ethically necessary to do so as anthropocentrists) will come down to anthropocentric reasons. This argument didn't begin as an attempt to steel-man the claim that only human suffering matters; it began as an attempt to steel-man the claim that the reason human suffering matters to us (when we have no other reason to care) is that it is specifically human suffering.

Another way to put this is that I'm defending, or trying to steel-man, the claim that the fact that a human's suffering is human gives us a reason all on its own to think that that suffering is ethically significant. While nothing about an animal's suffering being animal suffering gives us a reason all on its own to think that that suffering is ethically significant. We could still have other reasons to think it so, so the 'infinite torture' objection doesn't necessarily land.

Well, you'd be at a loss because you either wouldn't exist or wouldn't be able to linguistically express anything.

We can discuss that world from this one.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-06-15T15:30:00.805Z · LW(p) · GW(p)

Right, exactly. Doing so, and defending an anthropocentric ethical system, does not entail that it's perfectly okay to subject sentient non-language users to infinite torture.

You seem to be using 'anthropocentric' to mean 'humans are the ultimate arbiters or sources of morality'. I'm using 'anthropocentric' instead to mean 'only human experiences matter'. Then by definition it doesn't matter whether non-humans are tortured, except insofar as this also diminishes humans' welfare. This is the definition that seems relevant to Qiaochu's statement, "I am still not convinced that I should care about animal suffering." The question isn't why we should care; it's whether we should care at all.

It does probably entail that our reasons for protecting sentient non-language users (if we discover it ethically necessary to do so as anthropocentrists) will come down to anthropocentric reasons.

I don't think which reasons happen to psychologically motivate us matters here. People can have bad reasons to do good things. More interesting is the question of whether our good reasons would all be human-related, but that too is independent of Qiaochu's question.

This argument didn't begin as an attempt to steel-man the claim that only human suffering matters; it began as an attempt to steel-man the claim that the reason human suffering matters to us is that it is specifically human suffering.

No, the latter was an afterthought. The discussion begins here.

Replies from: None
comment by [deleted] · 2013-06-15T15:36:46.414Z · LW(p) · GW(p)

I'm using 'anthropocentric' instead to mean 'only human experiences matter'.

Ah, okay, to be clear, I'm not defending this view. I think it's a strawman.

I don't think which reasons happen to psychologically motivate us matters here.

I didn't refer to psychological reasons. An example besides Kant's (which is not psychological in the relevant sense) might be this: it is unethical to torture a cow because though cows have no ethical significance in and of themselves, they do have ethical significance as domesticated animals, who are wards of our society. But that's just an example of such a reason.

No, the latter was an afterthought. The discussion begins here.

I took the discussion to begin from Peter's response to that comment, since that comment didn't contain an argument, while Peter's did. It would be weird for me to respond to Qiaochu's request for an argument defending the moral significance of animal suffering by defending the idea that only human suffering is fundamental.

But this is getting to be a discussion about our discussion. I'm not tapping out, quite, but I would like us to move on to the actual conversation.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-06-15T15:54:49.020Z · LW(p) · GW(p)

It would be weird for me to respond to Qiaochu's request for an argument defending the moral significance of animal suffering by defending the idea that only human suffering is fundamental.

Not if you agreed with Qiaochu that no adequately strong reasons for caring about any non-human suffering have yet been presented. There's no rule against agreeing with an OP.

Replies from: None
comment by [deleted] · 2013-06-16T00:47:04.317Z · LW(p) · GW(p)

Fair point, though we might be reading Qiaochu differently. I took him to be saying "I know of no reasons to take animal suffering as morally significant, though this is consistent with my treating it as if it is and with its actually being so." I suppose you took him to be saying something more like "I don't think there are any reasons to take animal suffering as morally significant."

I don't have good reasons to think my reading is better. I wouldn't want to try and defend Qiaochu's view if the second reading represents it.

comment by Eugine_Nier · 2013-06-15T08:12:48.737Z · LW(p) · GW(p)

I dunno, I didn't claim that they had no moral significance. I am pretty sure that if the universe consisted only of sentient but not sapient beings, I would be at a loss as to how we should discuss moral significance.

If that was the case there would be no one to do the discussing.

Replies from: None
comment by [deleted] · 2013-06-15T14:54:58.490Z · LW(p) · GW(p)

If that was the case there would be no one to do the discussing.

Well, we could discuss that world from this one.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-16T06:20:29.594Z · LW(p) · GW(p)

Yes, and we could, for example, assign that world no moral significance relative to our world.

comment by Vaniver · 2013-06-12T22:31:35.984Z · LW(p) · GW(p)

What are the strongest arguments you can offer me in favor of caring about animal suffering to the point that I would be willing to incur the costs involved in becoming more vegetarian? Alternatively, how much would you be willing to pay me to stop eating meat?

I found it interesting to compare "this is the price at which we could buy animals not existing" to "this is the price people are willing to pay for animals to exist so they can eat them," because it looks like the second is larger, often by orders of magnitude. (This shouldn't be that surprising for persuasion; if you can get other people to spend their own resources, your costs are much lower.)

It also bothers me that so many of the animals saved are fish; they dominate the weighted mean, have very different lifespans from chickens, and to the best of my knowledge cannot be 'factory farmed' in the same way. [Edit: It appears that conditions for fish on fish farms are actually pretty bad, to the point that many species of fish cannot survive modern farming techniques. So, no comment on the relative badness.]

Replies from: peter_hurford, peter_hurford, Desrtopa
comment by Peter Wildeford (peter_hurford) · 2013-06-13T01:49:21.049Z · LW(p) · GW(p)

It also bothers me that so many of the animals saved are fish; they dominate the weighted mean, have very different lifespans from chickens, and to the best of my knowledge cannot be 'factory farmed' in the same way. (It seems to me that fish farms are much more like their natural habitat than chicken farms are like their natural habitat, but that may be mistaken.)

From what I know, fish farming doesn't sound pleasant, though perhaps it's not nearly as bad as chicken farming.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-06-13T17:15:31.621Z · LW(p) · GW(p)

If that description makes you think that fish farming might possibly be in the same ballpark as chicken farming, then you're pretty ignorant of factory farming. Maybe you haven't seen enough propaganda?

Your other link is about killing the fish. Focusing on the death rather than the life may be good for propaganda, but do you really believe that much of the suffering is there? Indeed, your post claimed to be about days of life.

Added: it makes me wonder if activists are corrupted by dealing with propaganda to focus on the aspects for which propaganda is most effective. Or maybe it's just that the propaganda works on them.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-17T04:36:34.170Z · LW(p) · GW(p)

If that description makes you think that fish farming might possibly be in the same ballpark as chicken farming, then you're pretty ignorant of factory farming.

I never said they were in the same ballpark. Just that fish farming is also something I don't like.

~

Your other link is about killing the fish. Focusing on the death rather than the life may be good for propaganda, but do you really believe that much of the suffering is there?

Yes, I do.

~

Indeed, your post claimed to be about days of life.

I agree that might not make much sense for fish, except in so far as farming causes more fish to be birthed than otherwise would.

~

Added: it makes me wonder if activists are corrupted by dealing with propaganda to focus on the aspects for which propaganda is most effective. Or maybe it's just that the propaganda works on them.

I think this is a bias present in anyone who cares about advocating for or against a cause.

comment by Desrtopa · 2013-06-12T22:39:21.497Z · LW(p) · GW(p)

It also bothers me that so many of the animals saved are fish; they dominate the weighted mean, have very different lifespans from chickens, and to the best of my knowledge cannot be 'factory farmed' in the same way. (It seems to me that fish farms are much more like their natural habitat than chicken farms are like their natural habitat, but that may be mistaken.)

Well, they can move more, but on the other hand they tend to pollute each other's environment in a way that terrestrial farmed animals do not, meaning that not all commercially fished species can survive being farmed with modern techniques, and those which can are not necessarily safe for humans to eat in the same quantities.

comment by A1987dM (army1987) · 2013-06-15T18:43:23.922Z · LW(p) · GW(p)

There are decent arguments (e.g. this) for eating less meat even if you don't care about non-human animals as a terminal value.

comment by Pablo (Pablo_Stafforini) · 2013-06-15T22:10:10.623Z · LW(p) · GW(p)

You may want to take a look at this brief list of relevant writings I compiled in response to a comment by SaidAchmiz.

comment by selylindi · 2013-06-13T19:41:21.849Z · LW(p) · GW(p)

YMMV, but the argument that did it for me was Mylan Engel Jr.'s argument, as summarized and nicely presented here.

On the assumption that the figures given by the OP are approximately right, with my adjustments for personal values, it would be cost-effective for me to pay you $18 (via BTC) to go from habitual omnivory to 98% ovo-lacto-vegetarianism for a year, or $24 (via BTC) to go from habitual omnivory to 98% veganism for a year, both prorated by month, of course with some modicum of evidence that the change was real. Let me know if you want to take up the offer.

Replies from: CCC, Qiaochu_Yuan, SaidAchmiz
comment by CCC · 2013-06-17T09:00:14.487Z · LW(p) · GW(p)

Looking over that argument, in the second link, I notice that those same premises would appear to support the conclusion that the most morally correct action possible would be to find some way to sterilise every vertebrate (possibly through some sort of genetically engineered virus). If there is no next generation - of anything, from horses to cows to tigers to humans to chickens - then there will be no pain and suffering experienced by that next generation. The same premises would also appear to support the conclusion that, having sterilised every vertebrate on the planet, the next thing to do is to find some painless way of killing every vertebrate on the planet, lest they suffer a moment of unnecessary pain or suffering.

I find both of these potential conclusions repugnant; I recognise this as a mental safety net, warning me that I will likely regret actions taken in support of these conclusions in the long term.

comment by Qiaochu_Yuan · 2013-06-13T20:30:27.308Z · LW(p) · GW(p)

This is an argument for vegetarianism, not for caring about animal suffering: many parts of this argument have nothing to do with animal suffering but are arguments that humans would be better off if we ate less meat, which I'm also willing to entertain (since I do care about human suffering), but I was really asking about animal suffering.

$18 a year is way too low.

Replies from: selylindi, Eugine_Nier
comment by selylindi · 2013-06-13T20:55:57.018Z · LW(p) · GW(p)

I'm not offering a higher price since it seems cost ineffective compared to other opportunities, but I'm curious what your price would be for a year of 98% veganism. (The 98% means that 2 non-vegan meals per month are tolerated.)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-13T21:43:46.236Z · LW(p) · GW(p)

In the neighborhood of $1,000.

comment by Eugine_Nier · 2013-06-15T08:27:39.035Z · LW(p) · GW(p)

but are arguments that humans would be better off if we ate less meat, which I'm also willing to entertain

I'm less willing to entertain said arguments seeing as how they come from people who are likely to have their bottom lines already written.

comment by Said Achmiz (SaidAchmiz) · 2013-06-14T00:12:38.068Z · LW(p) · GW(p)

I started reading the argument (in your second link), racked up a full hand of premises I disagreed with or found to be incoherent or terribly ill-defined before getting to so much as #10, and stopped reading.

Then I decided that no, I really should examine any argument that convinced an intelligent opponent, and read through the whole thing (though I only skimmed the objections, as they are laughably weak compared to the real ones).

Turns out my first reaction was right: this is a silly argument. Engel lists a number of premises, most of which I disagree with, launches into a tangent about environmental impact, and then considers objections that read like the halfhearted flailings of someone who's already accepted his ironclad reasoning. As for this:

OBJ6: What if I just give up one of these beliefs [(p1) – (p16)]?

Engel says, “After all, as a philosopher [assuming we all love wisdom and want to know the best way in which to live], you are interested in more than mere consistency; you are interested in truth. Consequently, you will not reject just any belief(s) you think most likely to be false. Now, presumably, you already think your belief system is for the most part reasonable, or you would have already made significant changes in it. So, you will want to reject as few beliefs as possible. Since (p1) – (p16) are rife with implications, rejecting several of these propositions would force you to reject countless other beliefs on pain of incoherence, whereas accepting [the conclusion of becoming a vegetarian] would require minimal belief revision on your part” (883).

It makes me want to post the "WAT" duck in response. Like, is he serious? Or is this actually a case of carefully executed trolling? I begin to suspect the latter...

Edit: Oh, and as Qiaochu_Yuan says, the argument assumes that we care about animal suffering, and so does not satisfy the request in the grandparent.

Replies from: selylindi
comment by selylindi · 2013-06-17T02:10:58.883Z · LW(p) · GW(p)

Based on your description here of your reaction, I get the impression that you mistook the structure of the argument. Specifically, you note, as if it were sufficient, that you disagree with several of the premises. Engel was not attempting to build on the conjunction (p1*p2*...*p16) of the premises; he was building on their disjunction (p1+p2+...+p16). Your credence in p1 through p16 would have to be uniformly very low to keep their disjunction also low. Personally, I give high credence to p1, p9, p10, and varying lower degrees of assent to the other premises, so the disjunction is also quite high for me, and therefore the conclusion has a great deal of strength; but even if I later rejected p1, p9, and p10, the disjunction of the others would still be high. It's that robustness of the argument, drawing more on many weak points than one strong one, that convinced me.

I don't understand your duck/troll response to the quote from Engel. Everything he has said in that paragraph is straightforward. It is important that beliefs be true, not merely consistent. That does mean you oughtn't simply reject whichever premises get in the way of the conclusions you value. p1-p16 are indeed entangled with many other beliefs, and propagating belief and value updates of rejecting more of them is likely, in most people, to be a more severe change than becoming vegetarian. Really, if you find yourself suspecting that a professional philosopher is trolling people in one of his most famous arguments, that's a prime example of a moment to notice the fact that you're confused. It's possible you were reading him as saying something he wasn't saying.

Regarding the edit: the argument does not assume that you care about animal suffering. I brought it up precisely because it didn't make that assumption. If you want something specifically about animal suffering, presumably a Kantian argument is the way to go: You examine why you care about yourself and you find it is because you have certain properties; so if something else has the same properties, to be consistent you should care about it also. (Obviously this depends on what properties you pick.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-17T05:56:13.982Z · LW(p) · GW(p)

Based on your description here of your reaction, I get the impression that you mistook the structure of the argument.

That's possible, but I don't think that's the case. But let me address the argument in a bit more detail and perhaps we'll see if I am indeed misunderstanding something.

First of all, this notion that the disjunction of the premises leads to accepting the conclusion is silly. No one of the premises leads to accepting the conclusion. You have to conjoin at least some of them to get anywhere. It's not like they're independent, leading by entirely separate lines of reasoning to the same outcome; some clearly depend on others to be relevant to the argument.

And I'm not sure what sort of logic you're using wherein you believe p1 with low probability, p2 with low probability, p3 ... etc., and their disjunction ends up being true. (Really, that wasn't sarcasm. What kind of logic are you applying here...?) Also, some of them are actually nonsensical or incoherent, not just "probably wrong" or anything so prosaic.

The quoted paragraph:

“After all, as a philosopher [assuming we all love wisdom and want to know the best way in which to live], you are interested in more than mere consistency; you are interested in truth. Consequently, you will not reject just any belief(s) you think most likely to be false.

You're right, I guess I have no idea what he's saying here, because this seems to me blatantly absurd on its face. If you're interested in truth, of course you're going to reject those beliefs most likely to be false. That's exactly what you're going to do. The opposite of that is what you would do if you were, in fact, interested in mere consistency rather than truth.

Now, presumably, you already think your belief system is for the most part reasonable, or you would have already made significant changes in it. So, you will want to reject as few beliefs as possible.

??? You will want to reject those and only those beliefs that are false. If you think your belief system is reasonable, then you don't think any of your beliefs are false, or else you'd reject them. If you find that some of your beliefs are false, you will want to reject them, because if you're interested in truth then you want to hold zero false beliefs.

Since (p1) – (p16) are rife with implications, rejecting several of these propositions would force you to reject countless other beliefs on pain of incoherence, whereas accepting [the conclusion of becoming a vegetarian] would require minimal belief revision on your part” (883).

I think that accepting many of (p1) – (p16) causes incoherence, actually. In any case, Engel seems to be describing a truly bizarre approach to epistemology where you care less about holding true beliefs than about not modifying your existing belief system too much, which seems like a perfect example of caring more about consistency than truth, despite him describing his view in the exact opposite manner, and... I just... I don't know what to say.

And when I read your commentary on the above, I get the same "... what the heck? Is he... is he serious?" reaction.

I don't understand your duck/troll response to the quote from Engel. Everything he has said in that paragraph is straightforward. It is important that beliefs be true, not merely consistent. That does mean you oughtn't simply reject whichever premises get in the way of the conclusions you value.

What does this mean? Should I take this as a warning against motivated cognition / confirmation bias? But what on earth does that have to do with my objections? We reject premises that are false. We accept premises that are true. We accept conclusions that we think are true, which are presumably those that are supported by premises we think are true.

p1-p16 are indeed entangled with many other beliefs, and propagating belief and value updates of rejecting more of them is likely, in most people, to be a more severe change than becoming vegetarian.

... and? Again, we should hold beliefs we think are true and reject those we think are false. How on earth is picking which beliefs to accept and which to reject on the basis of what will require less updating... anything but absurd? Isn't that one of the Great Epistemological Sins that Less Wrong warns us about?

As for the duck comment... professional philosophers troll people all the time. Having never encountered Engel's writing before now, I of course did not know that this was his most famous argument, nor any basis for being sure of serious intent in that paragraph.

Regarding the edit: the argument does not assume that you care about animal suffering. I brought it up precisely because it didn't make that assumption.

Engel apparently claims that his reader already holds these beliefs, among others:

(p11) It is morally wrong to cause an animal unnecessary pain or suffering.
(p12) It is morally wrong and despicable to treat animals inhumanely for no good reason.
(p13) We ought to euthanize untreatably injured, suffering animals to put them out of their misery whenever feasible.

(And without that, the argument falls down.)

Replies from: selylindi
comment by selylindi · 2013-07-11T19:53:07.525Z · LW(p) · GW(p)

(Hi, sorry for the delayed response. I've been gone.)

And I'm not sure what sort of logic you're using wherein you believe p1 with low probability, p2 with low probability, p3 ... etc., and their disjunction ends up being true. (Really, that wasn't sarcasm. What kind of logic are you applying here...?)

Just the standard stuff you'd get in high school or undergrad college. Suppose we have independent statements S1 through Sn, and you assign each a subjective probability of P(Si). Then you have the probability of the disjunction P(S1+S2+S3+...+Sn) = 1-P(~S1)*P(~S2)*P(~S3)*...*P(~Sn). So if in a specific case you have n=10 and P(Si)=0.10 for all i, then even though you're moderately disposed to reject every statement, you're weakly disposed to accept the disjunction, since P(disjunction)=0.65. This is closely related to the preface paradox.

You're right, of course, that Engel's premises are not all independent. The general effect on probability of disjunctions remains always in the same direction, though, since P(A+B)≥P(A) for all A and B.
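
A minimal sketch of that arithmetic (Python; the n=10, P(Si)=0.10 case is purely an illustration):

# Probability that at least one of several independent statements is true,
# given a subjective probability for each one.
def disjunction_probability(probs):
    p_all_false = 1.0
    for p in probs:
        p_all_false *= (1.0 - p)
    return 1.0 - p_all_false

# Ten independent statements, each assigned probability 0.10:
print(disjunction_probability([0.10] * 10))  # prints ~0.65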

“After all, as a philosopher [assuming we all love wisdom and want to know the best way in which to live], you are interested in more than mere consistency; you are interested in truth. Consequently, you will not reject just any belief(s) you think most likely to be false.

You're right, I guess I have no idea what he's saying here, because this seems to me blatantly absurd on its face. If you're interested in truth, of course you're going to reject those beliefs most likely to be false. That's exactly what you're going to do. The opposite of that is what you would do if you were, in fact, interested in mere consistency rather than truth.

OK, yes, you've expressed yourself well and it's clear that you're interpreting him as having claimed the opposite of what he meant. Let me try to restate his paragraph in more LW-ish phrasing:

"As a rationalist, you are highly interested in truth, which requires consistency but also requires a useful correspondence between your beliefs and reality. Consequently, when you consider that you believe it is not worthwhile for you to value animal interests and you discover that this belief is inconsistent with other of your beliefs, you will not reject just any of those other beliefs you think most likely to be false. (You will subject the initial, motivated belief to equal, unprivileged scrutiny along with the others, and tentatively accept the mutually consistent set of beliefs with the highest probability given your current evidence.)"

If you're interested in reconsidering Engel's argument given his intended interpretation of it, I'd like to hear your updated reasons for/against it.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-12T01:46:55.162Z · LW(p) · GW(p)

Welcome back.

Just the standard stuff you'd get in high school or undergrad college. [...]

Ok. I am, actually, quite familiar with how to calculate probabilities of disjunctions; I did not express my objection/question well, sorry. What I was having a hard time taking at face value was the notion of reasoning about moral propositions using this sort of probabilistic logic. That is to say: what, exactly, does it mean to say that you believe "We ought to take steps to make the world a better place" with P = 0.3? Like, maybe we should and maybe we shouldn't? Probabilities are often said to be understandable as bets; what would you be betting on, in this case? How would you settle such a bet?

In short, for a lot of these propositions, it seems nonsensical to talk about levels of credence, and so what makes sense for reasoning about them is just propositional logic. In which case, you have to assert that if ANY of these things are true, then the entire disjunction is true (and from that, we conclude... something. What, exactly? It's not clear).

And yet, I can't help but notice that Engel takes an approach that's not exactly either of the above. He says:

“While you do not have to believe all of (p1) – (p16) for my argument to succeed, the more of these propositions you believe, the greater your commitment to the immorality of eating meat”

I don't know how to interpret that. It seems strange. Logical arguments do not generally work this way, wherein you just have an unordered heap of undifferentiated, independent propositions, which you add up in any old order, and build up some conclusion from them like assembling a big lump of clay from smaller lumps of clay. I don't rightly know what it would mean for an argument to work like that.

(In other words, my response to the Engel quote above is: "Uh, really? Why...?")

As for your restatement of Engel's argument... First of all, I've reread that quote from Engel at the end of the PDF, and it just does not seem to me like he is saying what you claim he's saying. It seems to me that he is suggesting (in the last sentence of the quote) we reason backwards from which beliefs would force less belief revision to which beliefs we should accept as true.

But, ok. Taking your formulation for granted, it still seems to be... rather off. To wit:

"As a rationalist, you are highly interested in truth, which requires consistency but also requires a useful correspondence between your beliefs and reality.

Well, here's the thing. It is certainly true that holding nothing but true beliefs will necessarily imply that your beliefs are consistent with each other. (Although it is possible for there to be apparent inconsistencies, which would be resolved by the acquisition of additional true beliefs.) However, it's possible to find yourself in a situation where you gain a new belief, find it to be inconsistent with one or more old beliefs, and yet find that, inconsistency aside, both the new and the old beliefs each are sufficiently well-supported by the available evidence to treat them as being true.

At this point, you're aware that something is wrong with your epistemic state, but you have no real way to determine what that is. The rational thing to do here is of course to go looking for more information, more evidence, and see which of your beliefs are confirmed and which are disconfirmed. Until then, rearranging your entire belief system is premature at best.

"Consequently, when you consider that you believe it is not worthwhile for you to value animal interests and you discover that this belief is inconsistent with other of your beliefs, you will not reject just any of those other beliefs you think most likely to be false. (You will subject the initial, motivated belief to equal, unprivileged scrutiny along with the others, and tentatively accept the mutually consistent set of beliefs with the highest probability given your current evidence.)"

Why do you characterize the quoted belief as "motivated"? We are assuming, I thought, that I've arrived at said belief by the same process as I arrive at any other beliefs. If that one's motivated, well, it's presumably no more motivated than any of my other beliefs.

And, in any case, why are we singling out this particular belief for consistency-checking? Engel's claim that "accepting [the conclusion of becoming a vegetarian] would require minimal belief revision on your part" seems the height of silliness. Frankly, I'm not sure what could make someone say that but a case of writing one's bottom line first.

Again I say: the correct thing to do is to hold (that is, to treat as true) those beliefs which you think are more likely true than false, and not any beliefs which you think are more likely false than true. Breaking that rule of thumb for consistency's sake is exactly the epistemic sin which we are supposedly trying to avoid.

But you know what — all of this is a lot of elaborate round-the-bush-dancing. I think it would be far more productive (as these things go) to just look at that list of propositions, see which we accept, and then see if vegetarianism follows reasonably from that. That is to say, rather than analyzing whether the structure of Engel's argument works in theory, let's put it to the test on his actual claims, yes?

Replies from: selylindi
comment by selylindi · 2013-07-12T16:09:04.519Z · LW(p) · GW(p)

What I was having a hard time taking at face value was the notion of reasoning about moral propositions using this sort of probabilistic logic. That is to say: what, exactly, does it mean to say that you believe "We ought to take steps to make the world a better place" with P = 0.3? Like, maybe we should and maybe we shouldn't? Probabilities are often said to be understandable as bets; what would you be betting on, in this case? How would you settle such a bet?

I'd be betting on whether or not the proposition would follow from the relevant moral theory if I were in possession of all the relevant facts. The bet would be settled by collecting additional facts and updating. I incline toward consequentialist moral theories in which practicality requires that I can never possess all the relevant facts. So it is reasonable for me to evaluate situational moral rules and claims in probabilistic terms based on how confident I am that they will actually serve my overarching moral goals.

I don't know how to interpret that. It seems strange. Logical arguments do not generally work this way, wherein you just have an unordered heap of undifferentiated, independent propositions, which you add up in any old order, and build up some conclusion from them like assembling a big lump of clay from smaller lumps of clay. I don't rightly know what it would mean for an argument to work like that.

As far as I'm aware, that's exactly how logical arguments work, formally. See the second paragraph here.

Why do you characterize the quoted belief as "motivated"?

Meat tastes good and is a great source of calories and nutrients. That's powerful motivation for bodies like us. But you can strike that word if you prefer.

And, in any case, why are we singling out this particular belief for consistency-checking?

We aren't. We're requiring only and exactly that it not be singled out for immunity to consistency-checking.

I think it would be far more productive (as these things go) to just look at that list of propositions, see which we accept, and then see if vegetarianism follows reasonably from that

That's it! That's exactly the structure of Engel's argument, and what he was trying to get people to do. :)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-12T17:36:53.358Z · LW(p) · GW(p)

I'd be betting on whether or not the proposition would follow from the relevant moral theory if I were in possession of all the relevant facts. The bet would be settled by collecting additional facts and updating. I incline toward consequentialist moral theories in which practicality requires that I can never possess all the relevant facts. So it is reasonable for me to evaluate situational moral rules and claims in probabilistic terms based on how confident I am that they will actually serve my overarching moral goals.

That is well and good, except that "making the world a better place" seems to be an overarching moral goal. At some point, we hit terminal values or axioms of some sort. "Whether a proposition would follow from a moral theory" is conceivably something you could bet on, but what do you do when the proposition in question is part of the relevant moral theory?

As far as I'm aware, that's exactly how logical arguments work, formally. See the second paragraph here.

Certainly not. Engel does not offer any deductive system for getting from the premises to the conclusion. In the derivation of an argument (as alluded to by the linked SEP article), premises and intermediate conclusions have to be ordered (at least partially ordered). Engel seems to be treating his premises as undifferentiated lumps, which you can take in any order, without applying any kind of deduction to them; you just take each ounce of premise and pour it into the big bucket-o'-premise, and see how much premise you end up with; if it's a lot of premise, the conclusion magically appears. The claim that it doesn't even matter which premises you hold to be true, only the quantity of them, seems to explicitly reject logical deduction.

That's it! That's exactly the structure of Engel's argument, and what he was trying to get people to do. :)

Alright then. To the object level!

Engel claims that you hold the following beliefs:

Let's see...

(p1) Other things being equal, a world with less pain and suffering is better than a world with more pain and suffering.

Depends on how "pain" and "suffering" are defined. If you define "suffering" to include only mental states of sapient beings, of sufficient (i.e. at least roughly human-level) intelligence to be self-aware, and "pain" likewise, then sure. If you include pain experienced by sub-human animals, and include their mental states in "suffering", then first of all, I disagree with your use of the word "suffering" to refer to such phenomena, and second of all, I do not hold (p1) under such a formulation.

(p2) A world with less unnecessary suffering is better than a world with more unnecessary suffering.

See (p1).

(p3) Unnecessary cruelty is wrong and prima facie should not be supported or encouraged.

If by "cruelty" you mean ... etc. etc., basically the same response as (p1). Humans? Agreed. Animals? Nope.

(p4) We ought to take steps to make the world a better place.

Depends on the steps. If by this you mean "any steps", then no. If by this you mean "this is a worthy goal, and we should find appropriate steps to achieve and take said steps", then sure. We'll count this one as a "yes". (Of course we might differ on what constitutes a "better" world, but let's assume away such disputes for now.)

(p4’) We ought to do what we reasonably can to avoid making the world a worse place.

Agreed.

(p5) A morally good person will take steps to make this world a better place and even stronger steps to avoid making the world a worse place.

First of all, this is awfully specific and reads like a way to sneak in connotations. I tend to reject such formulations on general principles. In any case, I don't think that "morally good person" is a terribly useful concept except as shorthand. We'll count this one as a "no".

(p6) Even a minimally decent person would take steps to reduce the amount of unnecessary pain and suffering in the world, if s/he could do so with very little effort.

Pursuant to the caveats outlined in my responses to all of the above propositions... sure. Said caveats partially neuter the statement for Engel's purposes, but for generosity's sake let's call this a "yes".

(p7) I am a morally good person.

See response to (p5); this is not very meaningful. So, no.

(p8) I am at least a minimally decent person.

Yep.

(p9) I am the sort of person who certainly would take steps to help reduce the amount of pain and suffering in the world, if I could do so with very little effort.

I try not to think of myself in terms of "what sort of person" I am. As for whether reducing the amount of pain and suffering is a good thing and whether I should do it — see (p4) and (p4'). But let's call this a "yes".

(p10) Many nonhuman animals (certainly all vertebrates) are capable of feeling pain.

This seems relatively uncontroversial.

(p11) It is morally wrong to cause an animal unnecessary pain or suffering.

Nope. (And see (p1) re: "suffering".)

(p12) It is morally wrong and despicable to treat animals inhumanely for no good reason.

Nope.

(p13) We ought to euthanize untreatably injured, suffering animals to put them out of their misery whenever feasible.

Whether we "ought to" do this depends on circumstances, but this is certainly not inherently true in a moral sense.

(p14) Other things being equal, it is worse to kill a conscious sentient animal than it is to kill a plant.

Nope.

(p15) We have a duty to help preserve the environment for future generations (at least for future human generations).

I'll agree with this to a reasonable extent.

(p16) One ought to minimize one’s contribution toward environmental degradation, especially in those ways requiring minimal effort on one’s part.

Sure.

So, tallying up my responses, and ignoring all waffling and qualifications in favor of treating each response as purely binary for the sake of convenience... it seems I agree with 7 of the 17 propositions listed. Engel then says:

“While you do not have to believe all of (p1) – (p16) for my argument to succeed, the more of these propositions you believe, the greater your commitment to the immorality of eating meat”

So according to this, it seems that I should have a... moderate commitment to the immorality of eating meat? But here's the problem:

How does the proposition "eating meat is immoral" actually follow from the propositions I assented to? Engel claims that it does, but you can't just claim that a conclusion follows from a set of premises, you have to demonstrate it. Where is the demonstration? Where is the application of deductive rules that takes us from those premises to the conclusion? There's nothing, just a bare set of premises and then a claimed conclusion, with nothing in between, no means of getting from one to the other.

Replies from: shminux, selylindi
comment by Shmi (shminux) · 2013-07-12T21:55:36.405Z · LW(p) · GW(p)

Engel claims that it does, but you can't just claim that a conclusion follows from a set of premises, you have to demonstrate it.

My usual reply to a claim that a philosophical statement is "proven formally" is to ask for a computer program calculating the conclusion from the premises, in the claimant's language of choice, be it C or Coq.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-12T23:02:00.890Z · LW(p) · GW(p)

Oh, really? ;)

#include <string>
using std::string;

string calculate_the_conclusion(string the_premises[])
{
    return "The conclusion. Q.E.D.";
}

This function takes the premises as a parameter, and returns the conclusion. Criterion satisfied?

Replies from: shminux, fractalman
comment by Shmi (shminux) · 2013-07-13T04:16:10.860Z · LW(p) · GW(p)

Yes, it explicates the lack of logic, which is the whole point.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-13T04:47:53.184Z · LW(p) · GW(p)

I confess to being confused about your intended point. I thought you were more or less agreeing with me, but now I am not so sure?

Replies from: shminux
comment by Shmi (shminux) · 2013-07-13T05:18:12.126Z · LW(p) · GW(p)

Yes I was. My point was that if one writes a program that purports to prove that

"eating meat is immoral" actually follow from the propositions...

then the code can be examined and the hidden assumptions and inferences explicated. In the trivial example you wrote, the conclusion is assumed, so the argument that it is proven from the propositions (by this program) is falsified.
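
A purely hypothetical illustration of that point (the premise labels and the inference rule below are invented for the example, not Engel's): in a sketch like this, whatever inference the program relies on sits in the code, where it can be examined or disputed.

# Hypothetical sketch: the conclusion is reached only when the premises the
# rule explicitly names are all accepted, so the inference is not hidden;
# it is right there to inspect.
def conclusion_follows(accepted_premises):
    required = {"animals can feel pain", "causing unnecessary suffering is wrong"}
    return required.issubset(accepted_premises)

print(conclusion_follows({"animals can feel pain"}))  # False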

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-13T05:25:17.561Z · LW(p) · GW(p)

Ah. Yeah, agreed. Of course, enough philosophers disdain computer science entirely that the "arguments" most in need of such treatment would be highly unlikely to receive it. "Argument by handwaving" or "argument by intimidation" is all too common among professional philosophers.

The worst part is how awkward it feels to challenge such faux-arguments. "Uh... this... what does this... say? This... doesn't say anything. This... this is actually just a bunch of nonsense. And the parts that aren't nonsense are just... just false. Is this... is this really supposed to be the argument?"

Replies from: shminux
comment by Shmi (shminux) · 2013-07-13T05:29:59.243Z · LW(p) · GW(p)

Hence my insistence on writing it up in a way a computer would understand.

comment by fractalman · 2013-07-13T00:59:26.991Z · LW(p) · GW(p)

That doesn't even pass a quick inspection test for "can do something different when handed different parameters".

The original post looks at least as good as: int calculate_the_conclusion(string premises_acceptedbyreader[]) { int result = 0; foreach (premise in premises_acceptedbyreader) { result++; } return result; }

-note the "at least".

comment by selylindi · 2013-07-13T04:33:23.504Z · LW(p) · GW(p)

As far as I'm aware, that's exactly how logical arguments work, formally. See the second paragraph here.

Certainly not.

OK, since you are rejecting formal logic I'll agree we've reached a point where no further agreement is likely.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-13T04:45:11.951Z · LW(p) · GW(p)

Uh, with all respect, claiming that I am the one rejecting formal logic here is outlandishly absurd.

I have to ask: did you, in fact, read the entirety of my post? Honest question; I'm not being snarky here.

If you did (or do) read it, and still come to the conclusion that what's going on here is that I am rejecting formal logic, then I guess we have exhausted the fruitfulness of the discussion.

comment by Raemon · 2013-06-13T23:07:05.612Z · LW(p) · GW(p)

I don't think there's a subthread about posthumans here yet, which surprises me. Most of the other points I'd think to make have been made by others.

Several times you specify that you care about humanity, because you are able to have relationships with humans. A few questions:

1) SaidAchmiz, whose views seem similar to yours, specified they hadn't owned pets. Have you owned pets?

While this may vary from person to person, it seems clear to me that people are able to form relationships with dogs, cats, rats, and several other types of mammals (this is consistent with the notion that more-similar animals are able to form relationships with each other, on a sliding scale).

I've also recently made a friend with two pet turtles. One of the turtles seems pretty bland and unresponsive, but the other seems incredibly interested in interaction. I expect that some amount of the perceived relationship between my friend and their turtle is human projection, but I've still updated quite a bit on the relative potential-sentience of turtles. (Though my friend's veterinarian did say the turtle is an outlier in terms of how much personality a turtle expresses.)

2) You've noted that you don't care about babyeaters. Do you care about potential posthumans who share all values you currently have, but have new values you don't care about one way or another, and are vastly more intelligent/empathetic/able-to-form-complex-relationships that you can't understand? Do you expect those posthumans to care about you?

I'm not sure how good an argument it is that "we should care about things dumber than us because we'd want smarter things to care about us", in the context of aliens who might not share our values at all. But it seems at least a little relevant, when specifically concerning the possibility of trans- or posthumans.

3) To the extent that you are not able to form relationships with other humans (because they are stupider than you, because they are less empathetic, or just because they're jerks, or don't share enough interests with you), do you consider them to have less moral worth? If not, why not?

Intellectually, I'm interested in the question: what moral framework would Extrapolated-Qiaochu-Yuan endorse? (Since, again, I'm an anti-realist.)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-13T23:23:34.786Z · LW(p) · GW(p)

Have you owned pets?

I had fish once, but no complicated pets.

it seems clear to me that people are able to form relationships with dogs, cats, rats, and several other types of mammals (this is consistent with the notion that more-similar animals are able to form relationships with each other, on a sliding scale).

People are also able to form relationships of this kind with, say, ELIZA or virtual pets in video games or waifus. This is an argument in favor of morally valuing animals, but I think it's a weak one without more detail about the nature of these relationships and how closely they approximate full human relationships.

Do you care about potential posthumans who share all values you currently have, but have new values you don't care about one way or another, and are vastly more intelligent/empathetic/able-to-form-complex-relationships that you can't understand? Do you expect those posthumans to care about you?

Depends. If they can understand me well enough to have a relationship with me analogous to the relationship an adult human might have with a small child, then sure.

To the extent that you are not able to form relationships with other humans (because they are stupider than you, because they are less empathetic, or just because they're jerks, or don't share enough interests with you), do you consider them to have less moral worth? If not, why not?

I hid a lot of complexity in "in principle." This objection also applies to humans who are in comas, for example, but a person being in a coma or not sharing my interests is a contingent fact, and I don't think contingent facts should affect what beings have moral worth. I can imagine possible worlds reasonably close to the actual one in which a person isn't in a coma or does share my interests, but I can't imagine possible worlds reasonably close to the actual one in which a fish is complicated enough for me to have a meaningful relationship with.

comment by drnickbone · 2013-06-13T11:33:37.599Z · LW(p) · GW(p)

An important question is whether there is a net loss or gain of sentient life by avoiding eating meat. Or, if there is a substitution between different sentient life-forms, is there a net gain to quality of life?

  1. Do we know where the biomass that currently goes into farmed animals would end up if we stopped using farmed animals? Would it go into humans, or into vehicles (biofuels) or into wildlife via land taken out of agricultural production?

  2. Should we assume that farmed animals have a negative quality of life (so that in utilitarian terms, the world would be better if they stopped existing and weren't replaced by other sentient beings)? The animals themselves would probably not assess their lives as having negative value (as far as I'm aware, farmed animals do not attempt to commit suicide at every available opportunity).

  3. Do farmed animals have a lower quality of life than animals living in the wild? Remember that nature is not a nice place either...

My personal guess is that without meat, we would end up with more humans, though mostly poorer humans. Since even the poorest humans would probably have a higher quality of life than the animals they substituted, it looks like a net gain from the point of view of total utility. But whether that is really a good thing or not may depend on whether you are a total utilitarian or an average utilitarian.

Replies from: Raemon, Lukas_Gloor, MugaSofer, seanwelsh77
comment by Raemon · 2013-06-13T18:34:06.738Z · LW(p) · GW(p)

(as far as I'm aware, farmed animals do not attempt to commit suicide at every available opportunity)

I object to this as the general metric for "should a life be brought into existence?" (I'm something approximating an average utilitarian. To the extent that I'm a total utilitarian, I think Eliezer's post about Lives Worth Celebrating is relevant)

Also, less controversially, I'd like to note that factory-farmed animals really don't have much opportunity to end their own lives even if they wanted to.

Replies from: Desrtopa
comment by Desrtopa · 2013-06-13T18:49:41.575Z · LW(p) · GW(p)

For that matter, even if they did have the opportunity, livestock species may not have the abstract reasoning abilities to recognize that suicide is even a possible thing.

Pigs might have the intelligence for that, but for cows and chickens, I doubt it. It's not like suicide is an evolutionarily favorable adaptation; it's a product of abstract reasoning about death that most animals are not likely to be capable of.

comment by Lukas_Gloor · 2013-06-13T17:16:46.313Z · LW(p) · GW(p)

Good points, but I suspect they are dominated by another part of the calculation: In the future, with advanced technology, we might be able to seed life on other planets or even simulate ecosystems. By getting people now to care about suffering in nonhumans, we make it more likely that future generations care for them as well. And antispeciesism also seems closely related to anti-substratism (e.g. caring about simulated humans, even though they're not carbon-based).

If you are the sort of person that cares about all sorts of suffering, raising antispeciesist awareness might be very positive for far future-related reasons, regardless of whether the direct (short-term) impact is actually positive, neutral, or even slightly negative.

Replies from: drnickbone
comment by drnickbone · 2013-06-14T17:39:16.456Z · LW(p) · GW(p)

The other long-term consideration is that whatever we do to animals, AIs may well do to us.

We don't want future AIs raising us in cramped cages, purely for their own amusement, on the grounds that their utility is much more important than ours. But we also don't want them to exterminate us on "compassionate" grounds. (Those poor humans, why let them suffer so? Let's replace them by a few more happy, wire-heading AIs like us!)

Replies from: Lukas_Gloor, Jiro
comment by Lukas_Gloor · 2013-06-14T20:11:20.862Z · LW(p) · GW(p)

Don't many/most people here want there to be posthumans, which may well cross the species-barrier? I don't think there is an "essence of humanity" that carries over from humans to posthumans by virtue of descendance, so that case seems somewhat analogous to the wireheading AIs case already. And whether the AI would do wireheading or keep intact a preference architecture depends on what we/it values. If we do value complex preferences, and if we want to have many beings in the world that have them mostly fulfilled, I'd assume there would be more awesome or more effective ways of design than current humans. However, if this view implies that killing is bad because it violates preferences, then replacement would, to some extent, be a bad thing and the AI might not do it.

comment by Jiro · 2013-06-14T19:16:41.234Z · LW(p) · GW(p)

That argument would seem to apply to plants or even to non-intelligent machines as well as to animals, unless you include a missing premise stating that AI/human interaction is similar to human/animal interaction in a way that 1) human/plant or human/washing machine interaction is not, and 2) is relevant. Any such missing premise would basically be an entire argument for vegetarianism already--the "in comparison to AIs" part of the argument is an insubstantial gloss on it.

Furthermore, why would you expect what we do to constrain what AIs do anyway? I'd sooner expect that AIs would do things to us based on their own reasons regardless of what we do to other targets.

Replies from: freeze
comment by freeze · 2015-09-03T15:49:47.592Z · LW(p) · GW(p)

Perhaps this is true if the AI is supremely intelligent, but if the AI is only an order of magnitude more intelligent than us, or better by some other metric, the way we treat animals could be significant.

More relevantly, if an AI is learning anything at all about morality from us or from the people programming it, I think it is extremely wise that the relevant individuals involved be vegan for these reasons (better safe than sorry). Essentially I argue that there is a very significant chance that the way we treat other animals could be relevant to how an AI treats us (better treatment corresponding to better later outcomes for us).

Replies from: Jiro, Lumifer
comment by Jiro · 2015-09-03T16:07:11.202Z · LW(p) · GW(p)

"Other animals" is a gerrymandered reference class. Why would the AI specifically care about how we treat "other animals", as opposed to "other biological entities", "other multicellular beings", or "other beings who can do mathematics"?

Replies from: freeze
comment by freeze · 2015-09-03T17:29:11.512Z · LW(p) · GW(p)

Because other animals are also sentient beings capable of feeling pain. Other multicellular beings aren't in general.

Replies from: Jiro
comment by Jiro · 2015-09-03T19:32:55.314Z · LW(p) · GW(p)

That's the kind of thing I was objecting to. "'Other animals' are capable of feeling pain" is an independent argument for vegetarianism. Adding the AI to the argument doesn't really get you anything, since the AI shouldn't care about it unless it was useful as an argument for vegetarianism without the AI.

It's also still a gerrymandered reference class. "The AI cares about how we treat other beings that feel pain" is just as arbitrary as "the AI cares about how we treat 'other animals'"--by explaining the latter in terms of the former, you're just explaining one arbitrary category by pointing out that it fits into another arbitrary category. Why doesn't the AI care about how we treat all beings who can do mathematics (or are capable of being taught mathematics), or how we treat all beings at least as smart as ourselves, or how we treat all beings that are at least 1/3 the intelligence of ourselves, or even how we treat all mammals or all machines or all lesser AIs?

Replies from: Lumifer, freeze
comment by Lumifer · 2015-09-03T19:46:23.538Z · LW(p) · GW(p)

Heh.

Have you been nice to your smartphone today? Treat your laptop with sufficient respect?

DID YOU EVER LET YOUR TAMAGOTCHI DIE?

comment by freeze · 2015-09-03T20:15:12.022Z · LW(p) · GW(p)

Perhaps it should. Being vegan covers all these bases except machines/AIs, which arguably (including by me) also ought to hold some non-negligible moral weight.

Replies from: Jiro
comment by Jiro · 2015-09-03T20:40:03.791Z · LW(p) · GW(p)

The question is really "why does the AI have that exact limit". Phrased in terms of classes, it's "why does the AI have that specific class"; having another class that includes it doesn't count, since it doesn't have the same limit.

Replies from: freeze
comment by freeze · 2015-09-06T14:58:01.675Z · LW(p) · GW(p)

After significant reflection what I'm trying to say is that I think it is obvious that non-human animals experience suffering and that this suffering carries moral weight (we would call most modern conditions torture and other related words if the methods were applied to humans).

Furthermore, there are a lot of edge cases of humanity where people can't learn mathematics or otherwise are substantially less smart than non-human animals (the young, if future potential doesn't matter that much; or the very old, mentally disabled, people in comas, etc.). I would prefer to live in a world where an AI thinks beings that do suffer but aren't necessarily sufficiently smart matter in general. I would also rather the people designing said AIs agree with this.

Replies from: Jiro
comment by Jiro · 2015-09-07T22:18:18.224Z · LW(p) · GW(p)

I would prefer to live in a world where an AI thinks beings that do suffer but aren't necessarily sufficiently smart matter in general. I would also rather the people designing said AIs agree with this.

But the original argument is that we shouldn't eat animals because AIs would treat us like we treat animals. That argument implies an AI whose ethical system can't be specified or controlled in detail, so we have to worry how the AI would treat us.

If you have enough control over the ethics used by the AI that you can design the AI to care about suffering, then this argument doesn't show a real problem--if you could program the AI to care about suffering, surely you could just program it to directly care about humans. Then we could eat as many animals as we want and the AI still wouldn't use that as a basis to mistreat us.

Replies from: freeze
comment by freeze · 2015-10-16T16:41:07.459Z · LW(p) · GW(p)

Yes, I guess I was operating under the assumption that we would not be able to constrain the ethics of a sufficiently advanced AI at all by simple programming methods.

Though I've spent an extraordinarily large amount of time lurking on this and similar sites, upon reflection I'm probably not the person best poised to carry out a debate about how the hypothetical values of an AI might depend on ours. And indeed this would not be my primary justification for avoiding nonhuman suffering. I still think its avoidance is an incredibly important and effective meme to propagate culturally.

comment by Lumifer · 2015-09-03T15:53:33.101Z · LW(p) · GW(p)

Go start recruiting Jains as AI researchers... X-/

Replies from: freeze
comment by freeze · 2015-09-03T17:28:28.419Z · LW(p) · GW(p)

I don't see why. Jainism is far from the only philosophy associated with veganism.

Replies from: Lumifer
comment by Lumifer · 2015-09-03T18:49:09.769Z · LW(p) · GW(p)

Jainism has a remarkably wide concept of creatures not to be harmed (e.g. specifically including insects). I don't see why you are so focused on the diet.

Replies from: freeze
comment by freeze · 2015-09-03T20:12:45.803Z · LW(p) · GW(p)

Vegans as a general category don't unnecessarily harm and certainly don't eat insects either. I'm not just focused on the diet actually.

Come to think of it, what are we even arguing about at this point? I didn't understand your emoticon there and got thrown off by it.

Replies from: Lumifer
comment by Lumifer · 2015-09-03T20:21:17.922Z · LW(p) · GW(p)

I'm yet to meet a first-world vegan who would look benevolently at a mosquito sucking blood out of her.

I don't think we're arguing at all. That, of course, doesn't mean that we agree.

The emoticon hinted that I wasn't entirely serious.

comment by MugaSofer · 2013-06-15T22:03:03.024Z · LW(p) · GW(p)

This rather assumes we're striving for as many lives as possible, does it not?

I mean, that's a defensible position, but I don't think it should be assumed.

comment by seanwelsh77 · 2013-06-14T02:24:49.675Z · LW(p) · GW(p)

A difficulty of utilitarianism is the question of felicific exchange rates. If you cast morality as a utility function, then you are obliged to come up with answers to bizarre hypothetical questions, like how many ice-creams the life of your firstborn is worth, because you have defined the right in terms of maximized utility.

If you cast morality as a dispute avoidance mechanism between social agents possessed with power and desire then you are less likely to end up in this kind of dead-end but the price of this casting is the recognition that different agents will have different values and that objectivity of morals is not always possible.

Replies from: drnickbone
comment by drnickbone · 2013-06-14T16:54:06.055Z · LW(p) · GW(p)

Agreed, but the OP was talking about "effective altruism", rather than about "effective morality" in general. It's difficult to talk about altruism at all except within some sort of consequentialist framework. And while there is no simple way of comparing goods, consideration of "effective" altruism (how much good can I do for a relatively small amount of money?) does force us to look at and make very difficult tradeoffs between different goods.

Incidentally, I generally subscribe to rule consequentialism though without any simple utility function, and for much the reasons you discuss. Avoiding vicious disputes between social agents with different values is, as I understand it, one of the "good things" that a system of moral rules needs to achieve.

Replies from: seanwelsh77
comment by seanwelsh77 · 2013-06-14T23:05:11.863Z · LW(p) · GW(p)

Rule consequentialism is what I call a multi-threaded moral theory - a blend of deontology and consequentialism, if you will. I advocate multi-threaded theories. The idea that there is a correct single-threaded theory of morality seems implausible. Moral rules to me are a subset of modal rules for survival-focused agents.

To work out if something is right run a bunch of 'algorithms' (in parallel threads if you like) not just one. (No commitment made to Turing computability of said 'algorithms' though...)

So...

#assume virtue ethics

If I do X what virtues does this display/exhibit?

#assume categorical imperative

If everyone does X how would I value the world then?

#assume principle of utility

Will X increase the greatest happiness for the greatest number?

#assume golden rule

If X were done to me instead of my doing X would I accept this?

#emotions

If I do X, will this trigger any emotional reaction (disgust, guilt, shame, embarrassment, joy, ecstasy, triumph, etc.)?

#laws

Is there a law or sanction if I do X?

#precedent

Have I done X before? How did that go?

#relationships

If I do X what impact will that have on relationships I have?

#motives goal

Do I want to do X?

#interest welfare prudence

Is X in my interest? Safe? Dangerous? Etc.

#value

Does X have value? To me, to others, etc.?

Sometimes one or two reasons will provide a slam-dunk decision: it's illegal and I don't want to do it anyway. Other times, the call is harder.

Personally, I find a range of considerations more persuasive than one. I am personally inclined to sentimentalism at the meta-ethical tier and particularism at the normative and applied ethical tiers.

Of course, strictly speaking particularism implies that normative ethical theories are false over-generalizations and that a theory of reasons rests on a theory of values. Values are fundamentally emotive. No amount of post hoc moral rationalization will change that.
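To make the "parallel threads" framing concrete, here is a minimal sketch of how such a battery of checks might be wired up in code. Everything in it is a hypothetical illustration: the check names, the placeholder verdicts, and the way results are collected are invented for the example, not anything the comment specifies.

```python
# Illustrative sketch only: run several moral "checks" side by side
# and collect their verdicts. The checks here are stubs.
from concurrent.futures import ThreadPoolExecutor

def virtue_check(action: str) -> str:
    # Placeholder for: "If I do X, what virtues does this display/exhibit?"
    return "unclear"

def categorical_imperative_check(action: str) -> str:
    # Placeholder for: "If everyone does X, how would I value the world then?"
    return "permissible"

def utility_check(action: str) -> str:
    # Placeholder for: "Will X increase the greatest happiness for the greatest number?"
    return "permissible"

def legality_check(action: str) -> str:
    # Placeholder for: "Is there a law or sanction if I do X?"
    return "impermissible"

CHECKS = [virtue_check, categorical_imperative_check, utility_check, legality_check]

def evaluate(action: str):
    """Run every check (here literally in parallel threads) and return (name, verdict) pairs."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda check: (check.__name__, check(action)), CHECKS))

for name, verdict in evaluate("X"):
    print(f"{name}: {verdict}")
```

The hard part the comment points at, namely how to weigh the verdicts against one another when they disagree, is deliberately left out of the sketch.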

comment by johnlawrenceaspden · 2013-06-17T20:23:09.891Z · LW(p) · GW(p)

Hang on, aren't you valuing the non-existence of an animal as 0 and the existence of a farm animal as some negative number per unit time?

Doesn't that imply that someone who kills farm animals, or prevents their existence in the first place is an altruist?

And what about wild animals, which presumably suffer more than farm animals? Should an altruist try to destroy them too?

Is your ideal final society just humans, plants and pets? I'd be quite unhappy in such a world, I imagine, so do I get it in the neck too?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-17T21:06:54.337Z · LW(p) · GW(p)

Hang on, aren't you valuing the non-existence of an animal as 0 and the existence of a farm animal as some negative number per unit time?

Yes.

~

Doesn't that imply that someone who kills farm animals [is an altruist?]

Only if they kill the farm animals painlessly and only if there aren't any other problems. For example, I don't think strategies like bombing factory farms or sneaking in and killing all their livestock will be net positive. However, if a factory farm owner were to shut down the farm and order a painless slaughter of all the animals, that would be good.

~

[Doesn't that imply that someone who] prevents their existence in the first place is an altruist?

Yes. I suspect vegetarians make an impact by doing that.

~

And what about wild animals, which presumably suffer more than farm animals? Should an altruist try to destroy them too?

At this moment, it seems unclear. Wild animals are definitely a problem. I don't think they suffer more than farm animals, but they might. I'm not sure what the best intervention strategy is, but it's clear that some kind of strategy is needed, both in the short-run and long-run.

~

Is your ideal final society just humans, plants and pets?

Not necessarily.

~

I'd be quite unhappy in such a world, I imagine, so do I get it in the neck too?

Of course not.

Replies from: Desrtopa, johnlawrenceaspden, johnlawrenceaspden
comment by Desrtopa · 2013-06-18T14:44:07.987Z · LW(p) · GW(p)

At this moment, it seems unclear. Wild animals are definitely a problem. I don't think they suffer more than farm animals, but they might. I'm not sure what the best intervention strategy is, but it's clear that some kind of strategy is needed, both in the short-run and long-run.

I've heard a considerable number of people on this site echo the position that wild animals suffer so much their existence must be a net negative. This strikes me as awfully unlikely; they live in the situations they're adapted to, and have the hedonic treadmill principle going for them as well. You can observe at a zoo how many animals can become neurotic when they're removed from the sorts of circumstances they're accustomed to in the wild, even though all their physical needs are accounted for.

Animals are adapted to be reproductively successful in their environments, not to be maximally happy, but considering the effects constant stress can have on the fitness of animals as well as humans, it would be quite maladaptive for them to be unhappy nearly all the time.

Replies from: Jabberslythe, peter_hurford
comment by Jabberslythe · 2013-06-18T19:35:39.233Z · LW(p) · GW(p)

For animals that are r-selected, or, in other words, those that have many offspring in the hopes that some will survive, the vast majority of the offspring die very quickly. Most species of fish, amphibians, and many less complex animals do this. 99.9% of them dying before reaching adulthood might be a good approximation for some species. A painful death doesn't seem worth a brief life as a wild animal.
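To illustrate the arithmetic behind this worry, here is a small sketch. The survival rate is the 99.9% figure above; the welfare values are purely invented for the example and carry no empirical weight.

```python
# Illustrative only: the welfare values below are made up for the example.
survival_rate = 0.001        # ~99.9% of offspring die before reaching adulthood
juvenile_welfare = -1.0      # a brief life ending in a painful death (net negative)
adult_welfare = 20.0         # a decent adult life (net positive)

expected_welfare = survival_rate * adult_welfare + (1 - survival_rate) * juvenile_welfare
print(expected_welfare)      # approximately -0.979
```

On these made-up numbers, expected welfare per offspring comes out negative even though the adult life is assumed to be well worth living, which is the shape of the argument about r-selected species.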

It's true that most people wouldn't be functioning optimally if they were not somewhat happy, and extrapolating this to other animals that seem similar to us in basic emotions, I would agree that an adult wild animal seems like it would live an alright life.

Replies from: Desrtopa
comment by Desrtopa · 2013-06-18T21:39:57.419Z · LW(p) · GW(p)

Most species of fish, amphibians, and many less complex animals do this. 99.9% of them dying before reaching adulthood might be a good approximation for some species. A painful death doesn't seem worth a brief life as a wild animal.

Juveniles of r-selected species tend to have so little neurological development that I think their capacity for experience is probably pretty minimal in any case.

comment by Peter Wildeford (peter_hurford) · 2013-06-18T17:08:32.211Z · LW(p) · GW(p)

I tend to agree. But there's also an awful lot of predation, disease, and starvation in wild habitats. I recommend reading Brian Tomasik's "The Importance of Wild-Animal Suffering". Whether the sum of all of this adds up to net negative lives is something I'm unsure about.

comment by johnlawrenceaspden · 2013-06-18T13:47:46.536Z · LW(p) · GW(p)

Crikey, full marks for honesty! I've never seen the position put quite so starkly before. It sounds a bit like 'the crime is life, the sentence is death'.

I don't see why you wouldn't want me dead, since I'd loathe a world without the wild, and would probably be unhappy. Certainly I would die to prevent it if I could see a way to.

In fact I think I'd sacrifice my own life to save a single (likeable) mammal species if I could. But that's probably too much of an emotional response to discuss rationally.

And what about the vegan argument that you could feed four times as many people if we were all vegans? Would you consider a world of 28 billion people living on rice an improvement?

When you say 'Not necessarily', should I take that to mean 'just humans and plants, actually', or 'just humans and yeast', or have I taken that the wrong way?

If we could wirehead the farm animals, would you become an enthusiastic meat-eater?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-18T17:06:23.914Z · LW(p) · GW(p)

It sounds a bit like 'the crime is life, the sentence is death'.

That's a very misleading way of putting it. The situation is one of dire, unending, inescapable torture for all of life. How would death, or better yet nonexistence, not be preferable?

~

I don't see why you wouldn't want me dead, since I'd loathe a world without the wild, and would probably be unhappy. Certainly I would die to prevent it if I could see a way to.

I'd speculate you wouldn't actually be suicidal in a world without the wild. Furthermore, I certainly wouldn't want you killed just because you're unhappy, because that's reversible. And even if it weren't, I think a policy of killing people for being unhappy would have tremendously bad short-run and long-run consequences.

Also, I don't think elimination of the wild is the only option. Mass welfare plans are potentially feasible. We could eliminate the wild and replicate it with holograms or robots that don't feel pain. Forcing animals to suffer just so you can have a beautiful wild doesn't sound moral to me. And it's possible that a number of species actually live net positive lives already.

Lastly, none of my outside-the-mainstream positions on wildlife need distract from the very real problem of factory farming. I think that case should be dealt with first.

~

In fact I think I'd sacrifice my own life to save a single (likeable) mammal species if I could.

Why? If you care about their existence, why don't you also care about their welfare?

~

And what about the vegan argument that you could feed four times as many people if we were all vegans?

I'm unsure (no position one way or the other yet) on the accuracy of that argument.

~

Would you consider a world of 28 billion people living on rice an improvement?

It depends on a lot of other factors. More people living good lives seems like an improvement to me, all else being equal. I think it would be worth giving up richness and variety in food in order to facilitate this, though obviously that one aspect would be regrettable.

Why do you ask? What are you getting at?

~

When you say 'Not necessarily', should I take that to mean 'just humans and plants, actually', or 'just humans and yeast', or have I taken that the wrong way?

You've taken it the wrong way. You asked if my "ideal final society" includes "just humans, plants and pets". I think there's a strong possibility it can include more than that (i.e. wild animals, robots, etc.).

My ideal final society would be some sort of transhumanist utopia, I think.

~

If we could wirehead the farm animals, would you become an enthusiastic meat-eater?

I'm currently unsure because I don't understand accurately the nature of wireheading. But if one could hypothetically remove all suffering from the factory farming process, I would then morally permit eating meat.

Replies from: johnlawrenceaspden, johnlawrenceaspden, johnlawrenceaspden, CCC, johnlawrenceaspden, johnlawrenceaspden
comment by johnlawrenceaspden · 2013-06-21T13:34:13.329Z · LW(p) · GW(p)

The situation is one of dire, unending, inescapable torture for all of life. How would death, or better yet nonexistence, not be preferable?

Are you sure about this? The lives of our medieval ancestors seem unendurably horrifying to me, and yet many of those people exhibited strong desires to live.

All wild animals exhibit strong desires to live. Why not take them at their word?

comment by johnlawrenceaspden · 2013-06-21T13:53:12.686Z · LW(p) · GW(p)

Why? If you care about their existence, why don't you also care about their welfare?

I think I care about both, but don't ask me where my desires come from. Some weird evolution-thing combined with all the experiences of my life and some randomness, most prob'ly.

comment by johnlawrenceaspden · 2013-06-21T13:49:59.940Z · LW(p) · GW(p)

Lastly, none of my outside-the-mainstream positions on wildlife need distract from the very real problem of factory farming. I think that case should be dealt with first.

I could not agree more! But it does sound like we have very different ideas about what 'dealing with it' means.

I'd like all farms to be like the farm I grew up next to. I was much more of an animal lover as a child than I am now, but even then I thought that the animals next door seemed happy.

Ironically I used to worry about the morality of killing them for food, but it never occurred to me that their lives were so bad that they should be killed and then not eaten.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-21T14:09:35.461Z · LW(p) · GW(p)

I'd like all farms to be like the farm I grew up next to.

I mean, I'd be fine with that.

but it never occurred to me that their lives were so bad that they should be killed and then not eaten.

Rather than saying they shouldn't be killed for food, I'm saying they shouldn't be tortured for food.

comment by CCC · 2013-06-18T18:36:32.281Z · LW(p) · GW(p)

I don't see why you wouldn't want me dead, since I'd loathe a world without the wild, and would probably be unhappy. Certainly I would die to prevent it if I could see a way to.

I'd speculate you wouldn't actually be suicidal in a world without the wild. Furthermore, I certainly wouldn't want you killed just because you're unhappy, because that's reversible. And even if it weren't, I think a policy of killing people for being unhappy would have tremendously bad short-run and long-run consequences.

If a non-human animal is unhappy, you would prefer it to be painlessly killed. If a human is unhappy, you would prefer it not to be painlessly killed.

Am I mis-stating something here? If not, could you please explain the difference?

If we could wirehead the farm animals, would you become an enthusiastic meat-eater?

I'm currently unsure because I don't understand accurately the nature of wireheading. But if one could hypothetically remove all suffering from the factory farming process, I would then morally permit eating meat.

As I understand the concept, it involves connecting a wire to the animal's brain in such a way that it always experiences euphoric pleasure (and presumably disconnecting the parts of the brain that experience suffering).

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-18T19:20:20.310Z · LW(p) · GW(p)

If a non-human animal is unhappy, you would prefer it to be painlessly killed. If a human is unhappy, you would prefer it not to be painlessly killed. Am I mis-stating something here? If not, could you please explain the difference?

Humans (and potentially some nonhumans like dolphins and apes) are special in that they have forward-looking desires, including an enduring desire to not die. I don't want to trample on these desires, so I'd only want the human killed with their consent (though some exceptions might apply).

Nonhuman animals without these forward-looking desires aren't harmed by death, and thus I'm fine with them being killed, provided it realizes a net benefit. (And making a meal more delicious is not a net benefit.)

Replies from: johnlawrenceaspden, johnlawrenceaspden, CCC
comment by johnlawrenceaspden · 2013-06-21T13:26:42.772Z · LW(p) · GW(p)

(And making a meal more delicious is not a net benefit.)

Why not? (blah, blah, googolplex of spectacular meals vs. death of a TB bacillus, blah)

comment by johnlawrenceaspden · 2013-06-21T14:21:37.648Z · LW(p) · GW(p)

Humans (and potentially some nonhumans like dolphins and apes) are special in that they have forward-looking desires, including an enduring desire to not die. I don't want to trample on these desires, so I'd only want the human killed with their consent (though some exceptions might apply).

This is interesting. Even though I usually love life minute to minute, and think I am one of the happiest people I know, I don't have a strong desire to be alive in a year's time, or even tomorrow morning. And yet I constantly act to prevent my death and I fully intend to be frozen, 'just in case'. This seems completely incoherent to me, and I notice that I am confused.

Wild animals go to some lengths to prolong their lives. Whether they are mistaken about the value of their lives or not, what is the difference between them and me?

P.S. I'm not winding you up here. In the context of a discussion about cryonics, ciphergoth found the above literally unbelievable and recommended I seek medical help! After that I introspected a lot. After a year or so of reflection, I'm as sure as I can be that it's true.

Replies from: Morendil
comment by Morendil · 2013-06-21T15:35:57.671Z · LW(p) · GW(p)

I don't have a strong desire to be alive in a year's time, or even tomorrow morning.

If you did have such a desire, how do you suppose it might manifest?

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2013-06-21T15:58:30.497Z · LW(p) · GW(p)

Very similarly to my actual behaviour of course. As I say, I notice that I am confused.

But if you're saying that my behaviour implies that I feel the desire that I don't perceive feeling, then surely we can apply the same reasoning to animals. They clearly want to continue their own lives.

Replies from: None
comment by [deleted] · 2013-06-21T16:23:44.349Z · LW(p) · GW(p)

Very similarly to my actual behaviour of course.

Okay, well, what would such a strong desire feel like, do you think? I take it you say you have an absence of such a desire because something is lacking where you expect it should be if you had the desire. What is that?

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2013-06-21T17:26:26.521Z · LW(p) · GW(p)

Yes, I feel I know what it is to want something. I'm very good at wanting e.g. alcohol, cigarettes, food, intellectual satisfaction, and glory on the cricket field. And I don't feel that sort of desire towards 'future existence'.

I mean, I think that if I were told tomorrow that I had terminal cancer, I'd just calmly start making preparations for a cryonics-friendly suicide and not worry about it too much, even though I think that the chances of cryonics actually working are minute.

Whereas I'm pretty sure that if I get out for a duck in tomorrow's cricket match, I'll feel utterly wretched for at least half an hour, even though it won't matter in the slightest in the grander scheme of things.

And yet, were someone to offer me the choice of 'duck or death', of course I'd take the duck.

It's really weird. I feel like I somehow fail to identify with my possible future selves over more than about a week or so. I've tried most vices and not worried about the consequences much. And yet I never did do myself serious harm, and a few years ago I stopped riding motorcycles because I got scared.

It's as though someone who is not me is taking a lot of my decisions for me, and he's more cautious and more long-termist than me.

Replies from: Morendil, None
comment by Morendil · 2013-06-22T08:35:52.327Z · LW(p) · GW(p)

I don't feel that sort of desire towards 'future existence'.

It sounds as if you use the word "desire" in two different senses - concrete, gut-level craving on the one hand, vs. abstract, plan-making recognition of long-term value on the other hand.

That doesn't sound so unusual - I don't, for instance, feel a burning desire to be alive tomorrow - most of the time. I'm pretty sure that if someone had a gun on me and demanded I hand over my last jar of fig jam, that desire would suddenly develop. But in general, I'm confident anyway that I'll still be here tomorrow.

Hypothesis: desire is usually abstract, in particular when the object of desire is a given, but becomes a feeling when that object is denied or about to be denied.

(I'm rather doubtful that most animals experience "desires" that conform to this dynamic.)

comment by [deleted] · 2013-06-21T17:45:54.740Z · LW(p) · GW(p)

Well, it makes sense to me that future time can't really be an object of desire all on its lonesome. People have spent time trying to work out what is being feared when we fear death, or what is being desired when we desire to live longer. A very common strategy is to say that what we fear is the loss of future goods, or the cancelation of present projects, and what we desire are future goods or the completion of present projects.

So in a sense, I think I'm right there with you in wanting (in some kind of preference ordering way) to live longer, but without having any real phenomenal desire to live longer.

comment by CCC · 2013-06-20T09:26:27.008Z · LW(p) · GW(p)

Ah, thank you. That explains it quite neatly.

I imagine that, ideally, there would be some sort of behavioural test for such forward-looking desires that could be administered; otherwise, I'm not sure that they could be reasonably claimed to be absent.

comment by johnlawrenceaspden · 2013-06-21T14:11:31.324Z · LW(p) · GW(p)

Would you consider a world of 28 billion people living on rice an improvement?

It depends on a lot of other factors. More people living good lives seems like an improvement to me, all else being equal. I think it would be worth giving up richness and variety in food in order to facilitate this, though obviously that one aspect would be regrettable.

Why do you ask? What are you getting at?

I'm trying to see where your morality is coming from. It looks like 'assign a real value to every (multicellular) living creature according to how much fun it's having, add all the values up, and bigger is better'.

Whereas I greatly prefer 'A few people living in luxury in a beautiful vast wilderness' to 'Countless millions living on rice in a world where everything you see is a human creation'. I don't have a theory to explain why. I just do.

I'm sure that that's my evolved animal nature speaking about 'where is the best place to set up home'. And probably I'm dutchbookable, and maybe by your lights I'm evil.

But it seems odd to try to come up with new desires according to a theory. I'd rather go with the desires I've already got.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-21T14:17:06.296Z · LW(p) · GW(p)

I'm trying to see where your morality is coming from. It looks like 'assign a real value to every (multicellular) living creature according to how much fun it's having, add all the values up, and bigger is better'.

That sounds about right.

Obviously, so long as we have different terminal values, our conclusions will be different.

comment by johnlawrenceaspden · 2013-06-21T14:23:59.350Z · LW(p) · GW(p)

I'm currently unsure because I don't understand accurately the nature of wireheading. But if one could hypothetically remove all suffering from the factory farming process, I would then morally permit eating meat.

All suffering? Even, say, the chance of the farmer getting a torn nail? Why such high standards in this case?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-21T14:28:45.879Z · LW(p) · GW(p)

The more suffering that could be removed, the better, but eventually you'll hit a point where removing more suffering is no longer feasible or worth focusing on, because there will be suffering easier to remove elsewhere.

Really, what I'm looking for is the point where the net suffering required to produce the food is equal to or less than the net benefit the production of the food provides.

comment by johnlawrenceaspden · 2013-06-18T13:52:08.912Z · LW(p) · GW(p)

Voting up, by the way. Very thought-provoking. I have clever vegan friends I must discuss this with.

comment by ThrustVectoring · 2013-06-16T03:40:33.264Z · LW(p) · GW(p)

As far as improving the world through behavioral changes goes, advertising e-cigarettes is probably much more cost-effective than advertising vegetarianism. You could even target it to smokers (either through statistics and social information, or just by targeting low-income people in general and restaurant, fast-food, and retail workers in particular).

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-16T04:57:27.497Z · LW(p) · GW(p)

Not that I necessarily doubt you, but what makes you think that?

Replies from: ThrustVectoring
comment by ThrustVectoring · 2013-06-18T02:41:57.578Z · LW(p) · GW(p)

What hurts smokers isn't nicotine exactly; it's all the other stuff that gets into their lungs when they burn tobacco. A big part of why quitting smoking is hard is because nicotine helps form habits - specifically, the habit of getting out a cigarette, lighting it, and inhaling. E-cigarettes push the same habit buttons as tobacco cigarettes, so it's much easier for smokers to go tobacco-free and vastly improve their health and quality of life by switching over to inhaling the vapor of a mix of nicotine, propylene glycol, and flavorings.

Replies from: RyanCarey, peter_hurford
comment by RyanCarey · 2013-06-18T03:05:04.575Z · LW(p) · GW(p)

And neither that I doubt you, but what makes you think it's cost-effective?

Replies from: ThrustVectoring
comment by ThrustVectoring · 2013-06-18T16:21:53.240Z · LW(p) · GW(p)

Ah, I misunderstood your question. It's more on the benefit side of things - the effectiveness of ads is within an order of magnitude, but you get human QALYs instead of preventing cruelty to chickens.

comment by Peter Wildeford (peter_hurford) · 2013-06-18T04:02:44.803Z · LW(p) · GW(p)

What RyanCarey said. I understand the principle behind E-cigarettes and support them, but I'm not yet convinced that advocating for them would produce more net welfare improvement per dollar than advocating for people to eat less meat.

Replies from: ThrustVectoring
comment by ThrustVectoring · 2013-06-18T16:25:33.039Z · LW(p) · GW(p)

It depends on the relative effectiveness of ads and the conversion ratio you're willing to accept between human and animal suffering. So my statement can be reduced more to 'I don't think chicken suffering is important'.

I don't think that some animals are capable of suffering, but I can't think of how to make my point without talking about animal suffering. I mean, how many rocks would you be willing to break for a QALY? That's about how many chickens I would be willing to kill.

Replies from: Raemon, Jabberslythe, peter_hurford
comment by Raemon · 2013-06-18T16:55:44.041Z · LW(p) · GW(p)

I mean... that's a theoretically coherent statement, but isolating "e-cigarettes" as a thing to talk about instead of just saying "I don't value chickens" seems odd.

What is it about humans you value? Do you value humans with extreme retardation, or a hypothetical inability to form relationships?

comment by Jabberslythe · 2013-06-18T19:00:39.908Z · LW(p) · GW(p)

Most people believe that chickens suffer. They seem to have all the right parts of the brain and the indicative behaviors and everything. What's your theory that says that humans do but chickens don't?

Replies from: None, ThrustVectoring
comment by [deleted] · 2013-06-18T19:58:08.530Z · LW(p) · GW(p)

Thrust said he didn't care about chickens suffering, not that they don't.

One question that doesn't seem to get asked in these discussions is, if chickens have this certain mental machinery doing certain things when I hurt them, why should I care, given that I don't already? Is there a sequence of value comparisons showing that such a non-preference is incoherent? Or a moral argument that I am not considering? If not, I'd rather just follow my actual preferences.

Replies from: Jabberslythe
comment by Jabberslythe · 2013-06-18T20:25:44.158Z · LW(p) · GW(p)

ThrustVectoring said:

I don't think that some animals are capable of suffering

From what Thrust has said, I think it's ambiguous whether he thinks animals can't suffer and doesn't care about them for that reason, or whether he just doesn't care about animal suffering, as you describe. Or, more likely, he is in some middle state.

As to your second point, yes, that's the approach. And that largely seems to be what is happening when it comes up in discussion here.

Replies from: ThrustVectoring
comment by ThrustVectoring · 2013-06-19T17:12:38.308Z · LW(p) · GW(p)

It's kind of both. If a chicken is in pain, that doesn't bother me that much. Also, I don't think that chickens have the mental apparatus necessary to suffer like people can suffer.

comment by ThrustVectoring · 2013-06-19T17:11:21.986Z · LW(p) · GW(p)

People tend to read a lot more into behavior than is really there. I mean, ants run away when you slam your fist down on the counter next to them, and it sure looks like they're scared, but that's more a statement about your mind than the ants'.

I mean, chickens are largely still functional without a head. Yes, there's something going on in a chicken's brain. There isn't anything worth celebrating going on in there, though.

Replies from: KatieHartman
comment by KatieHartman · 2013-06-21T17:45:58.022Z · LW(p) · GW(p)

For the record, the chicken that survived had retained most of the brainstem. He was able to walk ("clumsily") and attempted some reflexive behaviors, but he was hardly "functional" to anyone who knows enough about chickens to assume that they do more than walk and occasionally lunge at the ground.

The chicken's ability to survive with only the brain stem isn't shocking. Anencephalic babies can sometimes breathe, eat, cry, and reflexively "respond" to external stimuli. One survived for two and a half years. This was a rare case, but so was the chicken - there were other attempts to keep decapitated chickens alive, and none have been successful.

This isn't to say that we don't have a tendency to anthropomorphize animals or treat reflexive behaviors as meaningful - we do. But pointing that out isn't where the conversation ends. Chickens are an easy target because common knowledge dictates that they're stupid animals, because most people haven't spent any substantial amount of time with them and assume there isn't anything particularly interesting about their behavior, and because we have a vested interest in believing that there's nothing of value going on in their brains.

comment by Peter Wildeford (peter_hurford) · 2013-06-18T17:13:43.658Z · LW(p) · GW(p)

how many rocks would you be willing to break for a QALY? That's about how many chickens I would be willing to kill.

Why don't you think chickens suffer? This is against The Cambridge Declaration on Consciousness and the information gathered here (with citations) on this admittedly biased website.

comment by waveman · 2013-06-13T02:34:38.703Z · LW(p) · GW(p)

It would have been better, I think, to submit an argument for veganism (or vegetarianism) for scrutiny here first. Then an argument about the best way to promote it. As it stands, the two issues are confused.

My own view is that for me, the productivity hit and adverse health impact outweigh the benefits. (vegan diet contributed to the loss of sight in my left eye among other things).

If we stop eating meat, these animals will not thereafter frolic gaily in the meadow. They will not exist at all. The merits of veganism make for a big enough topic on their own. You may also want to justify why this is a priority issue.

I am concerned about attempts to co-opt LW into other causes that seem to me not to be rational at their core.

Replies from: Kaj_Sotala, peter_hurford
comment by Kaj_Sotala · 2013-06-13T11:04:12.133Z · LW(p) · GW(p)

My personal reason for pursuing vegetarianism (and ultimately veganism) is simple: I want the result of me having existed, as compared to an alternative universe where I did not exist, to be less overall suffering in the world. If I eat meat for my whole life, I'll already have contributed to the creation of such a vast amount of suffering that it will be very hard to do anything that will reliably catch up with that. Each day of my life, I'll be racking up more "suffering debt" to pay off, and I'd rather not have my mere existence contribute to adding more suffering.

Replies from: Kawoomba, Vladimir_Nesov, MTGandP
comment by Kawoomba · 2013-06-13T11:30:58.399Z · LW(p) · GW(p)

I want the result of me having existed, as compared to an alternative universe where I did not exist, to be less overall suffering in the world.

That's probably the abridged version, because if that were the actual goal, a doomsday machine would do the trick.

Replies from: army1987, Kaj_Sotala
comment by A1987dM (army1987) · 2013-06-14T21:08:02.249Z · LW(p) · GW(p)

If you count pleasure as negative suffering...

comment by Kaj_Sotala · 2013-06-13T12:24:48.360Z · LW(p) · GW(p)

That's probably the abridged version

Yes.

Replies from: Kawoomba
comment by Kawoomba · 2013-06-13T17:38:41.941Z · LW(p) · GW(p)

Do you have a fleshed-out version formulated somewhere? *tries to hide iron fireplace poker behind his back*

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-06-13T19:49:11.975Z · LW(p) · GW(p)

No. The "fleshed-out version" is rather complex, incomplete, and constantly-changing, as it's effectively the current compromise that's been forged between the negative utilitarian, positive utilitarian, deontological, and purely egoist factions within my brain. It has plenty of inconsistencies, but I resolve those on a case-by-case basis as I encounter them. I don't have a good answer to the doomsday machine, because I currently don't expect to encounter a situation where my actions would have considerable influence on the creation of a doomsday machine, so I haven't needed to resolve that particular inconsistency.

Of course, there is the question of x-risk mitigation work and the fact that e.g. my work for MIRI might reduce the risk of a doomsday machine, so I have been forced to somewhat consider the question. My negative utilitarian faction would consider it a good thing if all life on Earth were eradicated, with the other factions strongly disagreeing. The current compromise balance is based around the suspicion that most kinds of x-risk would probably lead to massive suffering in the form of an immense death toll and then a gradual reconstruction that would eventually bring Earth's population back to its current levels, rather than all life on the planet going extinct. (Even for AI/Singularity scenarios there is great uncertainty and a non-trivial possibility for such an outcome.) All my brain-factions agree that this would be a Seriously Bad scenario, so there is currently an agreement that work aimed at reducing the likelihood of this scenario is good, even if it indirectly influences the probability of an "everyone dies" scenario in one way or another. The compromise is only possible because we are currently very unsure of what would have a very strong effect on the probability of an "everyone dies" scenario.

I am unsure of what would happen if we had good evidence of it really being possible to strongly increase or decrease the probability of an "everyone dies" scenario: with the current power balances, I expect that we'd just decide not to do anything either way, with the negative utilitarian faction being strong enough to veto attempts to save humanity, but not strong enough to override everyone else's veto when it came to attempts to destroy humanity. Of course, this assumes that humanity would basically go on experiencing its current levels of suffering after being saved: if saving humanity would also involve a positive Singularity after which it was very sure that nobody would need to experience involuntary suffering anymore, then the power balance would very strongly shift to favor saving humanity.

comment by Vladimir_Nesov · 2013-06-13T14:14:21.053Z · LW(p) · GW(p)

I want the result of me having existed, as compared to an alternative universe where I did not exist, to be...

This seems like an arbitrary distinction. The value relevant to your ongoing decisions is in opportunity cost of the decisions (and you know that). Why take the popular sentiment seriously, or even merely indulge yourself in it, when it's known to be wrong?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-06-13T19:51:50.813Z · LW(p) · GW(p)

It is indeed wrong, but it seems to mostly produce the same recommendations as framing the issue in terms of opportunity costs while being more motivating. "Shifting to vegetarianism has a high expected suffering reduction" doesn't compel action in nearly the same way as "I'm currently racking up a suffering debt every day of my life" does.

comment by MTGandP · 2013-06-15T22:46:05.483Z · LW(p) · GW(p)

I'll already have contributed to the creation of such a vast amount of suffering that it will be very hard to do anything that will reliably catch up with that.

Actually, it's pretty easy: just donate enough money to organizations like Vegan Outreach such that you're confident that you have caused the creation of a new vegetarian/vegan.

comment by Peter Wildeford (peter_hurford) · 2013-06-13T03:26:22.327Z · LW(p) · GW(p)

It would have been better, I think, to submit an argument for veganism (or vegetarianism) for scrutiny here first. Then an argument about the best way to promote it. As it stands, the two issues are confused.

Perhaps I'm a bad advocate, but I don't think there is an "argument" for veganism/vegetarianism, outside what you would see in the pamphlets, videos, or "Why Eat Less Meat?" linked within. I suppose I could upload my "Why Eat Less Meat" piece?

Another problem I'm having is that there are like sixty million objections that someone might raise against veganism/vegetarianism, and it would be impossible to answer them all.

~

My own view is that for me, the productivity hit and adverse health impact outweigh the benefits. (vegan diet contributed to the loss of sight in my left eye among other things).

I'm not going to be a lecturer on vegan health or say you "did it wrong", but the eye thing definitely strikes me as an atypical result. I'm doing a vegetarian diet right now with no health or productivity demerits.

~

If we stop eating meat, these animals will not thereafter frolic gaily in the meadow. They will not exist at all.

Of that, I'm obviously aware. I count that as suffering reduced.

~

The merits of veganism make for a big enough topic on their own. You may also want to justify why this is a priority issue.

It's potentially a priority issue if it can be accomplished so cheaply; hence the cost-effectiveness estimate. I wasn't even here to argue that veganism was a global priority. Right now, I think at best it would be in the "top five". Even if this essay were read as an advocacy piece instead of an evaluation piece, it's advocating for philanthropy toward vegetarianism rather than vegetarianism itself.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-13T05:56:31.239Z · LW(p) · GW(p)

I have to agree with waveman that we should establish that vegetarianism is a worthwhile cause before we devote LW posts to figuring out how best to promote it. We could, in theory, investigate how best to promote all sorts of things, but let's not actually advocate promoting arbitrary values or ideologies that may or may not be good ideas. Doing so seems like a straightforward way of wasting our time and doing actual harm (by, among other things, creating the impression that the cause in question has been accepted by the LW community as being worthwhile). (e.g., "What is the best way to get the word out about cheese-only diets?" implicates that we've already determined cheese-only diets to be not only a good idea, but worth actively advocating.)

Even if this essay were read as an advocacy piece instead of an evaluation piece, it's advocating for philanthropy toward vegetarianism rather than vegetarianism itself.

It seems nonsensical to view advocacy for philanthropy toward vegetarianism as different from advocacy for vegetarianism itself, if you take the view (as you seem to do) that vegetarianism is a moral issue.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-13T06:06:16.044Z · LW(p) · GW(p)

we should establish that vegetarianism is a worthwhile cause before we devote LW posts to figuring out how best to promote it.

I don't know how to establish it as a worthwhile cause to those who don't already value nonhuman animals, so I skipped that step.

For those who do already value nonhuman animals, though, I had hoped this essay was such an evaluation, given that it is a cost-effectiveness estimate and evidence survey. It's not a comparison of advocacy efforts, since no other advocacy efforts are considered.

-

It seems nonsensical to view advocacy for philanthropy toward vegetarianism as different from advocacy for vegetarianism itself, if you take the view (as you seem to do) that vegetarianism is a moral issue.

That's true. I suppose one could consider advocating vegetarianism without personally becoming vegetarian, though that would be somewhat hypocritical.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-13T06:44:03.020Z · LW(p) · GW(p)

I don't know how to establish it as a worthwhile cause to those who don't already value nonhuman animals, so I skipped that step.

I do sympathize with the difficulty of persuading someone with whom you do not share the relevant values, but I'm afraid I can't help but object to "this part of the argument is hard, so I skipped it".

Changing values is not impossible. I don't think valuing nonhuman animals is a terminal value; the terminal value in question probably looks something more like "valuing the experiences of minds that are capable of conscious suffering" or something to that general effect. (That is, if we insist on tracing this preference to a value per se, rather than assuming that it's just signaling or somesuch.) And most people here do, I think, place at least some importance on reflective equilibrium, which is a force for value change.

The problem I have with your approach (and I hope you'll forgive me for this continued criticism of what is, to be truthful, a fairly interesting post) is that it's a nigh-fully-general justification for advocating arbitrary things, like so:

"Here is an analysis of how to most cost-effectively promote the eating of babies. I don't know how to establish baby-eating as a worthwhile cause for people who don't already think that eating babies is a good idea, so I skipped that step."

Ditto " ... saving cute kittens from rare diseases ...", ditto " ... reducing the incidence of premarital sex ...", ditto pretty much anything ever.

What I would be curious to see is whether the LW populace perhaps already thinks that vegetarianism is a settled question. If so, my objections might be misplaced. Was this covered in one of the surveys? Hmm...

Edit: Aha.

VEGETARIAN:
No: 906, 76.6%
Yes: 147, 12.4%
No answer: 130, 11%

For comparison, 3.2% of US adults are vegetarian.

Replies from: davidpearce
comment by davidpearce · 2013-06-13T17:36:48.277Z · LW(p) · GW(p)

SaidAchmiz, I wonder if a more revealing question would be to ask if / when in vitro meat products of equivalent taste and price hit the market, will you switch? Lesswrong readers tend not to be technophobes, so I assume the majority(?) of lesswrongers who are not already vegetarian will make the transition. However, you say above that you are "not interested in reducing the suffering of animals". Do you mean that you are literally indifferent one way or the other to nonhuman animal suffering - in which case presumably you won't bother changing to the cruelty-free alternative? Or do you mean merely that you don't consider nonhuman animal suffering important?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-13T17:56:25.509Z · LW(p) · GW(p)

Do you mean that you are literally indifferent one way or the other to nonhuman animal suffering - in which case presumably you won't bother changing to the cruelty-free alternative? Or do you mean merely that you don't consider nonhuman animal suffering important?

In (current) practice those are the same, as you realize, I'm sure. My attitude is closest to something like "no amount of animal suffering adds up to any amount of human suffering", or more generally "no amount of utility to animals [to the extent that the concept of utility to a non-sapient being is coherent] adds up to any amount of utility to humans". However, note that I am skeptical of the concept of consistent aggregation of utility across individuals in general (and thus of utilitarian ethical theories, though I endorse consequentialism), so adjust your appraisal of my views accordingly.

In vitro meat products could change that; that is, the existence of in vitro meat would make the two views you listed meaningfully different in practice, as you suggest. If in vitro meat cost no more than regular meat, and tasted no worse, and had no worse health consequences, and in general if there was no downside for me to switch...

... well, in that case, I would switch, with the caveat that "switch" is not exactly the right term; I simply would not care whether the meat I bought were IV or non, making my purchasing decisions based on price, taste, and all those other mundane factors by means of which people typically make their food purchasing decisions.

I guess that's a longwinded way of saying that no, I wouldn't switch exclusively to IV meat if doing so cost me anything.

comment by Shmi (shminux) · 2013-06-12T21:21:14.119Z · LW(p) · GW(p)

I start with the claim that it's good for people to eat less meat, whether they become vegetarian -- or, better yet, vegan -- because this means less nonhuman animals are being painfully factory farmed.

If your reason for vegetarianism is mainly prevention of animal suffering, shouldn't you be concentrating on ethical farming? Or are you against raising a happy cow and painlessly killing it some time later?

If you value the welfare of nonhuman animals from a consequentialist perspective

if you value happy animals, then you ought to value happy farm animals, and more vegetarianism results in fewer of those.

and I personally come up with a cost-effectiveness estimate of $0.02 to $65.92

4-digit precision on the accuracy equivalent of 0.1 sigfig? If so, then it's hard for me to take any of your calculations seriously.

Replies from: peter_hurford, Kaj_Sotala, Watercressed
comment by Peter Wildeford (peter_hurford) · 2013-06-13T01:36:43.844Z · LW(p) · GW(p)

If your reason for vegetarianism is mainly prevention of animal suffering, shouldn't you be concentrating on ethical farming? Or are you against raising a happy cow and painlessly killing it some time later?

I don't think so. I wouldn't be against happy cows with painless deaths, but I think achieving that outcome, especially via the advocacy available to me, is very unlikely.

if you value happy animals, then you ought to value happy farm animals, and more vegetarianism results in fewer of those.

I don't understand. This assumes there are happy farm animals. If any farm animals are happy, they're certainly in the extreme minority.

Replies from: SaidAchmiz, Douglas_Knight
comment by Said Achmiz (SaidAchmiz) · 2013-06-13T17:02:23.355Z · LW(p) · GW(p)

It's not clear to me that there are happy animals at all, for some species. Are there happy chickens? Happy cows? Where? (Can chickens or cows even be "happy" in the sense we understand happiness?)

Or is the conclusion that since the existence of these animals can only result in suffering, the outcome where farm animals stop existing is desirable?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-13T19:36:47.021Z · LW(p) · GW(p)

Or is the conclusion that since the existence of these animals can only result in suffering, the outcome where farm animals stop existing is desirable?

I'm unsure if there are happy animals at all. Wild animal suffering also sounds pretty bad. But, at least for factory farmed animals, I agree that "the existence of these animals can only result in suffering, the outcome where farm animals stop existing is desirable".

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-13T20:07:06.790Z · LW(p) · GW(p)

Yeah, wild animal suffering is the other thing I was thinking about. Anyway, that conclusion sounds pretty reasonable (given caring about animal suffering in the first place)... except that it seems to lead to wanting the entire animal kingdom to stop existing (or most of it, anyway). I'm not sure that's a reductio ad absurdum, or if it is, what it's a reductio of, exactly (caring about animal suffering? caring about suffering in general? utilitarianism?!), but it should at least give us pause. I don't think this is a bullet I would bite.

For what it's worth, given that I do care about humans, and given that some humans seem to be very bothered by the suffering of animals, I would certainly value the reduction of animal suffering for the purpose of making people feel better — although I don't care about this enough to willingly incur significant personal or societal costs in the bargain. So, for instance, if in vitro meat became available, it tasted the same, cost no more (or only a little more), and made a lot of people feel better, that would, for me, be an important thing to consider.

But I think I value the existence of animal species, and ecologies, for their own sake. I'm not sure how to describe this; scientific curiosity? Valuing biological diversity? In any case, I think that, all else being equal, the extinction of entire kinds of creatures would be a sad outcome. (Although I can see a logical-extreme sort of counterargument: what if we create a new species explicitly for the purposes of easy torturability, and then torture them? They've been created from whole cloth simply to give us something to inflict pain on! Should we mourn their extinction? These hypothetical victimcows might be compared to actual cows in relevant ways. Of course, this argument does not work in the case of wild animal species.)

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-17T04:33:46.166Z · LW(p) · GW(p)

except that it seems to lead to wanting the entire animal kingdom to stop existing (or most of it, anyway).

I'm not sure that has to be the case. One could aim to provide adequate welfare for the entire animal kingdom, though that would require significant resources. Similarly, I think some human lives aren't worth living, but I don't think the proper response is genocide.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-17T06:00:29.613Z · LW(p) · GW(p)

You said:

But, at least for factory farmed animals, I agree that "the existence of these animals can only result in suffering, the outcome where farm animals stop existing is desirable".

I was merely extrapolating. Or do you think there are relevant differences between wild animals and domesticated ones, such that we could provide welfare, as it were, for wild animals (without them having to hunt/kill anything, I surmise is the implication), but not for domesticated ones? I mean, both of those scenarios are light-years away from feasibility, so I can only assume we're talking about some in-principle difference. Are we?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-17T17:48:26.570Z · LW(p) · GW(p)

I think there is a fundamental difference between wild animals and factory farmed animals -- if factory farming were to stop, there would no longer be any factory farmed animals. They are created specifically for that purpose. One can't provide welfare for factory farmed animals without stopping factory farming, and then there wouldn't be any factory farmed animals.

Though, I suppose, one could raise animals in ideal welfare conditions and then painlessly kill them for food. I would be fine with that.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-17T18:00:28.355Z · LW(p) · GW(p)

There's something strange with your terms there... are you using "factory farmed" as a descriptor of... kinds (species, etc.) of animals? Or animals that happen to exist in conditions of factory farming? I am confused.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-17T18:47:27.737Z · LW(p) · GW(p)

Factory farmed animals are animals that happen to exist in conditions of factory farming. And "factory farming" is meant to convey not just mass production, but also the present quality of farming with regard to animal welfare.

comment by Douglas_Knight · 2013-06-13T16:51:24.972Z · LW(p) · GW(p)

Do you see a difference between factory farming and other farming?

This comment seems to say that you don't. The original post, by bothering to mention factory farming, asserts that you do. But the rest of the post does not seem to reflect any conclusions drawn from such a belief.

If you are a consequentialist, not a deontologist, and if non-factory animals suffer less than factory animals, you should take that into account, even if you believe that their lives are net negatives. But I think you should introspect about whether you really are a consequentialist.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-13T20:09:17.874Z · LW(p) · GW(p)

Do you see a difference between factory farming and other farming?

Sort of. Different farms treat animals differently, and there are certainly some farms that treat animals well. But they're all small, local farms and not a source of the majority of the meat.

Perhaps you're suggesting that instead of pro-vegetarianism advocacy, we do pro-"farms that treat animals well" advocacy. The problem is, I suspect, it would take an awful, awful lot of money to first scale a farm large enough to get meat to everyone while still treating all the animals well.

If you are a consequentialist, not a deontologist, and if non-factory animals suffer less than factory animals, you should take that into account, even if you believe that their lives are net negatives.

Can you explain how it's not currently being taken into account and what effect you think it would have on the calculation? And why it might indicate some sort of hidden deontology on my part?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-06-13T22:37:13.185Z · LW(p) · GW(p)

You seem driven by thresholds, like a good life and especially a good death, and you do not seem interested in replacing a life of high suffering with a life of low suffering, just because the life of low suffering is still a net negative. Such thresholds tend to be characteristic of deontologists.

In particular, I observed this on the thread about fish. Here I asked you about replacing worse farms with better but still bad farms and your response was that truly good farms are too expensive, ignoring the possibility of farms that are full of suffering, just lower levels of suffering.

Maybe it is implausible to change how farming is done (though I think you are mistaken about the diversity of practices), but getting people to switch from pork to beef or from chicken to fish seems quite plausible to me.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-06-17T04:38:55.677Z · LW(p) · GW(p)

You seem driven by thresholds, like a good life and especially a good death, and you do not seem interested in replacing a life of high suffering with a life of low suffering, just because the life of low suffering is still a net negative.

What makes me look like I'm interested in thresholds? Replacing a life of high suffering with a life of low suffering is good. Replacing that same life of high suffering with a life of no suffering is even better.

~

Here I asked you about replacing worse farms with better but still bad farms and your response was that truly good farms are too expensive, ignoring the possibility of farms that are full of suffering, just lower levels of suffering.

I don't understand how I ignored your point. Could you re-explain?

~

Maybe it is implausible to change how farming is done (though I think you are mistaken about the diversity of practices), but getting people to switch from pork to beef or from chicken to fish seems quite plausible to me.

I've strongly considered convincing people to shift away from chicken, eggs, and fish to other forms of meat, given arguments around suffering per kg of meat demanded. This is also why I'm personally a vegetarian and not a vegan.

comment by Kaj_Sotala · 2013-06-13T10:55:20.516Z · LW(p) · GW(p)

If your reason for vegetarianism is mainly prevention of animal suffering, shouldn't you be concentrating on ethical farming? Or are you against raising a happy cow and painlessly killing it some time later?

In principle, it might be better to support companies making ethical meat than to entirely boycott meat. In practice, companies lie about their practices all the time, and things that are marketed as something often turn out to be something else entirely. At least for me personally, becoming certain enough about the ethicalness of a meat product that I'd feel confident about buying it would require far more time and energy than just achieving the certainty by avoiding meat overall.

comment by Watercressed · 2013-06-12T22:15:39.829Z · LW(p) · GW(p)

It's not really fair to call a range of $0.02 to $65.92 four-digit precision just because the upper bound was written with four digits.

comment by seanwelsh77 · 2013-06-14T01:06:20.632Z · LW(p) · GW(p)

I have no argument with your desire to establish the most cost-effective way to get the most bang for your bucks. I simply do not accept the premise that it is wrong to eat meat.

Consider the life of a steer in Cape York. It is born the property of a grazier. It is given health care of a sort (dips, jabs, anti-tick treatment). It lives a free life grazing for a few hundred days in fenced enclosures protected by the grazier's guns from predators. Towards the end, it is mustered by jackaroos and jillaroos, shipped in a truck to the lush volcanic grasslands of the Atherton Tableland to be fattened up. On its last day, it is trucked to an abattoir to be stunned and killed.

If the grazier did not exist the steer would not exist. Now I could make some argument about 'utility' but I won't. And indeed there is a distinction between the factory farming you object to (grain-fed beef) versus older ways (grass-fed beef).

I would not like to be given this treatment myself but I am not a domesticated animal. I am not a beast or a dumb animal. I am a top predator. We have evolved to prefer meat and vegetables in our diet. We have arranged the ecosystem to satisfy our desire for meat. I value steers dead, butchered and then grilled or roasted. I have no interest, rational, emotional or otherwise, in funding a life for free steers in the wild.

A fundamental political problem for vegan advocacy is that people enjoy meat and that it is 'natural' to eat it. Now being natural is not a right maker but going against nature and being dependent on vitamin supplements to avoid anemia is not a right maker either. Stick people in the bush with no food and a bow and arrow and they will figure out how to shoot cute kangaroos and koalas quick smart rather than starve. Submit homo sapiens to enough stress and those predator instincts and drives that are suppressed in the civilized ecosystem come to the fore.

Desire drives us all. An argument that goes against basic human desire goes uphill. Vegetarian advocacy has been around for a long time (since Buddha, Mahavira), as have the moral arguments. Alas, human moral functionality is limited. Your research dollars would be better spent on finding an ethical alternative to meat that tastes way better than soy burgers. When vegans can provide a product that rivals that of the butchers in taste and appeal, then they will succeed.

Until then, they are a tiny minority that gains recruits and suffers defections at more or less similar rates. In the meantime, I prefer organic and free-range products.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-14T01:51:48.844Z · LW(p) · GW(p)

I have no interest, rational, emotional or otherwise, in funding a life for free steers in the wild.

Without engaging with any of your other points, I'd just like to point out that the OP considers the good outcome to be one where farm animals don't exist at all, rather than one where they're free in the wild. (Because if animals don't exist then they can't suffer.)

Replies from: seanwelsh77, Richard_Kennaway
comment by seanwelsh77 · 2013-06-14T02:14:44.177Z · LW(p) · GW(p)

Quite so. The OP, I think, is more concerned about factory farming than about the more traditional grazing approaches to cattle. But I think that if you push a morality too far up against the hill of human desire, it will collapse. Many activists overestimate the "care factor". My ability to care is pretty limited. I can't and won't care about 7 billion other humans on this planet except in the thinnest and most meaningless senses (i.e. stated preferences in surveys, which are near worthless), let alone the x billion animals. In terms of revealed preferences (where I put my dollars and power), I favour the near and the dear over the stranger and the genetically unrelated.

comment by Richard_Kennaway · 2013-06-14T08:13:43.248Z · LW(p) · GW(p)

(Because if animals don't exist then they can't suffer.)

Ex-ter-min-ate! Ex-ter-min-ate!! EX-TER-MIN-ATE!!!

That explains the Daleks. They're failed FAIs that were built to eliminate suffering from the universe.

Replies from: elharo
comment by elharo · 2013-06-16T13:03:59.586Z · LW(p) · GW(p)

Fanboy mode on:

The Daleks are well established as natural, non-human, sentient biological organisms inside armor. Details have varied over the years, but I don't think they've ever qualified as AIs.

Replies from: wedrifid
comment by wedrifid · 2013-06-22T01:54:21.048Z · LW(p) · GW(p)

The Daleks are well established as natural, non-human, sentient biological organisms inside armor. Details have varied over the years, but I don't think they've ever qualified as AIs.

They have always been biological, but they are also typically genetically engineered at a rather fundamental level to produce desired psychological traits. While I would not use "AIs" myself in such circumstances, I see some merit in distinguishing the biological-vs-electronic distinction from the natural-vs-artificial-intelligence distinction.

comment by ThrustVectoring · 2013-06-13T01:54:34.743Z · LW(p) · GW(p)

I find this whole idea pretty abhorrent. You're pretty much advocating spending money to make people either feel guilty about what they choose to eat, or change their diet in ways that they aren't currently willing to do.

You don't cooperate on the prisoner's dilemma with a rock. A chicken isn't that much different than a rock.

Replies from: fubarobfusco
comment by fubarobfusco · 2013-06-13T05:37:07.810Z · LW(p) · GW(p)

You're pretty much advocating spending money to make people either feel guilty about what they choose to eat, or change their diet in ways that they aren't currently willing to do.

This seems like a fully general argument against all moral advocacy whatsoever.