Wirehead your Chickens

post by Shmi (shminux) · 2018-06-20T05:49:29.344Z · LW · GW · 53 comments

TL;DR: If you care about farm animal welfare, work on minimizing actual animal suffering, not a human proxy for animal suffering.

Epistemic status: had a chat about this with a couple of local EA enthusiasts who attended EA Global 2018 in San Francisco, and apparently this was not high on the agenda. I have only done a cursory search online about this, and nothing of note came up.

When you read about farm animal welfare, what generally comes up is vegetarianism/veganism, humane treatment of farm animals, and sometimes vat-grown meat. This emphasis is quite understandable emotionally. Cows, pigs, and chickens in industrial farms are in visible, severe discomfort for most of their lives, which are eventually cut short long before the end of their natural lifespan, often in painful and gruesome ways.

An animal welfare activist would ask themselves a question like "what is it like to be a chicken in a chicken farm?" and end up horrified. Their obvious solutions are those outlined above: have fewer farm animals and treat them "humanely." Less conventional approaches that reduce animal suffering get an immediate instinctive pushback, because we would not find them acceptable for ourselves. This is what I call the human proxy for animal suffering. Maybe there is a more standard name for this kind of anthropomorphizing? Anyway, let's list a few obvious approaches:

  • breed animals with smaller brains
  • breed animals who enjoy pain, not suffer from it
  • identify and surgically or chemically remove the part of the brain that is responsible for suffering
  • administer painkillers, such as opioids
  • at birth, amputate the non-essential body parts that would give the animals discomfort later in life
  • breed animals that want to be eaten, like the Ameglian Major Cow from the Hitchhiker's Guide to the Galaxy

Many of these are probably way easier and more practical than shaming people into giving up tasty steak. But our morality immediately fights back, at least for most of us. "What do you mean, cut off a baby chicken's legs so it does not have leg pain later? You monster!"

Because most people do not truly care about reducing animal suffering, they care about a different metric altogether, a visible human proxy for animal suffering that they find immediately relatable. And so it appears that there is virtually no research or funding into real suffering reduction, even though we know these approaches would work, because they already work on humans. Drug addicts are quite happy while under the influence. An epidural works wonders for temporary pain removal, and so does spinal cord injury in many cases. The list of proven but not ethically acceptable ways to reduce suffering in humans is pretty long.

If you are an effective altruist who is concerned with farm animal welfare, what is stopping you from working on finding ways to apply what works for humans but is not ethically acceptable there, in order to reduce actual suffering in animals?

53 comments

Comments sorted by top scores.

comment by [deleted] (tommsittler) · 2018-06-20T15:35:05.186Z · LW(p) · GW(p)

Thanks for the post. I agree that this is an important question. I do, however, have many disagreements.

  1. Many of us value things other than pleasure and the avoidance of pain when it comes to humans. Perhaps we ought to value these things also when it comes to non-human animals. It seems difficult to defend hedonism for non-human animals while rejecting it for humans. What is the relevant difference?
  2. The obvious alternative is to take animals out of the equation altogether with cultured meat. Lewis Bollard makes this point explicitly: "I think when people start to talk about completely re-engineering the minds of chickens so that they’re essentially brain-dead and don’t realise the environment they’re in, it just seems like a better option to only grow the meat part of the bird and not grow the mind at all." The solutions you propose, such as to "identify and surgically or chemically remove the part of the brain that is responsible for suffering", are highly speculative with current science. One of your examples is literally science fiction. It seems to me that cultured meat would be easier to achieve technologically, while being similar or superior in consumer acceptance. Marie Gibbons claims that we might "see clean meat sold commercially within a year". Metaculus thinks the probability of a restaurant serving cultured meat by 2021 is 75%.
  3. For a short argument with two major flaws, I found the tone unnecessarily dismissive. You had had a short conversation about your new idea at EA Global, and concluded that your idea is not being widely pushed by others who work in this area. At this point it would have been appropriate to think of some obvious counter-arguments. Instead in this post I see a lot of speculation about which impure motives and fallacies in reasoning could explain why your idea hasn't been adopted. Some quotes: "This emphasis is quite understandable emotionally.", "this kind of anthropomorphizing", ""What do you mean, cut off baby chicken's legs so it does not have leg pain later? You, monster!"", "Because most people do not truly care about reducing animal suffering, they care about a different metric altogether, a visible human proxy for animal suffering that they find immediately relatable.", "actual suffering".
Replies from: Benquo, shminux, Benquo
comment by Benquo · 2018-06-24T17:02:43.371Z · LW(p) · GW(p)

1 seems like an irrelevant objection. Animal welfare interventions are marketed as Effective Altruist or utilitarian interventions because of the amount of suffering we can avert by improving conditions in factory farms or reducing the amount of food produced that way. This doesn't imply that other people don't have other reasons to care about animals. The OP's argument is that the specifically utilitarian, aggregativist argument for animal welfare interventions favors wireheading chickens over other interventions pushed more frequently.

Replies from: tommsittler, Dacyn
comment by [deleted] (tommsittler) · 2018-06-26T20:17:09.854Z · LW(p) · GW(p)

Effective altruism does not imply utilitarianism. Utilitarianism (on most definitions) does not imply hedonism. I would guess less than 10% of EAs (or of animal-focused EAs) would consider themselves thoroughgoing hedonists, of the kind that would endorse e.g. injecting a substance that would numb humans to physical pain or amputating human body parts, if this reduced suffering even a little bit. So on the contrary, I think the objection is relevant.

comment by Dacyn · 2018-06-24T19:27:22.151Z · LW(p) · GW(p)

There can be amounts of things other than suffering, though. Caring about the "number of chickens that lead meaningful lives" doesn't mean that one isn't a utilitarian. (For the record, I agree with the OP that the notion of "leading meaningful lives" isn't so important for animals, but I think it's possible to disagree with this and still be advocating an EA intervention.)

Replies from: Benquo
comment by Benquo · 2018-06-25T06:56:52.705Z · LW(p) · GW(p)

There can, but in practice the amount of suffering is usually the stated reason to care.

Replies from: Dacyn
comment by Dacyn · 2018-06-25T18:35:26.291Z · LW(p) · GW(p)

Ah sorry, I seem to have misread your comment. Makes sense now, thanks!

comment by Shmi (shminux) · 2018-06-20T22:54:01.593Z · LW(p) · GW(p)

Re point 2: I agree that vat-grown meat would eventually be a viable approach, and the least controversial one. I am less optimistic about the timeframe, the taste, the acceptance rate, and the costs.

Re point 1: See my reply to Ozy and others.

Re point 3: While I disagree with your assessment of "major flaws" (talk about being dismissive!), I accept the critique of the tone of the post sounding dismissive. If I were writing a formal report or a presentation at an EA event, I would take a lot more time and a lot more care to sound appropriately professional. I will endeavor to spend more time on polishing the presentation next time I write a controversial LW post.

Replies from: Paperclip Minimizer
comment by Paperclip Minimizer · 2018-06-21T09:32:46.420Z · LW(p) · GW(p)

Normal upvote for showing openness. (I strongly downvoted the OP because of the tone.)

comment by Benquo · 2018-06-24T17:05:54.060Z · LW(p) · GW(p)

2 objects to the OP using an example from science fiction, and immediately goes on to propose a science fiction intervention.

Replies from: DanielFilan
comment by DanielFilan · 2018-06-24T21:21:27.274Z · LW(p) · GW(p)

Cultured meat doesn't seem like a 'science fiction intervention' to me. It's true that it has appeared in several works of science fiction, but it is also being actively developed by several labs and companies, with prototypes having already been made - for more detail on both halves of this sentence, see the Wikipedia page.

Replies from: Benito, Benquo
comment by Ben Pace (Benito) · 2018-06-24T22:56:03.228Z · LW(p) · GW(p)

Regardless, it seems really weird to me that being in a science fiction novel is a critique of a new idea. If I think about intellectual work happening in the past 30 years that has the potential to be the most important work, I think about superintelligence, uploads, nanotech, the Fermi paradox, making humanity interplanetary, and a bunch of other ideas that could have happily been critiqued as 'from science fiction' when first examined.

Could we grow animals that desire to be eaten, or perhaps don't feel pain? I recall reading many years ago, in Richard Dawkins' The Greatest Show On Earth, about wild foxes that became incredibly tame and underwent massive morphological changes in just a few generations of pure artificial selection. I'm not sure what traits you'd select on for animals in factory farms, but it's an interesting idea.

(Edit: In case it's not clear, I'm responding to the top-level comment's initial criticism, not DanielFilan's point. I probably should've just replied directly; it was just after reading Daniel's comment that I thought up my comment.)

Replies from: gjm
comment by gjm · 2018-06-25T12:20:12.827Z · LW(p) · GW(p)

The following question seems interesting: Of the technological advances that have made a substantial difference to the world since the time when science fiction first emerged as a genre, what fraction (weighted by impact, if you like) appeared in science fiction before they became fact, and how closely did the reality resemble the fiction?

Replies from: DanielFilan
comment by DanielFilan · 2018-06-25T21:51:40.271Z · LW(p) · GW(p)

Of course, it's also important to consider the fraction of technologies introduced in science fiction that then came into existence.

Replies from: gjm
comment by gjm · 2018-06-25T22:34:29.957Z · LW(p) · GW(p)

Yes. That might actually be a better question -- except that the actually-relevant population is presumably something like "technologies introduced in science fiction that seemed like they might actually be possible in the not-outrageously-far future".
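
In symbols, the base-rate point these two questions are circling (my formalization, not the commenters' own): let F be "the technology appeared in science fiction" and R be "it later became real". gjm's question estimates P(F | R); Daniel's estimates P(R | F). Bayes' theorem relates the two:

```latex
% Bayes' theorem connecting the two conditional frequencies:
P(R \mid F) = \frac{P(F \mid R)\, P(R)}{P(F)}
```

Since fiction proposes far more technologies than ever get built, P(F) can greatly exceed P(R), so a high P(F | R) is fully compatible with a low P(R | F); "it appeared in science fiction" is therefore weak evidence on its own.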

comment by Benquo · 2018-06-25T06:48:19.404Z · LW(p) · GW(p)

I'm aware that there are labs that claim to have produced prototype lab meat, at enormous expense. This is some evidence that lab meat is feasible at scale, but mass-produced lab meat is still something in the future, not the present, and therefore to some extent inherently speculative.

Cold fusion enjoys similar status, as did until recently the EmDrive. Some such ventures work out; others don't.

Replies from: DanielFilan
comment by DanielFilan · 2018-06-25T21:50:13.106Z · LW(p) · GW(p)

I would agree that cultured meat is "to some extent inherently speculative". What I'm reacting to is your assertion that it's science fiction in the same way that the Ameglian Major Cow is science fiction. I think that it is both significantly less speculative than the prospect of making an Ameglian Major Cow, and also not "science fiction" as most people would understand the term.

Replies from: Benquo
comment by Benquo · 2018-06-25T22:54:00.477Z · LW(p) · GW(p)

I agree that there's a difference in degree. But, the difference between a more and less highly speculative technology is not really the distinction "literally science fiction" implies, and it's important to call out things like that even if there is some other, more valid argument the person could or should have made. I agree that some of the examples, especially the Ameglian Major Cow, were much more speculative than lab-grown meat. On the other hand, administering opioids to factory farmed animals may be substantially less speculative.

Replies from: DanielFilan
comment by DanielFilan · 2018-06-26T00:11:55.567Z · LW(p) · GW(p)

First, I agree that administering opioids to farmed animals is less speculative than cheap mass-produced cultured meat, i.e. lab meat, but I don't think that that's relevant to the conversation, since it wasn't what tommsittler was referring to by "literally science fiction".

I think you're saying something like <<Because lab meat doesn't yet exist, it's highly speculative technology, and therefore you shouldn't distinguish it from the Ameglian Major Cow by calling the Ameglian Major Cow "literally science fiction", even though the Ameglian Major Cow is much more speculative than lab meat -- if the Ameglian Major Cow is "science fiction", then so is lab meat, which is why it makes sense to say "2 objects to the OP using an example from science fiction, and immediately goes on to propose a science fiction intervention">>. I'm not sure this is right, so please correct me if it's wrong.

My response is that the degree to which the technologies are speculative is in fact relevant: there exists a prototype for one and not the other, it's easier to see how you would make one than the other, and one seems to be held in higher esteem by domain experts (this last factor maybe isn't crucial, but does seem relevant, especially for those of us like me who don't have domain expertise), and these differences make one a relevantly safer bet than the other. These differences are evidenced by the fact that one has mostly been developed in a soft science fiction series, and one is the subject of active research and development. As such, it makes sense to call the Ameglian Major Cow "literally science fiction" and it does not make sense to call lab meat "science fiction".

comment by JamesFaville (elephantiskon) · 2018-06-21T21:43:39.485Z · LW(p) · GW(p)

I've seen this discussed before by Rob Wiblin and Lewis Bollard on the 80,000 Hours podcast (edit: tommsittler [LW · GW] actually beat me to the punch in mentioning this).

Robert Wiblin: Could we take that even further and ultimately make animals that have just amazing lives that are just constantly ecstatic like they’re on heroin or some other drug that makes people feel very good all the time whenever they are in the farm and they say, “Well, the problem has basically been solved because the animals are living great lives”?
Lewis Bollard: Yeah, so I think this is a really interesting ethical question for people about whether that would, in people’s minds, solve the problem. I think from a pure utilitarian perspective it would. A lot of people would find that kind of perverse, having, for instance, particularly I think if you’re talking about animals that might psychologically feel good even in terrible conditions. I think the reason why it’s probably going to remain a thought experiment, though, is that it ultimately relies on the chicken genetics companies and the chicken producers to be on board...

I encourage anyone interested to listen to this part of the podcast or read it in the transcript, but it seems clear to me right now that it will be far easier to develop clean meat which is widely adopted than to create wireheaded chickens whose meat is widely adopted.

In particular, I think that implementing these strategies from the OP will be at least as difficult as creating clean meat:

  • breed animals who enjoy pain, not suffer from it
  • breed animals that want to be eaten, like the Ameglian Major Cow from the Hitchhiker's Guide to the Galaxy

I think that getting these strategies widely adopted is at least as difficult as getting enough welfare improvements widely adopted to make non-wireheaded chicken lives net-positive:

  • identify and surgically or chemically remove the part of the brain that is responsible for suffering
  • at birth, amputate the non-essential body parts that would give the animals discomfort later in life

I think that breeding for smaller brains is not worthwhile because smaller brain size does not guarantee reduced suffering capacity, and getting it widely adopted by chicken breeders is not obviously easier than getting many welfare improvements widely adopted.

I'm not as confident that injecting chickens with opioids would be a bad strategy, but getting this widely adopted by chicken farms is not obviously easier to me than getting many other welfare improvements widely adopted. I would be curious to see the details of the study romeostevensit [LW · GW] mentioned, but my intuition is that outrage at that practice would far exceed outrage at current factory farm practices because of "unnaturalness", which would make adoption difficult even if the cost of opioids is low.

comment by romeostevensit · 2018-06-20T08:46:59.913Z · LW(p) · GW(p)

IIRC an analysis was done of the cost to administer opioids to livestock at scale, and it winds up at pennies a pound. The only reason we don't is negative consumer perception (appreciable quantities do not wind up in the consumed meat), similar to irradiation vs. additives for food preservation. Animal charities have been reluctant to pursue further research for fear of pushing a narrative that makes it okay/gives people an ethical out, since opioids don't actually eliminate all the suffering, just alleviate some fraction. There is similar contention around 'improved' living standards for the livestock.
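
For a sense of scale, here is a minimal back-of-envelope sketch of that cost claim. Since the analysis itself could not be tracked down, every figure below is an illustrative assumption, not data:

```python
# Back-of-envelope check of the "pennies a pound" claim for
# administering opioids to broiler chickens at scale.
# All numbers are illustrative assumptions, not sourced data.

dose_mg_per_day = 0.5      # assumed daily opioid dose per bird, mg
cost_usd_per_mg = 0.01     # assumed bulk drug cost, USD per mg
grow_out_days = 42         # typical broiler grow-out period, days
meat_lb_per_bird = 4.0     # assumed dressed weight per bird, lb

drug_cost_per_bird = dose_mg_per_day * cost_usd_per_mg * grow_out_days
added_cost_per_lb = drug_cost_per_bird / meat_lb_per_bird

print(f"Drug cost per bird: ${drug_cost_per_bird:.2f}")  # $0.21
print(f"Added cost per lb:  ${added_cost_per_lb:.2f}")   # ~$0.05
```

Under these assumptions the added cost is indeed on the order of pennies per pound, though the conclusion is only as good as the inputs.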

Replies from: joey-savoie
comment by Joey Savoie (joey-savoie) · 2018-06-20T22:17:42.348Z · LW(p) · GW(p)

Do you have a link to that analysis?

Replies from: romeostevensit
comment by romeostevensit · 2018-06-22T02:14:06.699Z · LW(p) · GW(p)

I tried to track it down but could not. There appear to be some incentives fueling development in this area. (http://www.abc.net.au/news/rural/2015-04-21/oral-pain-relief-cattle/6409468)

comment by Shmi (shminux) · 2018-06-20T21:11:15.681Z · LW(p) · GW(p)

I thought I had mentioned "farm animals" enough times to make this unambiguous...

comment by ozymandias · 2018-06-20T15:12:58.863Z · LW(p) · GW(p)

I think you have failed to address the issue of why these solutions are acceptable for chickens and not for humans. The obvious explanation for why people disagree with you on this point is not that they don't care about animal suffering, any more than people who don't want to amputate the non-essential body parts that might give humans discomfort later in life don't care about human suffering. It is that they think those actions are unethical for animals, just like they are for humans.

Replies from: Benquo, shminux
comment by Benquo · 2018-06-24T17:12:30.805Z · LW(p) · GW(p)

This seems like an irrelevant objection, given that the OP is explicitly arguing about a conditional (IF mundane improvements in factory farming is a good intervention point for aggregate welfare reasons, THEN wireheading chickens is an even better intervention on those grounds), not unconditionally favoring the latter policy over the former.

For EA to make any sense at all as a way of organizing to do good, it needs to be able to clearly distinguish a rank-ordering of interventions on the basis of merit in a strictly utilitarian or other aggregative analysis with some particular defined outcome, from the question of which interventions have additional sources of support such as other moral considerations.

It also needs to be possible to have a discussion of whether a position is coherent separately from the question of whether it's the position we in fact hold, if that position is a claimed justification for demanding resources.

comment by Shmi (shminux) · 2018-06-20T16:06:45.780Z · LW(p) · GW(p)
It is that they think those actions are unethical for animals, just like they are for humans.

And this is precisely my point. We optimize a human proxy, not actual suffering.

Replies from: ozymandias, drethelin, aristide-twain
comment by ozymandias · 2018-06-20T19:23:23.245Z · LW(p) · GW(p)

That's not a proxy for suffering; it is caring about more than just suffering. You might oppose making animals' brains smaller because it also reduces their ability to feel pleasure, and you value pleasure in addition to pain. You might oppose amputating non-essential body parts because that reduces the animal's capacity for pleasurable experiences of the sort the species tends to experience. You might oppose breeding animals that enjoy pain because of the predictable injuries and shorter lifespan that would result: physical health and fitness is conventionally included in many definitions of animal welfare. You might also be a deontologist who is opposed to certain interventions as a violation of the animal's rights or dignity.

Not being a negative utilitarian is not a bias.

Replies from: shminux, Paperclip Minimizer
comment by Shmi (shminux) · 2018-06-20T20:53:24.076Z · LW(p) · GW(p)
That's not a proxy for suffering; it is caring about more than just suffering

Yes, I agree with all that! I am not advocating that one approach is right and all the others are wrong. I have no prescriptive intentions about animals. I am advocating being honest with oneself about your preferences. If you proclaim to care about the reduction of animal suffering yet really care about many other metrics just as much, spend time reflecting on what your real values are, instead of doing motte-and-bailey when pressed. (This is a generic "you", not you personally.)

Replies from: Paperclip Minimizer
comment by Paperclip Minimizer · 2018-06-21T09:38:48.633Z · LW(p) · GW(p)

It seems like you are the one doing some kind of motte-and-bailey, given you made a post called "Wirehead your Chickens" arguing for wireheading chickens and having a rather dismissive tone towards the opposing side, and now you're saying the real point was that negative utilitarian rhetoric is too emphasized compared to the moral systems which are actually used by EAs. (By the way, the prominence of negative utilitarian rhetoric is one of My Issues With EA Let Me Show You Them.)

Replies from: shminux
comment by Shmi (shminux) · 2018-06-21T16:42:02.329Z · LW(p) · GW(p)

Sorry about the miscommunication. Disengaging, since I do not find focusing on form over substance all that productive. I have accepted your criticism about the tone as valid.

comment by Paperclip Minimizer · 2018-06-21T09:35:00.644Z · LW(p) · GW(p)

I'm surprised that you're mentioning only non-negative utilitarianism and deontology, rather than the capability utilitarianism you recently signal-boosted, which I think is a more psychologically realistic explanation of people's reactions to the idea of wireheading.

comment by drethelin · 2018-06-20T16:53:05.188Z · LW(p) · GW(p)

People have values other than suffering/non-suffering, such as autonomy. You may say "animals don't suffer from lack of autonomy" or "I don't care about animal autonomy" but you need to make that case rather than saying people are just being dumb.

Replies from: shminux
comment by Shmi (shminux) · 2018-06-20T20:53:49.427Z · LW(p) · GW(p)

See my reply to ozy.

comment by astridain (aristide-twain) · 2018-06-20T16:55:24.418Z · LW(p) · GW(p)

But you make it sound as though these people are objectively “wrong”, as if they're *trying* to actually reduce animal suffering in the absolute but end up working on the human proxy because of a bias. That may be true of some, but surely not all. What ozymandias was, I believe, trying to express, is that some of the people who'd reject your solutions consciously find them ethically unacceptable, not merely recoil from them because they'd *instinctively* be against their being used on humans.

Replies from: shminux
comment by Shmi (shminux) · 2018-06-20T20:54:49.092Z · LW(p) · GW(p)

Clearly I have not phrased it well in my post. See my reply to ozy. I am advocating self-honesty about your values, not a particular action.

comment by Yannick_Muehlhaeuser · 2018-06-21T11:26:50.428Z · LW(p) · GW(p)

I think you raised a very important question, and I very much agree that one should be honest with oneself about what one truly cares about.

When it comes to the interventions you proposed, I am not really sure about the practicality. (2) sounds doable, but I'd guess that the side effects of losing the ability to feel strong pain are severe and would lead to self-hurting behaviour and maybe increased fighting among the animals. But if it was possible to find a drug that could be administered to animals to reduce their suffering (maybe just in certain situations) without major side effects, that could in fact be an effective intervention and may be worth looking into, mainly because it wouldn't come with big costs to the corporations doing the farming. It may, however, help to sustain factory farming past the point where it could be abolished otherwise, which would probably cause more net suffering.

I don't know how much time breeding animals that are radically different from current ones would take, and I'm generally a bit more sceptical about whether it's worth pursuing.

In general, the main problem with this way of fighting animal suffering is that most people concerned about animals wouldn't support it, and they probably also would have no problem admitting that they care about more than just reducing suffering. I think that it's probably better to pursue strategies for animal suffering reduction that most people in the movement could get behind.

So I think there could be some value in researching this approach, but I am sceptical overall.

Replies from: shminux
comment by Shmi (shminux) · 2018-06-21T17:02:29.092Z · LW(p) · GW(p)

Yeah, most of my suggestions were semi-intentionally outside the Overton window, and the reaction to them is appropriately emotional. A more logical approach from an animal welfare proponent would be something along the lines of "People have researched various non-mainstream ideas before and found them all suboptimal, see this link ..." or "This is an interesting approach that has not been investigated much, I see a number of obvious problems with it, but it's worth investigating further." etc.

On the one hand, "it's probably better to pursue strategies for animal suffering reduction that most people in the movement could get behind" is a very reasonable view. On the other hand, a big part of EA is looking into unconventional ways to do good, and focusing on what's acceptable for the mainstream right off the bat does not match that.

comment by ChristianKl · 2018-06-20T17:47:25.510Z · LW(p) · GW(p)

I think a huge problem is that we don't have a good metric for the suffering of arbitrary animals. If you want to breed chickens that enjoy pain, you need a way to measure enjoyment that doesn't Goodhart in ways that negate the project.

Replies from: shminux
comment by Shmi (shminux) · 2018-06-20T21:01:37.049Z · LW(p) · GW(p)

Some things we do know, such as how animals, including humans, feel when administered various types of painkillers. There is no speculation about it. But it is of course more tempting to focus on rejecting harder-to-implement suggestions if the intention is to reject the whole approach.

comment by Jiro · 2018-06-25T21:14:58.209Z · LW(p) · GW(p)

Most non-rationalists think that whether doing Y on target X is good depends on whether X would prefer Y in a base state where X is unaltered by Y and is aware of the possibility of Y, even if having Y would change his perception or is completely concealed from his perception.

If you're going to create animals who want to be eaten (or who enjoy actions that would otherwise cause suffering), you need to assess whether this is good or bad based on whether a base state animal with unaltered desires would want to be eaten or would want to be subject to those actions. If you're going to amputate animals' body parts, you need to consider whether a base state animal with those parts would want them amputated.

The proposals above all fail this standard.
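
One compact way to state the standard Jiro is proposing (my formalization, not his wording): judge an intervention by the informed preferences of the unaltered subject, not by the preferences the intervention itself produces:

```latex
% Y is an intervention on subject X; base(X) is X with desires
% unmodified by Y but informed about what Y does.
\text{$Y$ is good for $X$} \iff
u_{\mathrm{base}(X)}(\text{world with } Y) > u_{\mathrm{base}(X)}(\text{world without } Y)
```

Wireheading-style interventions change u itself, which on this standard is exactly why the post-intervention u cannot serve as the judge.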

Replies from: Dacyn
comment by Dacyn · 2018-06-25T21:35:57.438Z · LW(p) · GW(p)

It is not clear that there is any such base state: what would it mean for an animal to "be aware of the possibility" that it could be made to have a smaller brain, have part of its brain removed, or modified so that it enjoys pain? Maybe you have more of a case with amputation and the desire to be eaten, since the animal can at least understand amputation and understand what it means to be eaten (though maybe not what it would mean to not be afraid of being eaten). But "The proposals above all fail this standard" seems to be overgeneralizing.

Replies from: Jiro
comment by Jiro · 2018-06-27T15:16:15.605Z · LW(p) · GW(p)

There are two related but separate ideas. One is that if you want to find out if someone is harmed by X, you need to consider whether they would prefer X in a base state, even if X affects their preferences. Another is that if you want to find out if someone is harmed by X, you need to consider what they would prefer if they knew about and understood X, even if they don't.

Modifying an animal to have a smaller brain falls in the second category; pretty much any being who can understand the concept would consider it harmful to be modified to have a smaller brain, so it should also be considered harmful for beings who don't understand the concept. It may also fall in the first category if you try to argue "their reduced brain capacity will prevent them from knowing what they're missing by having reduced brain capacity". Modifying it so that it enjoys pain falls in the second category for the modification, and the first category for considering whether the pain is harmful.

Replies from: Dacyn
comment by Dacyn · 2018-06-27T21:08:22.916Z · LW(p) · GW(p)

I guess it just seems to me that it's meaningless to talk about what someone would prefer if they knew about/understood X, given that they are incapable of such knowledge/understanding. You can talk about what a human in similar circumstances would think, but projecting this onto the animal seems like anthropomorphizing to me.

You do have a good point that physiological damage should probably still be considered harmful to an animal even if it doesn't cause pain, since the pre-modified animal can understand the concept of such damage and would prefer to avoid it. However, this just means that giving the animal a painkiller doesn't solve the problem completely, not that it doesn't do something valuable.

comment by Benquo · 2018-06-24T17:04:03.770Z · LW(p) · GW(p)

The prevalence of irrelevant objections in the comments here seems like substantial evidence that animal welfare is often being advocated for as an EA cause in ways that diverge substantially from advocates' true reasoning.

comment by mbzrl · 2018-06-20T14:30:31.491Z · LW(p) · GW(p)

I think that extrapolating this post is part of what makes lab-grown meat so appealing. Can't have animal suffering in the quest for animal meat if there is no animal in the middle. I'd wager we're closer to that than to convincing vast swaths of people to stop anthropomorphizing what animals would want for themselves!

comment by Rafael Harth (sil-ver) · 2018-06-21T19:54:31.105Z · LW(p) · GW(p)

I don't have an opinion on this, but I want to voice my approval for bringing up something this controversial without sugarcoating your points. And I want to point out that the title doesn't really reveal the subject; I'd have read it sooner if I had known it was about animal suffering.

comment by levin · 2018-06-21T20:46:55.091Z · LW(p) · GW(p)

I agree with the point that we should be investing more into research on direct reduction of suffering (as a phenomenon that happens in brains), rather than reducing the proxies for it.

This is true for humans as well as for animals: e.g. investing in discovering direct stimulation/surgery approaches to reducing or even turning off pain (or just the painfulness of pain; see pain asymbolia) might have a greater impact on life satisfaction than its opportunity cost for, say, cancer research.

I am not at all knowledgeable on the subject (and would love to be corrected), but I suspect that ever since lobotomy was declared unethical, no interventions for pain other than chemical ones have been seriously investigated.

comment by Yannick_Muehlhaeuser · 2018-06-21T20:24:59.883Z · LW(p) · GW(p)

I think even if we believe that plant-based and clean meat, as well as a change in attitudes, can get us to a world free of, at least, factory farming, it may be worth looking into these strategies as plans for what we might call worst-case scenarios: if it turns out that clean meat remains too expensive, plant-based alternatives fail to catch on, and a significant part of the population fails to be convinced by the ethical arguments.

I also think that those ideas may be more important in countries that are only just building factory farms compared to western countries.

comment by ignoranceprior · 2018-06-22T15:46:35.879Z · LW(p) · GW(p)

If you're interested in this idea, you may want to join the "Reducing pain in farm animals" Facebook group. (It's currently very small.)

comment by Paperclip Minimizer · 2018-06-20T12:58:30.412Z · LW(p) · GW(p)

If you are an effective altruist who is concerned with farm animal welfare, what is stopping you from working on finding ways to apply what works for humans but is not ethically acceptable there, in order to reduce actual suffering in animals?

The fact that, as you said yourself, these solutions aren't ethically acceptable.

Replies from: shminux
comment by Shmi (shminux) · 2018-06-20T21:09:46.581Z · LW(p) · GW(p)

Yep. And I am advocating being honest about caring about something other than animal suffering. Like about human ethics transferred to animals.

Replies from: Paperclip Minimizer
comment by Paperclip Minimizer · 2018-06-21T09:44:16.943Z · LW(p) · GW(p)

See my answer in Ozy's subthread

comment by woodchopper · 2018-06-25T12:23:43.945Z · LW(p) · GW(p)

> identify and surgically or chemically remove the part of the brain that is responsible for suffering,

There is no part of the brain responsible for consciousness. Consciousness is a process and it involves the entire system from the inputs to your brain (like me telling you that you're ignorant) to the peripheral nerves to the complex sub-sectors of the brain.

> breed animals who enjoy pain, not suffer from it

You cannot enjoy pain. That's quite literally a contradiction.

> Many of these are probably way easier and more practical than shaming people into giving up tasty steak

None of the ideas you have posited are easy or practical, or make any sense whatsoever. Shaming people into giving up tasty steak is a weird way to frame the problem. Shaming people for placing a momentary experience given to them by steak on their taste buds as worth torturing cows to death for is a viable and important strategy, because it is fundamentally sound.

> Because most people do not truly care about reducing animal suffering, they care about a different metric altogether, a visible human proxy for animal suffering that they find immediately relatable.

The best way of reducing animal suffering would be to reduce the number of animals currently in existence and reduce the number brought into existence. Ending factory farming is a very effective way of doing this, considering that an extremely large proportion of the most sentient creatures on the planet (mainly mammals with very complex brains) are brought into existence by the direct action of humans, for meat consumption.

One of your ideas, shrinking or even removing the brain, is already being developed. We are making meat without the animal, which means without the brain. We are using technology to do so. This is cultured meat. We are also replicating most of the properties of meat and making plant based meat (see Impossible Foods, Beyond Meat). Both of these approaches are effective and practical.

Is it practical to wirehead tens of billions of chickens every year? No, it's not. It's impossible with current technology. We could surgically implant carfentanil-secreting devices in the spinal cords of every chicken, but the process of doing this would drive chicken meat costs up so high that the world would just go vegan instead of paying for them.

I urge you to think more clearly about this issue, instead of trying to find ways to justify your current lifestyle.