Richard Dawkins on vivisection: "But can they suffer?"

post by XiXiDu · 2011-07-04T16:56:20.407Z · LW · GW · Legacy · 49 comments

The great moral philosopher Jeremy Bentham, founder of utilitarianism, famously said, 'The question is not, "Can they reason?" nor, "Can they talk?" but rather, "Can they suffer?"' Most people get the point, but they treat human pain as especially worrying because they vaguely think it sort of obvious that a species' ability to suffer must be positively correlated with its intellectual capacity.

[...]

Nevertheless, most of us seem to assume, without question, that the capacity to feel pain is positively correlated with mental dexterity - with the ability to reason, think, reflect and so on. My purpose here is to question that assumption. I see no reason at all why there should be a positive correlation. Pain feels primal, like the ability to see colour or hear sounds. It feels like the sort of sensation you don't need intellect to experience. Feelings carry no weight in science but, at the very least, shouldn't we give the animals the benefit of the doubt?

[...]

I can see a Darwinian reason why there might even be a negative correlation between intellect and susceptibility to pain. I approach this by asking what, in the Darwinian sense, pain is for. It is a warning not to repeat actions that tend to cause bodily harm. Don't stub your toe again, don't tease a snake or sit on a hornet, don't pick up embers however prettily they glow, be careful not to bite your tongue. Plants have no nervous system capable of learning not to repeat damaging actions, which is why we cut live lettuces without compunction.

It is an interesting question, incidentally, why pain has to be so damned painful. Why not equip the brain with the equivalent of a little red flag, painlessly raised to warn, "Don't do that again"?

[...] my primary question for today: would you expect a positive or a negative correlation between mental ability and ability to feel pain? Most people unthinkingly assume a positive correlation, but why?

Isn't it plausible that a clever species such as our own might need less pain, precisely because we are capable of intelligently working out what is good for us, and what damaging events we should avoid? Isn't it plausible that an unintelligent species might need a massive wallop of pain, to drive home a lesson that we can learn with less powerful inducement?

At very least, I conclude that we have no general reason to think that non-human animals feel pain less acutely than we do, and we should in any case give them the benefit of the doubt. Practices such as branding cattle, castration without anaesthetic, and bullfighting should be treated as morally equivalent to doing the same thing to human beings.

Link: boingboing.net/2011/06/30/richard-dawkins-on-v.html

Imagine a being so vast and powerful that its theory of mind of other entities would itself be a sentient entity. If this entity came across human beings, it might model them at such a level of resolution that every mental image it forms of them would itself be conscious.

Just as we do not grant rights to our thoughts, or to the bacteria that make up a large part of our bodies, such an entity might be unable to grant existential rights to its thought processes, even if those processes are so detailed that merely perceiving a human being incorporates a human-level simulation.

But even for us humans it might not be possible to account for every being in our ethical conduct. It might not be feasible to grant everything the rights it deserves. Nevertheless, the answer cannot be to abandon morality altogether, if only because human nature won't permit it: being compassionate is part of our preferences.

Our task must be to free ourselves . . . by widening our circle of compassion to embrace all living creatures and the whole of nature and its beauty.

— Albert Einstein

How do we solve this dilemma? Right now it's relatively easy to handle: there are humans, and then there is everything else. But even today — without uplifted animals, artificial intelligence, human-level simulations, cyborgs, chimeras and posthuman beings — it is increasingly hard to draw the line. Science is advancing rapidly, allowing us to keep alive people with severe brain injuries or to save a premature fetus whose mother has already died. Then there are the mentally disabled and other humans who are not neurotypical. We are also becoming increasingly aware that many non-human beings on this planet are far more intelligent and cognizant than we expected.

And remember: what may be the case in the future has already been the case in our not-too-distant past. There was a time when three different human species lived at the same time on the same planet: three intelligent species of the genus Homo, yet very different. Only 22,000 years ago we, Homo sapiens, were still sharing this oasis of life with Homo floresiensis and Homo neanderthalensis.

How would we handle such a situation today, at a time when we still haven't learned to live together in peace, when we are still killing members of our own genus? Most of us are not even ready to become vegetarian in the face of global warming, even though livestock farming accounts for an estimated 18% of the planet's greenhouse gas emissions.

So where do we draw the line?

49 comments

Comments sorted by top scores.

comment by fubarobfusco · 2011-07-04T19:01:17.916Z · LW(p) · GW(p)

What is suffering, anyway? It's not just response to injury or danger. A tree secretes resin to seal the wound if you cut its trunk, but this doesn't mean that it is suffering. A bacterium might swim away from a chemical or a temperature gradient it's ill-equipped to survive; but this doesn't mean that it is suffering. The existence of a self-protective or harm-avoiding response does not imply suffering.

My model of human suffering includes a term for contemplating the loss of possible futures. It is not just "Ouch! This painful situation hurts! I'd like to get out of it!" but "This is horrible! I'm going to die — or if I live, I'm going to be scarred or afflicted forever." The dying soldier who knows she'll never see home again; the abuse victim who can feel his will to resist slipping away; the genocide victim who sees not only her own life but her entire culture and all its creations and its wisdom being destroyed.

When we think about (nonhuman) animals, we tend to project human feelings onto organisms that are not capable of them. A pet snake cannot love you, and believing that it can do so is actively dangerous; it's an erroneous mental model that leads to people getting killed. Similarly, no turkey ever followed the train of thought that Russell imagines in his famous example about inductive reasoning.

It may well be that humans who project human feelings onto nonhuman animals are also kinder to other humans. Looking into the eyes of a cow and imagining that it has propositional thoughts about its situation may mean that you are more empathetic towards humans who actually do have propositional thoughts. Or it may simply be an incorrect generalization from the fact that the cow has two big dark eyes and sets off our face detectors.

Replies from: XiXiDu, SilasBarta, FiftyTwo
comment by XiXiDu · 2011-07-04T19:11:34.122Z · LW(p) · GW(p)

What is suffering, anyway?

I hope Luke or Yvain are going to write a post on it.

Replies from: SilasBarta
comment by SilasBarta · 2011-07-06T17:37:57.540Z · LW(p) · GW(p)

I hope Yvain does too.

comment by SilasBarta · 2011-07-06T17:41:18.391Z · LW(p) · GW(p)

A tree secretes resin to seal the wound if you cut its trunk, but this doesn't mean that it is suffering. A bacterium might swim away from a chemical or a temperature gradient it's ill-equipped to survive; but this doesn't mean that it is suffering. The existence of a self-protective or harm-avoiding response does not imply suffering.

To go even further, a spring returns to its equilibrium position when you pull on it and let go.

comment by FiftyTwo · 2011-07-05T00:42:00.503Z · LW(p) · GW(p)

What is suffering, anyway? I've seen it defined as possessing desires that can be frustrated.

comment by PhilGoetz · 2011-07-07T04:53:11.699Z · LW(p) · GW(p)

Making the simple, binary, "human / non-human" distinction is extremely convenient when only humans have power. If we acknowledged that there were principles by which some non-humans merited rights, we would also find, by those same principles, some humans who did not merit rights.

comment by Jayson_Virissimo · 2011-07-06T18:47:13.082Z · LW(p) · GW(p)

Would Dawkins agree that, ceteris paribus, it is more permissible to torture a Stoic because they have diminished capacity for suffering?

comment by gwern · 2011-07-04T20:41:26.701Z · LW(p) · GW(p)

Isn't it plausible that a clever species such as our own might need less pain, precisely because we are capable of intelligently working out what is good for us, and what damaging events we should avoid? Isn't it plausible that an unintelligent species might need a massive wallop of pain, to drive home a lesson that we can learn with less powerful inducement?

I don't think it's plausible at all. The smarter a species is, the more scope it has to go wrong outside of basic actions, the more urges it has unconnected to basic eating and mating. Even a smart species can't directly calculate the fitness of every action. (Heck, I couldn't calculate the fitness of eating right versus eating wrong.) At best, I'd say intelligence has an indeterminate relationship, and if I were allowed to appeal to humans for evidence, I'd point out all sorts of utility-raising and fitness-lowering behaviors like condoms or memes where the intelligence has rather backfired on the genes.

comment by Will_Newsome · 2011-07-05T13:44:21.212Z · LW(p) · GW(p)

Dawkins is normally a much sharper thinker than this; his arguments could have been made much more compelling. Anyway, I am going to sidestep the moral issue and look at the epistemic question.

Evolutionarily speaking the fundamental non-obvious insight is that there's little advantage to be had in signalling weakness and vulnerability if you don't happen to be a social and therefore intelligent animal with a helpful tribe close by. There's no reason to wire pain signals halfway 'round the brain and back just to suffer in more optimal ways if there's no one around to take advantage of thereby. We can strengthen this argument with a complementary but disjunctive mechanistic analysis. It is important to look at humans' cingulate cortex (esp. ACC), insula, pain asymbolia and related insular oddities, reward signal propagation, et cetera. This would be a decent paper to read but I'm too lazy to read it, or this one for that matter. Do note that much brain research is exaggeration and lies, especially about the ACC, as I had the unfortunate pleasure of discovering recently.

Philosophy is perhaps better suited to this question. Metaphysically speaking it must be acknowledged that animals are obviously not as perfect as humans, and are therefore less Godlike, and therefore less sentient, as can all be proven in the same vein as Leibniz's famous Recursive Universal Dovetailing Measure-Utility Inequality Theorem. His arguments are popularly referred to as the "No Free Haha-God-Is-Evil" theorems, though most monads are skeptical of the results' practical applicability to monads in most monads. Theologians admit that they are puzzled by the probably impossible logical possibility of an acausal algorithm employing some variation on Thompson's "Reality-Warping Elysium" process, but unfortunately any progress towards getting any bits about a relevant Chaitin's omega results in its immediate diagonalization out of space, time, and all mathematically interesting axiom sets. This qua "this" can also be proven by "Goedel's ontological proof" if you happen to be Goedel (naturally).

My default position is that suffering as we know it is fundamentally tied in with extremely important and extremely complex social decision theoretic game theoretic calculus modeling stuff, and also all that metaphysics stuff. I will non-negligibly update if someone can show me a good experiment demonstrating something like "learned helplessness" in non-hominids or non-things-that-hunted-in-packs-for-a-long-time-then-were-artificially-molded-into-hominid-companions. That high-citation rat study looked like positive bias upon brief inspection, but maybe that was positive bias.

On the meta level though, the nicest thing about going sufficiently meta is that you don't have to worry about enlightened aqua versus turquoise policy debates. Which by the way continues to reliably invoke the primal forces of insanity. It's like using a tall metal rod as a totem pole for spiritual practice, in a lightning storm, while your house burns down, with the entire universe inside it, and also the love of your life, who is incredibly attractive. Maybe a cool post would be "Policy is the Mind Killer", about how all policy discussion should be at least 16 meta levels up, because basically everything anyone ever does is a lost purpose. (It has not yet been convincingly shown that humanity is not a lost purpose, but I think this is a timeful/timeless confusion and can be dissolved in short order with right view.) Talking about how to talk about thinking about morality is a decent place to start from and work our way up or down, and in the meantime posts like multifoliaterose's one on Lab Pascals are decent mind-teasers maybe. But object level policy debates just entrench bad cognitive habits. Dramatic cognitive habits. Gauche weapons from a less civilized age... of literal weapons. Your strength as a rationalist is your ability to be understood by Douglas Hofstadter and no one else. Ideally that would include yourself. And don't forget to cut through in the same motion, of course. Anyway this is just unsolicited advice aimed without purpose, and I acknowledge that debating lilac versus mauve can be fun some times. ...I'm not gay, it's just an extended metaphor extension.

Off-the-cuff hypothesis that I arrogantly deem more interesting than the discussion topic: The prefrontal cortex is exploiting executive oversight to rent-seek in the neural Darwinian economy, which results in egodystonic wireheading behaviors and self-defeating use of genetic, memetic, and behavioral selection pressure (a scarce resource), especially at higher levels of abstraction/organization where there is more room for bureaucratic shuffling and vague promises of "meta-optimization", where the selection pressure actually goes towards the cortical substructural equivalent of hookers and blow. Analysis across all levels of organization could be given but is omitted due to space, time, and thermodynamic constraints. The pre-frontal cortex is basically a caricature of big government, but it spreads propagandistic memes claiming the contrary in the name of "science" which just happens to be largely funded by pre-frontal cortices. The bicameral system is actually very cooperative despite misleading research in the form of split-brain studies attempting to promote the contrary. In reality they are the lizards. This hypothesis is a possible explanation for hyperbolic discounting, akrasia, depression, Buddhism, free will, or come to think of it basically anything that at some point involved a human brain. This hypothesis can easily be falsified by a reasonable economic analysis.

Replies from: PhilGoetz, khafra, khafra, Jonathan_Graehl, J_Taylor, None
comment by PhilGoetz · 2011-07-07T04:55:02.005Z · LW(p) · GW(p)

Does anybody else understand this? "enlightened aqua versus turquoise policy debates" - is that a thing?

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-07T11:11:44.389Z · LW(p) · GW(p)

Folk 'round these parts are smart enough not to get dragged down in Blue versus Green political debate, most of the time. But there is much speculation about policy that is even more insane than Blue versus Green even if it does happen to be more sophisticated and subtle. (Mauve versus lilac is more sophisticated than blue versus green.) For example, it is a strong attractor in insanityspace for people to hear about el singularidad and say "Well obviously this means we should kill everyone in the world except the FAI team because that's what utilitarianism says, I can't believe you people are advocating such extreme measures, that is sick and I have contempt for you, and if you're not advocating such extreme measures then you must be inconsistent and not actually believe anything you say! DRAMA DRAMA DRAMA!". Or some of the responses to multifoliaterose's infinite lab universes post. I'm under the impression that the Buddhists talk about this kind of obsession with drama in the context of Manjushri's sword. Anyway, policy debate makes people stupid, but instead of going up a few meta levels and dealing with that stupidity directly, they choose to make the context of their stupidity a dramatic and emotionally charged one. I have no aim in complaining about this besides maybe highlighting the behavior such that people can realize if they're starting to slip into it.

It's funny how people always complain about death, but not about inferential distance. Inferential distance is a much blacker plague upon the world than death, and the technology to eliminate it is probably about as difficult to engineer as strong anti-aging tech. Technologies that improve communication are game-breaking technologies. E.g. language, writing, printing press, the internet, and the mind-blowing stuff you learn about once you're of high enough rank in the Bayesian Conspiracy.

Replies from: jsalvatier
comment by jsalvatier · 2011-07-07T15:10:26.947Z · LW(p) · GW(p)

You're clearly a smart guy and have interesting things to say, but your posts give off a strong crank vibe. I've noticed this in your comments before, so I don't think it's an isolated issue. Perhaps this doesn't show up in your social interactions elsewhere, in which case it's not a serious issue for you, but if it does, I think it would be well worth your while to pay attention to it.

Here are some speculations about what it is that sets off my crank alert:

  • You use a lot of references/vocabulary that will be opaque to lots of people
  • You jump between points rapidly
Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-07T17:18:07.675Z · LW(p) · GW(p)

Thank you for your helpful response. It would take a long time to explain the psychology involved on my part, but I do indeed have a fairly thorough understanding of the social psychology involved on the part of others. Sometimes I legitimately expect persons to understand what I am saying and am surprised when they do not, but most often I do not anticipate that folk will understand what I am saying and am unsurprised when they do not. I often comment anyway for three reasons. First, because it would be prohibitively motivationally expensive for me to fully explain each point, and yet I figure there's some non-negligible chance that someone will find something I say to be interesting despite the lack of clarity. Second, because I can use the little bit of motivation I get from the thought of someone potentially getting some insight from something I say, as inspiration to write some of my thoughts down, which I usually find very psychologically taxing. Third, because of some sort of unvirtuous passive-aggression or frustration caused by people being uncharitable in interpreting me, and thus a desire to defect in communication as repayment. The latter comes from a sort of contempt, 'cuz I've been working on my rationalist skillz for a while now as a sort of full-time endeavor and I can see many ways in which Less Wrong is deficient. I am completely aware that such contempt--like all contempt--is useless and possibly inaccurate in many ways. I might start cutting back on my Less Wrong commenting soon. I have an alternative account where I make only clear and high quality comments, I might as well just use that one only. Again, thanks for taking the time to give feedback.

Replies from: jsalvatier, Nick_Tarleton, jsalvatier
comment by jsalvatier · 2011-07-07T18:21:57.461Z · LW(p) · GW(p)

I'd be very interested in seeing posts on specifics on how LW is deficient/could improve.

comment by Nick_Tarleton · 2011-07-07T18:50:00.124Z · LW(p) · GW(p)

Third, because of some sort of unvirtuous passive-aggression or frustration caused by people being uncharitable in interpreting me, and thus a desire to defect in communication as repayment.

You know this causes them to defect in turn by actively not-trying to understand you, right?

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-07T18:59:01.221Z · LW(p) · GW(p)

Of course. "I do indeed have a fairly thorough understanding of the social psychology involved on the part of others."

comment by jsalvatier · 2011-07-07T18:03:36.847Z · LW(p) · GW(p)

Interesting. Good context to have.

I would expect such contempt to be actively harmful to you (in that people will like you and listen to you less).

I hope I did not come off as adversarial.

comment by khafra · 2012-01-12T22:01:03.646Z · LW(p) · GW(p)

I will non-negligibly update if someone can show me a good experiment demonstrating something like "learned helplessness" in non-hominids or non-things-that-hunted-in-packs-for-a-long-time-then-were-artificially-molded-into-hominid-companions.

Elephants?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-01-16T20:23:10.417Z · LW(p) · GW(p)

Thanks for the link, I'll check it out soon. It's funny, just a few days before I read your comment I noticed that I was confused by elephants. What is up with elephants? They're weird. Anyway, thanks.

comment by khafra · 2012-03-12T15:08:20.254Z · LW(p) · GW(p)

I just returned to the parent comment by way of comment-stalking muflax, and got even more out of it this time. You live in an interesting place, Will; and I do enjoy visiting.

Still not sure where the "dovetailing" of Leibniz comes in; or what the indefinite untrustworthy basement layers of Ken Thompson have to do with Elysium; but perhaps I'll get it on my next reading.

Nerfhammer's excellent Wikipedia contributions reminded me of your disdain for the heuristics and biases literature. The disdain seems justified (for example, the rhyme-as-reason effect depends on Bayesian evidence: a guideline immortalized in verse has likely been considered longer than the average prose observation); but are there any alternatives for working toward more effective thinking?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-12T18:46:03.672Z · LW(p) · GW(p)

You live in an interesting place, Will; and I do enjoy visiting.

It's gotten about twice as interesting since I wrote that comment. E.g. I've learned a potentially very powerful magick spell in the meantime.

"Reality-Warping Elysium" was a Terence McKenna reference; I don't remember its rationale but I don't think it was a very good one.

Nerfhammer's excellent wikipedia contributions reminded me of your disdain for the heuristics and biases literature.

I think I may overstate my case sometimes; I'm a very big Gigerenzer fan, and he's one of the most cited H&B researchers. (Unlike most psychologists, Gigerenzer is a very competent statistician.) But unfortunately the researchers who are most cited by LessWrong types, e.g. Kahneman, are those whose research is of quite dubious utility. What's frustrating is that Eliezer knows of and appreciates Gigerenzer and must know of his critiques of Kahneman and his (overzealous semi-Bayesian) style of research, but he almost never cites that side of the H&B research. Kaj Sotala, a cognitive science student, has pointed out some of these things to LessWrong and yet the arguments don't seem to have entered into the LessWrong memeplex.

The two hallmarks of LessWrong are H&B and Bayesian probability: the latter is often abused, especially in the form of algorithmic probability, and decision theorists have shown that it's not as fundamental as Eliezer thought it was; and the H&B literature, like all psychology literature, is filled with premature conclusions, misinterpretations, questionable and contradictory results, and generally an overall lack of much that can be used to bolster rationality. (It's interesting and frustrating to see many papers demonstrating "biases" in opposite directions on roughly the same kind of problem, with only vague and ad hoc attempts to reconcile them.) If there's a third hallmark of LessWrong then it's microeconomics and game theory, especially Schelling's style of game theory, but unfortunately it gets relatively neglected and the posts applying Schellingian and Bayesian reasoning to complex problems of social signaling hermeneutics are very few and far-between.

I may have adjusted too much, but... Before I read a 1980s(?) version of Dawes' "Rational Choice in an Uncertain World" I had basically the standard LessWrong opinion of H&B, namely that it's flawed like all other science but you could basically take its bigger results for granted as true and meaningful; but as I read Dawes' book I felt betrayed: the research was clearly so flawed, brittle, and easily misinterpreted that there's no way building an edifice of "rationality" on top of it could be justifiable. A lot of interesting research has surely gone on since that book was written, but even so, that the foundations of the field are so shoddy indicates that the field in general might be non-negligibly cargo cult science. (Dawes even takes a totally uncalled for and totally incorrect potshot at Christians in the middle of the book; this seems relatively innocuous, but remember that Eliezer's naive readers are doing the same thing when they try to apply H&B results to the reasoning of normal/superstitious/religious folk. It's the same failure mode; you have these seemingly solid results, now you can clearly demonstrate how your enemies' reasoning is wrong and contemptible, right? It's disturbing that this attitude is held even by some of the most-respected researchers in the field.)

I remain stressed and worried about Eliezer, Anna, and Julia's new organization for similar reasons; I've seen people (e.g. myself) become much better thinkers due to hanging out with skilled thinkers like Anna, Steve Rayhawk, Peter de Blanc, Michael Vassar, et cetera; but this improvement had nothing to do with "debiasing" as such, and had everything to do with spending a lot of time in interesting conversations. I have little idea why Eliezer et al think they can give people anything more than social connections and typical self-help improvements that could be gotten from anywhere else, unless Eliezer et al plan on spending a lot of time actually talking to people about actual unsolved problems and demonstrating how rationality works in practice.

but, are there any alternatives for working toward more effective thinking?

Finding a mentor or at least some peers and talking to them a lot seems to work somewhat, having high intelligence seems pretty important, not being neurotypical seems as important as high intelligence, reading a ton seems very important but I'm not sure if it's as useful for people who don't start out schizotypal. I think that making oneself more schizotypal seems like a clear win but I don't know how one would go about doing it; maybe doing a lot of nitrous or ketamine, but um, don't take my word for it. There's a fundamental skill of taking some things very seriously and other things not seriously at all that I don't know how to describe or work on directly. Yeah, I dunno; but it seems a big thing that separates the men from the boys and that is clearly doable is just reading a ton of stuff and seeing how it's connected, and building lots of models of the world based on what you read until you're skilled at coming up with off-the-cuff hypotheses. That's what I spend most of my time doing. I'm certain that getting good at chess helps your rationality skills and I think Michael Vassar agrees with me; I definitely notice that some of my chess-playing subskills for thinking about moves and counter-moves get used more generally when thinking about arguments and counter-arguments. (I'm rated like 1800 or something.)

Replies from: gwern, Eugine_Nier, XiXiDu
comment by gwern · 2012-03-13T01:34:53.729Z · LW(p) · GW(p)

E.g. I've learned a potentially very powerful magick spell in the meantime.

Well shoot, don't tell us about it - our disbelief might stop it from working.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-13T01:39:50.222Z · LW(p) · GW(p)

If I don't tell you what it is or what it does then I think I'm okay. Admittedly I don't have much experience in the field.

comment by Eugine_Nier · 2012-03-14T23:31:34.110Z · LW(p) · GW(p)

If there's a third hallmark of LessWrong then it's microeconomics and game theory, especially Schelling's style of game theory, but unfortunately it gets relatively neglected and the posts applying Schellingian and Bayesian reasoning to complex problems of social signaling hermeneutics are very few and far-between.

I blame the fact that Eliezer doesn't have a sequence talking about them.

comment by XiXiDu · 2012-03-12T20:02:35.704Z · LW(p) · GW(p)

I have little idea why Eliezer et al think they can give people anything more than social connections and typical self-help improvements...

Less Wrong was created with the goal of getting people to support SIAI.

Less Wrong is mainly a set of beliefs and arguments selected for their persuasiveness in convincing people that creating friendly AI is of utmost importance (follow the link above if you think I am wrong).

The two hallmarks of LessWrong are H&B and Bayesian probability: the latter is often abused...

I believe that the most abused field is artificial intelligence. The ratio of evidence to claims about artificial intelligence is extremely low.

comment by Jonathan_Graehl · 2011-07-05T18:56:48.210Z · LW(p) · GW(p)

Your last graf is tantalizing but incomprehensible. Expound, please.

comment by J_Taylor · 2012-01-25T02:44:05.796Z · LW(p) · GW(p)

If you possessed a talent for writing decent prose, you could be the next Lovecraft. Mind, Lovecraft's prose was less-than-decent, but that is beside the point.

My default position is that suffering as we know it is fundamentally tied in with extremely important and extremely complex social decision theoretic game theoretic calculus modeling stuff, and also all that metaphysics stuff. I will non-negligibly update if someone can show me a good experiment demonstrating something like "learned helplessness" in non-hominids or non-things-that-hunted-in-packs-for-a-long-time-then-were-artificially-molded-into-hominid-companions. That high-citation rat study looked like positive bias upon brief inspection, but maybe that was positive bias.

Aside from this paragraph, I am almost entirely unsure what you were stating in that post. However, it produced feelings of interest and dread.

By chance, do you have any capacity to summarize it? If this is the case, would you please be willing to do so?

comment by [deleted] · 2012-01-16T23:14:33.445Z · LW(p) · GW(p)

Do note that much brain research is exaggeration and lies, especially about the ACC, as I had the unfortunate pleasure of discovering recently.

Mind expanding on that?

Also, "Recursive Universal Dovetailing Measure-Utility Inequality Theorem" is an extremely awesome name. Phrased like this I actually finally got why you're raving so much about Leibniz. Gotta try re-reading him from that perspective. Your comments really should come with a challenge rating and "prerequisites: Kolmogorov level 5, feat Bicameral Mind" list.

comment by atucker · 2011-07-05T08:23:47.482Z · LW(p) · GW(p)

my primary question for today: would you expect a positive or a negative correlation between mental ability and ability to feel pain? Most people unthinkingly assume a positive correlation, but why?

Basically, I think that the extent to which my brain labels something as suffering is based on my ability to empathize with that thing on an emotional level. My ability to do that is based on how well my mirror neurons can induce a similar feeling in me.

This winds up basically saying that I care about/notice suffering based on how similar a being is to myself. I see humans losing loved ones and being injured as suffering. When my dog cowers during thunderstorms I think she's suffering. When I see a snake dying painfully, I feel bad, but not nearly as bad as I (imagine I) would seeing a mammal die. I think killing ants is somewhat sadistic, but I don't particularly care about the ant's death. If said ants are in my house, it's fair game to attempt a genocide on them. Plants are totally okay to murder, IMO.

The more I find out about how various animals are smart in ways similar to me, the more I feel that they can suffer.

Going back to the question, I think the reason that people think that increased intelligence means increased capacity to suffer is that for the reference class of things we've come across, intelligence correlates with similarity to ourselves as humans.

I imagine that I would empathize pretty well with a Neanderthal.

comment by Armok_GoB · 2011-07-04T19:40:11.018Z · LW(p) · GW(p)

This is something I've always taken very seriously but not found all that mysterious in itself. (Although being wrong about related things, such as values and consciousness, can amplify the problem.)

There is one thing everyone should try sometime: you have direct access to the fictional characters you write, so ask them about these very important things and how they want to be treated! Not that you'll do whatever they say, but so you know what they think to themselves about it.

comment by Lila · 2011-07-05T14:19:49.837Z · LW(p) · GW(p)

I don't intuit any particular correlation between suffering and intelligence. I am not on board with Bentham's idea that capacity for suffering is what counts, morally speaking. It's not intelligence but sapience that I find morally significant.

Replies from: syllogism, PhilGoetz
comment by syllogism · 2011-07-07T01:53:42.186Z · LW(p) · GW(p)

So the vivisection experiments would be okay, to your mind, even if all the experimenter got out of them was amusement?

You should be careful declaring that you ascribe literally zero moral weight to non-human animals. It doesn't match up with most people's moral intuitions well at all.

There also exist a lot of non-"sapient" humans, as birth defects and brain damage give us a fair continuum of humans with different mental capacities to think about.

comment by PhilGoetz · 2011-07-07T04:48:22.407Z · LW(p) · GW(p)

How is sapience different from intelligence? What do you think it means?

comment by Alexei · 2011-07-04T19:13:26.422Z · LW(p) · GW(p)

Most of us are not even ready to become vegetarian in the face of global warming

This is the best argument I've heard for becoming a vegetarian, but I get enough pleasure from eating meat to continue to do so, because I believe the global warming problem will be trivially solved with advanced technology (AI or nanotech) in the near future (<100 years). For these same reasons, I am also reconsidering whether bothering with recycling (in my personal life) is worthwhile.

Replies from: Raemon, None
comment by Raemon · 2011-07-05T17:01:58.302Z · LW(p) · GW(p)

How strongly do you believe that Global Warming will be fixed trivially by sufficiently advanced technology? How good an excuse do you think this is to basically ignore all long term consequences of your actions?

How suspicious are you of the fact that this provides an excuse to keep doing what you wanted to do anyway?

comment by [deleted] · 2011-07-04T20:58:30.838Z · LW(p) · GW(p)

My reason for being* a vegetarian is that I claim it's not fair to raise animals with a low quality of life and slaughter them - I'm sure you know about this argument, so I'm curious as to any way to discharge it?

(* I'm not yet a vegetarian but I believe I should be and will be soon)

Replies from: FiftyTwo
comment by FiftyTwo · 2011-07-05T00:44:26.388Z · LW(p) · GW(p)

Fairness as a concept only applies to beings of equal moral status. It's not fair that rocks are treated differently from humans, but that's irrelevant, as they don't possess the qualities that make humans morally significant. The question is what these qualities are and whether animals share them.

Replies from: Jayson_Virissimo, syllogism, None
comment by Jayson_Virissimo · 2011-07-05T16:31:05.416Z · LW(p) · GW(p)

Fairness as a concept only applies to beings of equal moral status...

What experiment could we run that would give us evidence for whether two beings have equal moral status or not?

comment by syllogism · 2011-07-07T01:45:22.198Z · LW(p) · GW(p)

I approximately go by Bentham's criterion for what makes humans morally significant: we can suffer, as can animals. Rocks cannot. There is no reason to believe one configuration of a rock is "better" for it than another, as it has no kind of mind. I see no reason to believe a plant has any kind of mind either.

A pig, however, does prefer some states of reality over others, to quite a great degree. I think it's reasonable to say that the conditions we raise most pigs in mean their lives are a net negative: they'd be better off experiencing nothing than experiencing the lives and deaths we create for them.

I suggest you've tied together two questions. You're working backwards from "I'm going to keep eating meat", and have wound up at a conclusion that animals must not be morally considerable, because of that. Instead, separate the issue into two questions:

1) Are animals morally considerable? Is there anything I can do to an animal that is unethical? Is it okay to kick a dog, if I gain some momentary amusement at listening to it yelp?

2) How should the trade-off between my benefit and moral consideration to others work, exactly?

comment by [deleted] · 2011-07-05T01:21:35.742Z · LW(p) · GW(p)

I must be misusing the word fair; I'm not familiar with the usage you're hinting at.

I'm not trying to anthropomorphise, but it's a reasonable extrapolation that they experience their own lives (certainly we have no real understanding of consciousness to apply to measure and prove or disprove this yet, but we cannot wait for one). With that assumption, the life of (for example) a cow as livestock can be seen as a significantly worse experience for the animal than its natural/wild life would have been, although even that is not optimal, since it is subject to predators and other bad things. To not accept this as unkind or "unfair" treatment seems to be based on the assumption that they are really unconscious automatons that regulate some meat hanging off them - essentially an algorithmic restatement of the "they don't have souls" view from the past.

comment by timtyler · 2011-07-06T11:34:42.720Z · LW(p) · GW(p)

At very least, I conclude that we have no general reason to think that non-human animals feel pain less acutely than we do, and we should in any case give them the benefit of the doubt. Practices such as branding cattle, castration without anaesthetic, and bullfighting should be treated as morally equivalent to doing the same thing to human beings.

What - on the grounds that morality is all about universally reducing suffering? That seems to be a pretty daft premise to me.

A demographic examination of those involved suggests that animal rights campaigning appears to be largely to do with the signalling function of morality. Those who promote these ideas are the ones who want to signal what goody-two-shoes they are. It is a case of: see how much I care, I even care for whales.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-07-06T16:51:10.114Z · LW(p) · GW(p)

People championing a cause for the wrong reasons does not make the cause itself invalid.

Replies from: timtyler, SilasBarta
comment by timtyler · 2011-07-06T17:01:42.315Z · LW(p) · GW(p)

People championing a cause for the wrong reasons does not make the cause itself invalid.

True. I don't think there is anything "wrong" about wanting to be seen to be good, though. That goodness is attractive and that people like to be seen to be good is one of the great things about the world. Thank goodness for those fuzzies. We could be living in a much more evilicious environment.

Replies from: endoself
comment by endoself · 2011-07-06T18:18:25.436Z · LW(p) · GW(p)

Wrong as in not-truth-seeking. If we want to find good causes, this is the wrong evidence to use.

Replies from: timtyler
comment by timtyler · 2011-07-06T19:02:10.387Z · LW(p) · GW(p)

Wrong as in not-truth-seeking.

For me that would be quite a stretch. "Wrong" doesn't have "not-truth-seeking" as a meaning in any dictionary I am familiar with.

Replies from: endoself
comment by endoself · 2011-07-06T22:23:30.450Z · LW(p) · GW(p)

I didn't find that usage unnatural at all, considering that this website is about norms that can help people seek truth. The people that Kaj was talking about are violating the norms that we discuss here.

comment by SilasBarta · 2011-07-06T17:36:15.699Z · LW(p) · GW(p)

... as long as a right reason exists in the first place.

comment by Alexei · 2011-07-04T19:22:46.851Z · LW(p) · GW(p)

I've been thinking about this issue, and to answer your question, "So where do we draw the line?", I think the answer is the following:

Take a group of creatures (humans, dogs, etc.) and let them live in a rich environment for a very, very, very long time. Graph the intelligence of the individual creatures over time. If it isn't consistently increasing overall, then that kind of creature isn't smart enough to worry about.
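
A minimal sketch of what this test might look like in code, assuming you already had per-creature intelligence measurements over time; the data, the trend check, and the `passes_test` threshold below are hypothetical illustrations, not anything specified in the comment:

```python
# Hypothetical sketch: decide whether a group of creatures shows an overall
# upward trend in measured intelligence over a long observation period.
# The measurements and the threshold are illustrative assumptions.

from statistics import mean

def overall_trend(series):
    """Least-squares slope of a time series (index = time step)."""
    n = len(series)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(series)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, series))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

def passes_test(group_measurements, min_slope=0.0):
    """The group 'passes' if the average individual trend is positive."""
    slopes = [overall_trend(individual) for individual in group_measurements]
    return mean(slopes) > min_slope

# Toy example: three individuals, each measured at five points in time.
group = [
    [10, 11, 11, 12, 13],
    [9, 9, 10, 10, 11],
    [10, 10, 9, 11, 12],
]
print(passes_test(group))  # True: the measurements trend upward on average
```

This reduces "overall consistently increasing" to a positive average least-squares slope, which is only one of several reasonable ways to operationalize the criterion.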

Some things I wonder about: do all humans (or creatures I would want to classify as humans) pass this test? What animals/non-humans pass this test?

Replies from: endoself
comment by endoself · 2011-07-04T19:31:15.516Z · LW(p) · GW(p)

Hold off on proposing solutions!

You provide an answer fully-formed with no account of how you arrived at it and without providing reasons for others to accept it. Even if you did come up with this through a valid procedure, you aren't providing evidence of having done so.