Preliminary thoughts on moral weight
post by lukeprog · 2018-08-13T23:45:13.430Z · LW · GW
This post adapts some internal notes I wrote for the Open Philanthropy Project, but they are merely at a "brainstorming" stage, and do not express my "endorsed" views nor the views of the Open Philanthropy Project. This post is also written quickly and not polished or well-explained.
My 2017 Report on Consciousness and Moral Patienthood tried to address the question of "Which creatures are moral patients?" but it did little to address the question of "moral weight," i.e. how to weigh the interests of different kinds of moral patients against each other:
For example: suppose we conclude that fishes, pigs, and humans are all moral patients, and we estimate that, for a fixed amount of money, we can (in expectation) dramatically improve the welfare of (a) 10,000 rainbow trout, (b) 1,000 pigs, or (c) 100 adult humans. In that situation, how should we compare the different options? This depends (among other things) on how much “moral weight” we give to the well-being of different kinds of moral patients.
Thus far, philosophers have said very little about moral weight (see below). In this post I lay out one approach to thinking about the question, in the hope that others might build on it or show it to be misguided.
Proposed setup
For the simplicity of a first-pass analysis of moral weight, let's assume a variation on classical utilitarianism according to which the only thing that morally matters is the moment-by-moment character of a being's conscious experience. So e.g. it doesn't matter whether a being's rights are respected/violated or its preferences are realized/thwarted, except insofar as those factors affect the moment-by-moment character of the being's conscious experience, by causing pain/pleasure, happiness/sadness, etc.
Next, and again for simplicity's sake, let's talk only about the "typical" conscious experience of "typical" members of different species when undergoing various "canonical" positive and negative experiences, e.g. consuming species-appropriate food or having a nociceptor-dense section of skin damaged.
Given those assumptions, when we talk about the relative "moral weight" of different species, we mean to ask something like "How morally important is 10 seconds of a typical human's experience of [some injury], compared to 10 seconds of a typical rainbow trout's experience of [that same injury]?"
For this exercise, I'll separate "moral weight" from "probability of moral patienthood." Naively, you could then multiply your best estimate of a species' moral weight (using humans as the baseline of 1) by P(moral patienthood) to get the species' "expected moral weight" (or whatever you want to call it). Then, to estimate an intervention's potential benefit for a given species, you could multiply [expected moral weight of species] × [individuals of species affected] × [average # of minutes of conscious experience affected across those individuals] × [average magnitude of positive impact on those minutes of conscious experience].
However, I say "naively" because this doesn't actually work, due to two-envelope effects.
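As a minimal sketch of the naive calculation just described (all numbers made up, and the two-envelope caveat still applies), one might compute something like:

```python
# A minimal sketch of the naive "expected benefit" product described above.
# All inputs are made-up, purely illustrative numbers.

def naive_expected_benefit(moral_weight, p_patienthood, n_individuals,
                           minutes_affected, magnitude):
    """Expected moral weight x individuals affected x minutes x magnitude of impact."""
    expected_moral_weight = moral_weight * p_patienthood
    return expected_moral_weight * n_individuals * minutes_affected * magnitude

# e.g. an intervention improving 60 minutes of experience for 10,000 rainbow trout
print(naive_expected_benefit(moral_weight=0.01, p_patienthood=0.6,
                             n_individuals=10_000, minutes_affected=60,
                             magnitude=0.5))
```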
Potential dimensions of moral weight
What features of a creature's conscious experience might be relevant to the moral weight of its experiences? Below, I describe some possibilities that I previously mentioned in Appendix Z7 of my moral patienthood report.
Note that any of the features below could be (and in some cases, very likely are) hugely multidimensional. For simplicity, I'm going to assume a unidimensional characterization of them, e.g. what we'd get if we looked only at the first principal component in a principal component analysis of a hugely multidimensional phenomenon.
Clock speed of consciousness
Perhaps animals vary in their "clock speed." E.g. a hummingbird reacts to some things much faster than I ever could. If any of that is under conscious control, its "clock speed" of conscious experience seems like it should be faster than mine, meaning that, intuitively, it should have a greater number of subjective "moments of consciousness" per objective minute than I do.
In general, smaller animals probably have faster clock speeds than larger ones, for mechanical reasons:
The natural oscillation periods of most consciously controllable human body parts are greater than a tenth of a second. Because of this, the human brain has been designed with a matching reaction time of roughly a tenth of a second. As it costs more to have faster reaction times, there is little point in paying to react much faster than body parts can change position.
…the first resonant period of a bending cantilever, that is, a stick fixed at one end, is proportional to its length, at least if the stick’s thickness scales with its length. For example, sticks twice as long take twice as much time to complete each oscillation. Body size and reaction time are predictably related for animals today… (Hanson 2016, ch. 6)
My impression is that it's a common intuition to value experience by its "subjective" duration rather than its "objective" duration, with no discount. So if a hummingbird's clock speed is 3x as fast as mine, then all else equal, an objective minute of its conscious pleasure would be worth 3x an objective minute of my conscious pleasure.
Unities of consciousness
Philosophers and cognitive scientists debate how "unified" consciousness is, in various ways. Our normal conscious experience seems to many people to be pretty "unified" in various ways, though sometimes it feels less unified, for example when one goes "in and out of consciousness" during a restless night's sleep, or when one engages in certain kinds of meditative practices.
Daniel Dennett suggests that animal conscious experience is radically less unified than human consciousness is, and cites this as a major reason he doesn't give most animals much moral weight.
For convenience, I'll use Bayne (2010)'s taxonomy of types of unity. He talks about subject unity, representational unity, and phenomenal unity — each of which has a "synchronic" (momentary) and "diachronic" (across time) aspect of unity.
Subject unity
Bayne explains:
My conscious states possess a certain kind of unity insofar as they are all mine; likewise, your conscious states possess that same kind of unity insofar as they are all yours. We can describe conscious states that are had by or belong to the same subject of experience as subject unified. Within subject unity we need to distinguish the unity provided by the subject of experience across time (diachronic unity) from that provided by the subject at a time (synchronic unity).
Representational unity
Bayne explains:
Let us say that conscious states are representationally unified to the degree that their contents are integrated with each other. Representational unity comes in a variety of forms. A particularly important form of representational unity concerns the integration of the contents of consciousness around perceptual objects—what we might call ‘object unity’. Perceptual features are not normally represented by isolated states of consciousness but are bound together in the form of integrated perceptual objects. This process is known as feature-binding. Feature-binding occurs not only within modalities but also between them, for we enjoy multimodal representations of perceptual objects.
I suspect many people wouldn't treat representational unity as all that relevant to moral weight. E.g. there are humans with low representational unity of a sort (e.g. visual agnosics); are their sensory experiences less morally relevant as a result?
Phenomenal unity
Bayne explains:
Subject unity and representational unity capture important aspects of the unity of consciousness, but they don’t get to the heart of the matter. Consider again what it’s like to hear a rumba playing on the stereo whilst seeing a bartender mix a mojito. These two experiences might be subject unified insofar as they are both yours. They might also be representationally unified, for one might hear the rumba as coming from behind the bartender. But over and above these unities is a deeper and more primitive unity: the fact that these two experiences possess a conjoint experiential character. There is something it is like to hear the rumba, there is something it is like to see the bartender work, and there is something it is like to hear the rumba while seeing the bartender work. Any description of one’s overall state of consciousness that omitted the fact that these experiences are had together as components, parts, or elements of a single conscious state would be incomplete. Let us call this kind of unity — sometimes dubbed ‘co-consciousness’ — phenomenal unity.
Phenomenal unity is often in the background in discussions of the ‘stream’ or ‘field’ of consciousness. The stream metaphor is perhaps most naturally associated with the flow of consciousness — its unity through time — whereas the field metaphor more accurately captures the structure of consciousness at a time. We can say that what it is for a pair of experiences to occur within a single phenomenal field just is for them to enjoy a conjoint phenomenality — for there to be something it is like for the subject in question not only to have both experiences but to have them together. By contrast, simultaneous experiences that occur within distinct phenomenal fields do not share a conjoint phenomenal character.
Unity-independent intensity of valenced aspects of consciousness
A common report of those who take psychedelics is that, while "tripping," their conscious experiences are "more intense" than they normally are. Similarly, pains of the same kind can differ in intensity: e.g. when my stomach is upset, the intensity of my stomach pain waxes and wanes a fair bit, until it gradually fades to the point of not being noticeable anymore. The same goes for conscious pleasures.
It's possible such variations in intensity are entirely accounted for by their degrees of different kinds of unity, or by some other plausible feature(s) of moral weight, but maybe not. If there is some additional "intensity" variable for valenced aspects of conscious experience, it would seem a good candidate for affecting moral weight.
From my own experience, my guess is that I would endure ~10 seconds of the most intense pain I've ever experienced to avoid experiencing ~2 months of the lowest level of discomfort that I'd bother to call "discomfort." That very low level of discomfort might suggest a lower bound on "intensity of valenced aspects of experience" that I intuitively morally care about, but "the most intense pain I've ever experienced" probably is not the highest intensity of valenced aspects of experience it is possible to experience — probably not even close. You could consider similar trades to get a sense for how much you intuitively value "intensity of experience," at least in your own case.
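As a rough illustration of what such a trade implies, here is a back-of-the-envelope calculation that assumes (purely for simplicity) that suffering aggregates linearly over time:

```python
# Back-of-the-envelope arithmetic for the trade described above, assuming
# (purely for illustration) that suffering aggregates linearly over time.
worst_pain_seconds = 10
mild_discomfort_seconds = 2 * 30 * 24 * 60 * 60   # ~2 months, about 5.18 million seconds

# If ~10 s of the worst pain trades against ~2 months of barely-noticeable
# discomfort, the implied intensity ratio between the two is roughly:
print(mild_discomfort_seconds / worst_pain_seconds)  # 518,400, i.e. on the order of 500,000 : 1
```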
Moral weights of various species
(This section edited slightly on 2020-02-26.)
If we thought about all this more carefully and collected as much relevant empirical data as possible, what moral weights might we assign to different species?
Whereas my probabilities of moral patienthood for any animal as complex as a crab only range from 0.2 to 1, the plausible ranges of moral weight seem like they could be much larger. I don't feel like I'd be surprised if an omniscient being told me that my extrapolated values would assign pigs more moral weight than humans, and I don't feel like I'd be surprised if an omniscient being told me my extrapolated values would assign pigs a moral weight of 0.0001 (assuming they were moral patients at all).
To illustrate how this might work, below are some guesses at some "plausible ranges of moral weight" (80% prediction interval) for a variety of species that someone might come to, if they had intuitions like those explained below.
- Humans: 1 (baseline)
- Chimpanzees: 0.001 - 2
- Pigs: 0.0005 - 3.5
- Cows: 0.0001 - 5
- Chickens: 0.00005 - 10
- Rainbow trout: 0.00001 - 13
- Fruit fly: 0.000001 - 20
(But whenever you're tempted to multiply such numbers by something, remember two-envelope effects!)
What intuitions might lead to something like these ranges?
- An intuition to not place much value on "complex/higher-order" dimensions of moral weight — such as "fullness of self-awareness" or "capacity for reflecting on one's holistic life satisfaction" — above and beyond the subjective duration and "intensity" of relatively "brute" pleasure/pain/happiness/sadness that (in humans) tends to accompany reflection, self-awareness, etc.
- An intuition to care more about subject unity and phenomenal unity than about such higher-order dimensions of moral weight.
- An intuition to care most of all about clock speed and experience intensity (if intensity is distinct from unity).
- Intuitions that if the animal species listed above are conscious, they:
- have very little of the higher-order dimensions of conscious experience,
- have faster clock speeds than humans (the smaller the faster),
- probably have lower "intensity" of experience, but might actually have somewhat greater intensity of experience (e.g. because they aren't distracted by linguistic thought),
- have moderately less subject unity and phenomenal unity, especially of the diachronic sort.
Under these intuitions, the low end of the ranges above could be explained by the possibility that intensity of conscious experience diminishes dramatically with brain complexity and flexibility, while the high end of the ranges above could be explained by the possibility concerning faster clock speeds for smaller animals, the possibility of lesser unity in non-human animals (which one might value at >1x for the same reason one might value a dually-conscious split-brain patient at ~2x), and the possibility for greater intensity of experience in simpler animals.
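One crude way to see how intuitions like these could generate both ends of such ranges is to treat the factors as multiplicative, relative to a human baseline of 1 on each dimension. This is only a toy sketch with made-up numbers, not a claim about how the factors actually combine:

```python
# A toy multiplicative sketch (made-up numbers, not the post's model) of how
# the intuitions above could yield moral weights on either side of 1.

def crude_weight(clock_speed_ratio, unity_factor, intensity_factor):
    """All factors are relative to a typical human (1.0 on each dimension)."""
    return clock_speed_ratio * unity_factor * intensity_factor

# High-end scenario for a small animal: faster clock speed, lesser unity
# valued at more than 1x, somewhat greater intensity of experience.
print(crude_weight(clock_speed_ratio=3.0, unity_factor=1.5, intensity_factor=2.0))   # 9.0

# Low-end scenario: intensity diminishes dramatically with lower brain
# complexity, swamping the other factors.
print(crude_weight(clock_speed_ratio=3.0, unity_factor=0.5, intensity_factor=1e-4))  # 0.00015
```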
Other writings on moral weight
- Brian Tomasik: Is animal suffering less bad than human suffering?; Which computations do I care about?; Is brain size morally relevant?; Do Smaller Animals Have Faster Subjective Experiences?; Two-Envelopes Problem for Uncertainty about Brain-Size Valuation and Other Moral Questions
- Nick Bostrom: Quantity of Experience
- Kevin Wong: Counting Animals
- Oscar Horta: Questions of Priority and Interspecies Comparisons of Happiness
- Adler et al., Would you choose to be happy? Tradeoffs between happiness and the other dimensions of life in a large population survey
49 comments
comment by habryka (habryka4) · 2018-08-16T19:02:43.993Z · LW(p) · GW(p)
Promoted to curated: The general topic of moral patienthood strikes me as important on two fronts. One, it's important in terms of understanding our values and taking good actions; and two, it's important as an area in which I think it's pretty clear that human thinking is still confused, and so for many people I think it's a good place to try to dissolve confused questions and train the relevant skills of rationality. While this post is less polished than your much longer moral patienthood report, I think for most people it will be a better place to start engaging with this topic, at least in part because its length isn't as daunting.
On the more object level, I think this post makes some quite interesting points that have changed my thinking a good bit. I think most people have not considered the hypothesis that animals could be assigned a higher moral value than humans, and independently of the truth value of that hypothesis, I think the evidence presented helps people realize a bunch of implicit constraints in their thinking around moral patienthood. I've heard similar things from other people who've read the post.
It's also great to see you write a post on LW again, and I strongly recommend that newcomers read lukeprog's other writing on LW, if they haven't done so.
↑ comment by Rob Bensinger (RobbBB) · 2018-08-25T02:50:37.638Z · LW(p) · GW(p)
I agree with this, and I agree with Luke that non-human animals could plausibly have much higher (or much lower) moral weight than humans, if they turned out to be moral patients at all.
↑ comment by Rob Bensinger (RobbBB) · 2018-08-25T03:08:55.235Z · LW(p) · GW(p)
It may be worth emphasizing that "plausible ranges of moral weight" are likely to get a lot wider when we move from classical utilitarianism to other reasonably-plausible moral theories (even before we try to take moral uncertainty into account).
comment by Ben Pace (Benito) · 2018-08-14T16:56:15.621Z · LW(p) · GW(p)
I thought I'd try to re-state the three types of unity in my own words, to test my understanding.
- Subject unity: Are the experiences owned by the same person? Perhaps you are a different person every second, or perhaps there are two separate streams of consciousness running in your head, meaning two experiences do not have 'subject unity'.
- Representational unity: It is the case that the difference between two successive squares is always odd. Now suppose that you haven't yet learned this fact, but I present to you two successive squares, and you notice that the difference is odd. You have a representation of the two successive squares, and the additional fact that the difference is odd. Then I show you a proof that the difference between any two successive squares is odd, to the point where you understand it so intuitively that your model is now just the two squares. If I ask you whether the difference is odd, you don't even have to calculate the number; it's just a direct function of your representation. It's a simpler representation now. And so the odd-ness is now unified representationally with the successor-squares-ness, even though they previously used to be two separate facts.
- Phenomenal unity: There are many parts that make a blues song. Certain kinds of topics and words, certain kinds of guitar and other instruments, certain ways of playing them, and certain ways of dressing while you're singing. Each of these alone is a single experience that is not '1/5th of a blues song', yet together they build a whole experience that was not contained in any single part, but in the interaction between them. The fact that they were experienced together is a fact over and above the fact that they were each experienced, and so (when experienced together) they have phenomenal unity.
I have not read much in this field before, and expect all three of my descriptions are wrong in some significant way. I wrote this comment so that someone else would have a datapoint to triangulate any misconceptions off (i.e. correcting me could help communicate the core concepts).
Edit: An actually true mathematical example for representational unity, thanks Daniel Filan.
comment by lukeprog · 2019-07-31T21:37:22.822Z · LW(p) · GW(p)
Interesting historical footnote from Louis Francini:
This issue of differing "capacities for happiness" was discussed by the classical utilitarian Francis Edgeworth in his 1881 Mathematical Psychics (pp 57-58, and especially 130-131). He doesn't go into much detail at all, but this is the earliest discussion of which I am aware. Well, there's also the Bentham-Mill debate about higher and lower pleasures ("It is better to be a human being dissatisfied than a pig satisfied"), but I think that may be a slightly different issue.
comment by TAG · 2018-08-14T15:50:19.844Z · LW(p) · GW(p)
Are moral weights supposed to be real properties that are out there? If they are not, if they are just projections of human concern, then what would be right about the right answer? How will you know when you have solved the problem?
comment by avturchin · 2018-08-14T11:32:26.296Z · LW(p) · GW(p)
Probably it was addressed somewhere in the links above, but I would like to mention that two ants are more likely to be exact copies of each other (in terms of similarity of their observer-moments) than two humans are. Thus, although the number of ants is larger than the number of humans, the universe of possible human experiences is larger than that of ant experiences, including different observer-moments of suffering.
If we assume that two instances of the same observer-moment should be regarded as one, human suffering-moments would dominate ant suffering-moments in the set of all possible experiences.
↑ comment by Rob Bensinger (RobbBB) · 2018-08-25T03:07:21.892Z · LW(p) · GW(p)
This is an interesting point I plausibly haven't noticed / thought about enough!
↑ comment by rossry · 2018-08-15T05:55:41.474Z · LW(p) · GW(p)
How many distinct possible ant!observer-moments are there? What is the entropy of their distribution in the status quo?
How many distinct possible human!observer-moments are there? What is the entropy of their distribution in the status quo?
(Confidence intervals okay; I just have no intuition about these quantities, and you seem to have considered them, so I'm curious what estimates you're working with.)
↑ comment by avturchin · 2018-08-15T09:36:24.030Z · LW(p) · GW(p)
Just some preliminary thoughts.
In 1984 there was a big problem in the Soviet Union: the butterfly population declined because children were hunting them with butterfly nets. To solve this problem, the Soviet government banned the sale of such nets. I remember that one summer my parents were not able to buy me such a net.
Today it is obvious that this was not the biggest problem for the Soviet Union, which would face its own existential catastrophe just a few years later. In the same way, discussing the moral value of insects now may be an opportunity cost if we want to prevent x-risks. Anyway, let's go.
Let's look first at the number of an ant's observer-moments. One way to estimate them is to use the number of facets in an insect's eye, which is about 30,000 for a dragonfly. Assuming binary vision in insects, that gives 2^30,000 different images an ant could see. Humans have about 7 million color-sensitive cone cells in each eye, which implies 8^7,000,000 different possible images a human eye could see. This is 8^6,990,000 times more than the ant's possible observer states. A similar result could be obtained by comparing the brain sizes, in neurons, of a human and an ant.
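(A quick sanity check of the exponent arithmetic above, taking the binary-facet and eight-level-cone simplifications at face value:)

```python
import math

# Check of the exponent arithmetic, using the comment's simplifications:
# 30,000 binary facets for the insect eye; 7,000,000 cone cells with 8 levels each.
ant_states_log2 = 30_000                      # log2(2^30,000)
human_states_log2 = 7_000_000 * math.log2(8)  # log2(8^7,000,000) = 21,000,000
ratio_log2 = human_states_log2 - ant_states_log2
print(ratio_log2 / math.log2(8))              # 6990000.0, i.e. a ratio of 8^6,990,000
```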
Good question about entropy. It could also be assumed that "normal" states of consciousness for humans are more diverse than for ants. "Normal states" are those which one experiences during one's normal life, not under the effect of random generators combined with powerful hallucinogens. The less diverse a species is in its experience, the more likely its members are to have exact copies of observer-moments among them.
Another part of the entropy question is the ability of a human (or an ant) to distinguish two states of its consciousness as different, probably by producing different reactions to them. Here humans enormously outperform ants, as we can give long textual descriptions of all the nuances of our experiences. These could also be estimated by comparing the complexity of all possible phrases describing human situations with the set of an ant's typical reactions to new objects (assuming here that ants are capable only of typical reactions, which may not be true).
↑ comment by rossry · 2018-08-15T12:32:28.123Z · LW(p) · GW(p)
Right, so if we're using a uniform distribution over 2^30000, there should be exactly zero ants sharing observer-moments, so in order to argue that ants' overlap in observer-moments should discount their total weight, we're going to need to squeeze that space a lot harder than that.
I've also spent some time recently staring at ~randomly generated grids of color for an unrelated project, and I think there's basically no way that the human visual system is getting so much as 5000 bits of entropy (i.e., a 50x50 grid of four-color choices) out of the observer-experience of the visual field. So I think using 2^#receptors is just the wrong starting point. Similarly, assuming that neurons operate independently is going to give you a number in entirely the wrong realm. (Wikipedia says an ant has ~250,000 neurons.)
I think that if you want to get to the belief that two ants might ever actually share an experience, you're going to need to work in a significantly smaller domain, like your suggestion of output actions, though applying the domain of "typical reactions of a human to new objects" is going to grossly undercount the number of human possible observer-experiences, so now I'm back to being stuck wondering how to do that at all.
↑ comment by avturchin · 2018-08-15T14:46:11.744Z · LW(p) · GW(p)
If we take the multiverse view, there will be copies, but what we need is not actual copies but a measure of the uniqueness of each observer-moment, which could be calculated as a ratio of the frequencies of copies for humans and for ants.
The problem could be made more practical by asking how many computational resources we (or a future FAI) would need to resurrect all possible humans and all possible ants.
comment by Kaj_Sotala · 2018-08-14T10:58:23.270Z · LW(p) · GW(p)
by the possibility concerning faster clock speeds for smaller animals, the possibility of lesser unity in non-human animals (which one might value at >1x for the same reason one might value a dually-conscious split-brain patient at ~2x), and the possibility for greater intensity of experience in simpler animals.
Interestingly, some variant of each of these would also seem to apply when comparing the moral weight of adult humans vs. infant/toddler humans; while human infants probably don't have a higher clock speed than adults in the same sense that small animals might have a higher clock speed than humans, there is the widely known point that young children nonetheless seem to have a much higher subjective speed than adults.
↑ comment by moridinamael · 2018-08-14T15:10:19.215Z · LW(p) · GW(p)
And if we are willing to ascribe moral weight to fruit flies, there must also be some corresponding non-zero moral weight to early-term human fetuses.
comment by Qiaochu_Yuan · 2018-08-14T18:52:27.014Z · LW(p) · GW(p)
This whole conversation makes me deeply uncomfortable. I expect to strongly disagree at pretty low levels with almost anyone else trying to have this conversation, I don't know how to resolve those disagreements, and meanwhile I worry about people seriously advocating for positions that seem deeply confused to me and those positions spreading memetically.
For example: why do people think consciousness has anything to do with moral weight?
↑ comment by ESRogs · 2018-08-15T05:07:42.781Z · LW(p) · GW(p)
why do people think consciousness has anything to do with moral weight?
Is there anything that it seems to you likely does have to do with moral weight?
I feel pretty confused about these topics, but it's hard for me to imagine that conscious experience wouldn't at least be an input into judgments I would endorse about what's valuable.
↑ comment by lukeprog · 2018-08-14T18:57:48.389Z · LW(p) · GW(p)
For anyone who is curious, I cite much of the literature arguing over criteria for moral patienthood/weight in the footnotes of this section of my original moral patienthood report. My brief comments on why I've focused on consciousness thus far are here.
↑ comment by habryka (habryka4) · 2018-08-14T19:48:51.847Z · LW(p) · GW(p)
(You have to press space after finishing some markdown syntax to have it be properly parsed. Fixed it for you, and sorry for the confusion.)
↑ comment by Kaj_Sotala · 2018-08-15T14:42:52.596Z · LW(p) · GW(p)
why do people think consciousness has anything to do with moral weight?
One of my strongest moral intuitions is that suffering is bad, meaning that it's good to help other minds not-suffer. Minds can only suffer if they are conscious.
↑ comment by Dagon · 2018-08-15T16:41:09.679Z · LW(p) · GW(p)
Interesting, that is not a terribly strong intuition for me. I'm willing to suffer some amount for some causes, so at least it's not fundamental and universal.
The intuition that feels more fundamental is that joy should be maximized, and suffering is (in many cases) a reduction in joy. Which gets to "useless suffering is bad", but that's a lot weaker than "suffering is bad".
Anyhow, I suspect this difference in intuition is a deep enough disagreement that it makes it difficult to fully agree on moral values. Both are about consciousness, though, so we at least agree there. I wonder what the moral intuitions are that make one think consciousness is not central.
↑ comment by ESRogs · 2018-08-15T17:14:20.498Z · LW(p) · GW(p)
I suspect this difference in intuition is a deep enough disagreement that it makes it difficult to fully agree on moral values.
It's not clear to me, from what's written here, that you two even disagree at all. Kaj says, "suffering is bad." You say, "useless suffering is bad."
Are you sure Kaj wouldn't also agree that suffering can sometimes be useful?
↑ comment by Kaj_Sotala · 2018-08-15T18:08:23.203Z · LW(p) · GW(p)
Yeah, "suffering is bad" doesn't mean that I would never accept trades which involved some amount of suffering. Especially since trying to avoid suffering tends to cause more of it in the long run [LW(p) · GW(p)], so even if you only cared about reducing suffering (which I don't), you'd still want to take actions involving some amount of suffering.
Compare: even if you want to have a lot of money, never spending any money (e.g. on investments) isn't a very good strategy, even though your stated goal implies that spending money is bad.
↑ comment by Dagon · 2018-08-15T18:41:12.178Z · LW(p) · GW(p)
Hmm, the money analogy misses me too. I'd never say "spending money is bad", even as shorthand for something, as it's simply not a base-level truth. I think of money as a lifetime flow rather than an instantaneous stock, and failing in your goals when you have unspent money is clearly a mistake.
I suspect we do agree on a lot of intuitions, but also disagree on the modeling of which of those are fundamental vs situational.
comment by Jameson Quinn (jameson-quinn) · 2020-01-11T00:17:42.559Z · LW(p) · GW(p)
This does exactly what it sets out to do: presents an issue, shows why we might care, and lays out some initial results (including both intuitive and counterintuitive ones). It's not world-shaking for me, but it certainly carries its weight.
comment by Raemon · 2019-12-02T03:28:03.189Z · LW(p) · GW(p)
It's fairly rare (lately) that I've read something that meaningfully shifted my distribution of "what sorts of moral and/or consciousness theories I'm likely to subscribe to after more learning/reflection."
I think this probably mostly has to do with me being in a valley where there are a lot of "relatively easy" concepts I've already learned, and then [potentially] harder concepts that I'd have to put a lot of work into understanding. (I did kinda bounce off Luke's longer post on consciousness, although I think that had more to do with length than it being over my head.)
But this post seemed well targeted towards 2018_raemon's background. I had thought about high-clockspeed being relevant for the moral relevance of digital-minds, but somehow hadn't considered that this might also make hummingbirds more morally relevant than humans.
(To be clear, all of this is hedged with massive uncertainty, and I currently don't expect to end up believing hummingbirds are more relevant. But it felt like a big shift in how I carved up the space of possibilities)
comment by Richard_Ngo (ricraz) · 2018-08-14T11:45:53.412Z · LW(p) · GW(p)
Interesting points! I hadn't seriously considered the possibility of animals having more moral weight per capita than humans, but I guess it makes some sense, even if it's implausible. Two points:
1. Are the ranges conditional on each species being moral patients at all? If not, it seems like there'd be enough probability mass on 0 for some of the less complex animals that any reasonable confidence interval should include it.
2. What are your thoughts on pleasure/pain asymmetries? Would your ranges for the moral weight of positive experiences alone be substantially different to the ones above? To me, it makes intuitive sense that animals can feel pain in roughly the same way we do, but the greatest happiness I experience is so wrapped up in my understanding of the overall situation and my expectations for the future that I'm much less confident that they can come anywhere close.
↑ comment by lukeprog · 2018-08-14T19:02:27.195Z · LW(p) · GW(p)
Yes, I meant to be describing ranges conditional on each species being moral patients at all. I previously gave my own (very made-up) probabilities for that here. Another worry to consider, though, is that many biological/cognitive and behavioral features of a species are simultaneously (1) evidence about their likelihood of moral patienthood (via consciousness), and (2) evidence about features that might affect their moral weight *given* consciousness/patienthood. So, depending on how you use that evidence, it's important to watch out for double-counting.
I'll skip responding to #2 for now.
comment by Jacy Reese Anthis (Jacy Reese) · 2018-08-15T11:48:28.228Z · LW(p) · GW(p)
I don't think the two-elephants problem is as fatal to moral weight calculations as you suggest (e.g. "this doesn't actually work"). The two-envelopes problem isn't a mathematical impossibility; it's just an interesting example of mathematical sleight-of-hand.
Brian's discussion of two-envelopes is just to point out that moral weight calculations require a common scale across different utility functions (e.g. the decision to fix the moral weight of a human at 1 whether you're using brain size, all-animals-are-equal, unity-weighting, or any other weighing approach). It's not to say that there's a philosophical or mathematical impossibility in doing these calculations, as far as I understand.
FYI I discussed this a little with Brian before commenting, and he subsequently edited his post a little, though I'm not yet sure if we're in agreement on the topic.
↑ comment by Brian_Tomasik · 2018-08-15T12:16:42.542Z · LW(p) · GW(p)
I think the moral-uncertainty version of the problem is fatal unless you make further assumptions about how to resolve it, such as by fixing some arbitrary intertheoretic-comparison weights (which seems to be what you're suggesting) or using the parliamentary model.
↑ comment by philh · 2018-08-17T20:18:09.180Z · LW(p) · GW(p)
Regardless of whether the problem can be resolved, I confess that I don't see how it's related to the original two-envelopes problem, which is a case of doing incorrect expected-value calculations with sensible numbers. (The contents of the envelopes are entirely comparable and can't be rescaled.)
Meanwhile, it seems to me that the elephants problem just comes about because the numbers are fake. You can do sensible EV calculations, get (a + b/4) for saving two elephants versus (a/2 + b/2) for saving one human, but because a and b are mostly-unconstrained (they just have to be positive), you can't go anywhere from there.
These strike me as just completely unrelated problems.
↑ comment by Brian_Tomasik · 2018-08-18T15:18:30.967Z · LW(p) · GW(p)
The naive form of the argument is the same between the classic and moral-uncertainty two-envelopes problems, but yes, while there is a resolution to the classic version based on taking expected values of absolute rather than relative measurements, there's no similar resolution for the moral-uncertainty version, where there are no unique absolute measurements.
↑ comment by philh · 2018-08-18T22:05:47.746Z · LW(p) · GW(p)
There's nothing wrong with using relative measurements, and using absolute measurements doesn't resolve the problem. (It hides from the problem, but that's not the same thing.)
The actual resolution is explained in the wiki article better than I could.
I agree that the naive version of the elephants problem is isomorphic to the envelopes problem. But the envelopes problem doesn't reveal an actual difficulty with choosing between two envelopes, and the naive elephants problem as described doesn't reveal an actual difficulty with choosing between humans and elephants. They just reveal a particular math error that humans are bad at noticing.
↑ comment by Jacy Reese Anthis (Jacy Reese) · 2018-08-15T12:41:02.413Z · LW(p) · GW(p)
I think most thinkers on this topic wouldn't think of those weights as arbitrary (I know you and I do, as hardcore moral anti-realists), and they wouldn't find it prohibitively difficult to introduce those weights into the calculations. Not sure if you agree with me there.
I do agree with you that you can't do moral weight calculations without those weights, assuming you are weighing moral theories and not just empirical likelihoods of mental capacities.
I should also note that I do think intertheoretic comparisons become an issue in other cases of moral uncertainty, such as with infinite values (e.g. a moral framework that absolutely prohibits lying). But those cases seem much harder than moral weights between sentient beings under utilitarianism.
comment by Ben Pace (Benito) · 2019-12-02T18:59:05.926Z · LW(p) · GW(p)
Seconding Ray. This was a bunch of important hypotheses about consciousness I had never heard of.
comment by binary_doge · 2018-08-28T01:06:47.078Z · LW(p) · GW(p)
This was an awesome read. Can you perhaps explain the listed intuition to care more about things like clock speeds than higher cognitive functions?
The way I see it, higher cognitive functions allow long-term memories and their resurfacing, and cognitive interpretation of direct suffering, like physical pain. A hummingbird might have a 3x human clock, but it might be way less emotionally scarred than a human when subjected to maximum pain for, let's say, 8 objective seconds ("emotionally scarred" being a not-well-defined way of saying that more suffering will arise later due to the pain caused in the hypothetical event). That is why, IMO, most people do assign relevance to more complicated cognitions.
comment by Vasco Grilo (vascoamaralgrilo) · 2022-05-27T15:55:01.728Z · LW(p) · GW(p)
Thanks for the post!
I was trying to use the lower and upper estimates of 5*10^-5 and 10, guessed for the moral weight of chickens relative to humans, as the 10th and 90th percentiles of a lognormal distribution. This resulted in a mean moral weight of 1000 to 2000 (the result is not stable), which seems too high, and a median of 0.02.
1- Do you have any suggestions for a more reasonable distribution?
2- Do you have any tips for stabilising the results for the mean?
I think I understand the problems of taking expectations over moral weights (E(X) is not equal to 1/E(1/X)), but believe that it might still be possible to determine a reasonable distribution for the moral weight.
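For what it's worth, the lognormal fit to those two percentiles has a closed form, which sidesteps sampling instability; here is a sketch assuming 5*10^-5 and 10 are taken as exact 10th and 90th percentiles:

```python
import math
from statistics import NormalDist

# Closed-form lognormal fit to the 10th/90th percentiles mentioned above
# (5e-5 and 10 for the moral weight of chickens relative to humans).
p10, p90 = 5e-5, 10
z90 = NormalDist().inv_cdf(0.9)              # about 1.2816

mu = (math.log(p10) + math.log(p90)) / 2     # mean of log-weight
sigma = (math.log(p90) - math.log(p10)) / (2 * z90)

median = math.exp(mu)                        # about 0.022
mean = math.exp(mu + sigma ** 2 / 2)         # about 1.9e3, consistent with "1000 to 2000"
print(median, mean)
```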
↑ comment by Vasco Grilo (vascoamaralgrilo) · 2022-06-03T15:18:13.516Z · LW(p) · GW(p)
With a loguniform distribution, the mean moral weight is stable and roughly equal to 2.
comment by the gears to ascension (lahwran) · 2018-08-15T02:42:19.714Z · LW(p) · GW(p)
hot take: utilitarianism is broken, the only way to fix it is to invent economics - you can't convert utility between agents, and when you try to do anything resembling that, you get something that works sort of like (but not exactly the same as) money.
↑ comment by Kaj_Sotala · 2018-08-15T14:48:55.780Z · LW(p) · GW(p)
That sounds like it's talking about some version of preference utilitarianism (in which utility is defined the way economics does it, and we try to maximize the sum of each agent's own utility function), whereas this post says that it's talking about classical utilitarianism. I think that for classical utilitarianism, it's enough to just know your own ideal exchange rates for different kinds of pain and pleasure, and then you can try to take actions which shift the world's overall ratio of pain/pleasure towards something that's good according to your own utility function.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-08-15T03:37:24.958Z · LW(p) · GW(p)
That’s hardly a hot take, seeing as how Oskar Morgenstern got there first (in 1975).
comment by Dagon · 2018-08-14T18:06:51.012Z · LW(p) · GW(p)
It's fun to take these calculations and apply some existential value to seriously amplify the repugnant conclusion. We should tile the universe with whatever creature has the highest moral weight per resource consumed. It's unlikely to be humans.
comment by Zvi · 2020-01-12T23:51:24.699Z · LW(p) · GW(p)
My actual honest reaction to this sort of thing: Please, please stop. This kind of thinking actively drives me and many others I know away from LW/EA/Rationality. I see it strongly as asking the wrong questions with the wrong moral frameworks, and using it to justify abominable conclusions and priorities, and ultimately the betrayal of humanity itself - even if people who talk like this don't write the last line of their arguments, it's not like the rest of us don't notice it. I don't have any idea what to say to someone who writes 'if I was told one pig was more important morally than one human I would not be surprised.'
That's not me trying to convince anyone of anything beyond that I have that reaction to this sort of thing, and that it seemed wrong for me not to say it given I'm writing reviews. No demon threads please, if I figure out how to say this in a way that would be convincing and actually explain, I'll try and do that. This is not that attempt.
↑ comment by Zack_M_Davis · 2020-01-13T03:12:24.596Z · LW(p) · GW(p)
This kind of thinking actively drives me and many others I know away from LW/EA/Rationality
And that kind of thinking (appeal to the consequence of repelling this-and-such kind of person away from some alleged "community") has been actively driving me away. I wonder if there's some way to get people to stop ontologizing "the community" and thereby reduce the perceived need to fight for control of the "LW"/"EA"/"rationalist" brand names? (I need to figure out how to stop ontologizing, because I'm exhausted from fighting.) Insofar as "rationality" is a thing [LW · GW], it's something that Luke-like optimization processes and Zvi-like optimization processes are trying to approximate, not something they're trying to fight over.
↑ comment by steven0461 · 2020-01-13T20:07:50.718Z · LW(p) · GW(p)
As usual, this makes me wish for UberFact or some other way of tracking opinion clusters.
↑ comment by Said Achmiz (SaidAchmiz) · 2020-01-13T08:09:45.010Z · LW(p) · GW(p)
I see it strongly as asking the wrong questions with the wrong moral frameworks, and using it to justify abominable conclusions and priorities, and ultimately the betrayal of humanity itself—even if people who talk like this don’t write the last line of their arguments, it’s not like the rest of us don’t notice it. I don’t have any idea what to say to someone who writes ‘if I was told one pig was more important morally than one human I would not be surprised.’
Entirely seconded; this is my reaction also.