Why maximize human life?
post by lise · 2022-01-07T14:11:04.621Z · LW · GW · 3 comments
This is a question post.
Contents
Answers: Kaj_Sotala (14), Richard Horvath (11), tailcalled (7), kh (2), Peter V. (1), CuriousApe (1), Flaglandbase (1) · 3 comments
There is an idea, or maybe an assumption, that I've seen mentioned in many LessWrong posts. This is the idea that human life should be maximized: that one of our goals should be to create as many humans as possible. Or perhaps even: that preventing humans from being born is as bad as killing living humans.
I've seen this idea used to argue for a larger point, but I haven't yet seen arguments to justify the idea itself. I only have some directional notions:
- I understand not wanting human life, and possibly the various human cultures, to die out, and we should make sure that there are enough humans to prevent that. This in no way necessitates maximization, though.
- If you accept the grabby aliens model, then it follows that humans should be grabby, because otherwise grabby aliens will eventually cause us to die out. This would also imply maximizing human life across the galaxy. However, I get the impression that this isn't the main reason people want maximization, as the other implication — that we need to be grabby — is almost never mentioned in the relevant posts.
- I could see an argument for maximizing utility, under the utilitarian framework, where you argue that creating more life would create more potential utility. However, this means that
- you should also actually create the utility for all these new lives, or they will not add to (or may even subtract from) your utility calculation; simply wanting to create lives without considering living conditions does not seem to take this into account
- it is possible that maximizing animal life, or perhaps alien or artificial life, would create more utility, as these lives might be optimized with way less effort
So I would like to hear, from people who actually hold the "maximizing human life" position, some of your explanations for why. (Or pointers to a source or a framework that explains it.)
Answers
> simply wanting to create lives without considering living conditions does not seem to take this into account
I don't think any of the people who support creating more lives believe we should do so regardless of living conditions, though they may assume that most human lives are worth living and that it takes exceptionally bad conditions for someone's life to become not worth living.
Typically people may also assume that technological and societal progress continues, thus making it even more likely than today that the average person has a life worth living. E.g. Nick Bostrom's paper Astronomical Waste notes, when talking about a speculative future human civilization capable of settling other galaxies:
> I am assuming here that the human lives that could have been created would have been worthwhile ones. Since it is commonly supposed that even current human lives are typically worthwhile, this is a weak assumption. Any civilization advanced enough to colonize the local supercluster would likely also have the ability to establish at least the minimally favorable conditions required for future lives to be worth living.
In general, you easily end up with "maximizing human lives is good (up to a point)" as a conclusion if you accept some simple premises like:
1. It's good to have humans who have lives worth living
2. Most new humans will have lives that are worth living
3. It's better to have more of a good thing than less of it
Thus, if it's good to have lives worth living (1) and most new humans will have lives that are worth living (2), then creating new lives will be mostly good. If it's better to have more of a good thing than less of it (3), and creating new lives will be mostly good, then it's better to create new lives than not to.
Now it's true that at some point we'll probably run into resource or other constraints so that the median new life won't be worth living anymore. But I think anyone talking about maximizing life is just assuming it as obvious that the maximization goal will only hold up to a certain point.
(Of course it's possible to dispute some of these premises - see e.g. here or here for arguments against. But it's also possible to accept them.)
> it is possible that maximizing animal life, or perhaps alien or artificial life, would create more utility, as these lives might be optimized with way less effort
Some of the people wanting to create more human lives might indeed agree with this! For instance, when they say "human", they might actually have in mind some technologically enhanced posthuman species that's a successor for our current species.
On the other hand, it's also possible that people who say this just intrinsically value humans in particular.
↑ comment by Said Achmiz (SaidAchmiz) · 2022-01-07T22:41:07.348Z · LW(p) · GW(p)
It seems to me that, separately from whether we accept or reject premises #1 and/or #2,[1] we should notice that premise #3 has an equivocation built into it.
Namely, it does not specify: better for whom?
After all, it makes no sense to speak of things being simply “better”, without some agent or entity whose evaluations we take to be our metric for goodness. But if we attempt to fully specify premise #3, we run into difficulties.
We could say: “it is better for a person to have more of a good[2] thing than to have less of it”. And, sure, it is. But then where does that leave premise #3? For whom is it better, that we should have more humans who have lives worth living?
For those humans? But surely this is a non sequitur; even if, for any individual person, we accept the idea that it’s better for them that they should exist than that they should not (an idea I find to be nonsensical, but that’s another story), still it’s not clear how we get from that to it being better that there should be more people…
Or are we saying that making more humans is a good thing for already existing humans? Well, perhaps it is, but then we also have to claim, and show, that this is the case—and, crucially, this renders premise #2 largely irrelevant, since “how good are the lives of newly created humans” is not necessarily relevant to the question of “how good a thing is it, for already existing humans, that we should make more humans?”
Really, the problem with this whole category of arguments, this whole style of reasoning, is that it seems to consist of an attempt to take a “view from nowhere” on the subject of goodness, desirability, what is “best”, etc. But such a view is impossible, and any attempt to take a view like this must be incoherent.
↑ comment by Kaarel (kh) · 2022-01-08T09:04:52.338Z · LW(p) · GW(p)
It could just be that a world with additional happy people is better according to my utility function, just like a world with fewer painlessly killed people per unit of time is better according to my utility function. While I agree that goodness should be "goodness for someone" in the sense that my utility function should be something like a function only of the mental states of all moral patients (at all times, etc.), I disagree with the claim that the same people have to exist in two possible worlds for me to be able to say which is better, which is what you seem to be implying in your comment. One world can be better (according to my utility function) than another because of some aggregation of the well-beings of all moral patients within it being larger. I think most people have such utility functions. Without allowing for something like this, I can't really see a way to construct an ethical model that tells essentially anything interesting about any decisions at all (at least for people who care about other people), as all decisions probably involve choosing between futures with very different sets of moral patients.
↑ comment by Said Achmiz (SaidAchmiz) · 2022-01-08T16:21:11.342Z · LW(p) · GW(p)
> … I disagree with the claim that the same people have to exist in two possible worlds for me to be able to say which is better, which is what you seem to be implying in your comment.
Not quite—but I would say that it is not possible to describe one world as “better” than another in any quantifiable or reducible way (as distinct from “better, according to my irreducible and arbitrary judgment”—to which you are, of course, entitled), unless the two worlds contain the same people (which, please note, is only a necessary, not a sufficient, criterion).
I do not believe that aggregation of well-being across individuals is possible or coherent.
(Incidentally, I am also fairly sure that most people don’t have utility functions, period, but I imagine that your use of the term was metaphorical, and in practice should be read merely as “preferences” or something similar.)
> Without allowing for something like this, I can’t really see a way to construct an ethical model that tells essentially anything interesting about any decisions at all (at least for people who care about other people), as all decisions probably involve choosing between futures with very different sets of moral patients.
Come now, this is not a sensible model of how we make decisions. If I must choose between (a) stealing my mother’s jewelry in order to buy drugs and (b) giving a homeless person a sandwich, there are all sorts of ethical considerations we may bring to bear on this question, but “choosing between futures with very different sets of moral patients” is simply irrelevant to the question. If your decision procedure in a case like this involves the consideration of far-future outcomes, requires the construction of utility aggregation procedures across large numbers of people, etc., etc., then your ethical framework is of no value and is almost certainly nonsense.
↑ comment by Vladimir_Nesov · 2022-01-08T10:03:21.622Z · LW(p) · GW(p)
> it makes no sense to speak of things being simply “better”, without some agent or entity whose evaluations we take to be our metric for goodness
If the agent/entity is hypothetical, we get an abstract preference without any actual agent/entity. And possibly a preference can be specified without specifying the rest of the agent. A metric of goodness doesn't necessarily originate from something in particular.
↑ comment by Said Achmiz (SaidAchmiz) · 2022-01-08T16:09:12.338Z · LW(p) · GW(p)
You can of course define any metric you like, but what makes it a metric of “goodness” (as opposed to a metric of something else, like “badness”, or “flevness”), unless it is constructed to reflect what some agent or entity considers to be “good”?
↑ comment by Vladimir_Nesov · 2022-01-08T16:38:42.918Z · LW(p) · GW(p)
I see human values as something built by long reflection, a heavily philosophical process where it's unclear if humans (as opposed to human-adjacent aliens or AIs) doing the work is an important aspect of the outcome. This outcome is not something any extant agent knows. Maybe indirectly it's what I consider good, but I don't know what it is, so that phrasing is noncentral. Maybe long reflection is the entity that considers it good, but for this purpose it doesn't hold the role of an agent, it's not enacting the values, only declaring them.
Kaj_Sotala provided a good answer, but I want to give an intuitive example:
If you could decide whether:
A: a single person lives on Earth, supported by an aligned AGI, with its knowledge and all the planet's resources in service of nothing but his welfare, living in an abundance not even the greatest emperors ever dreamed of.
B: a civilization of tens of billions lives on Earth, supported by an aligned AGI, thanks to which all of them have at least the living standard of a current upper-middle-class American.
I believe most people would choose option B. Of course, this is not independent of living conditions (and is greatly influenced by anchoring), but for me it captures the general "feeling" of the idea. I would formulate it along the lines of: due to diminishing returns, spending resources on increasing living standards above a certain level is wasteful; more goodness/utility is created if other humans are included.
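A minimal sketch of the diminishing-returns point, assuming a toy logarithmic utility of per-person resources (the log form and the numbers are placeholders of my own, not anything claimed in the answer above):

```python
import math

# Toy model: a fixed resource budget shared equally, with per-person utility
# = log(resources per person). The log form and numbers are illustrative only.
R = 1e12  # arbitrary resource units

def total_utility(population: int, budget: float = R) -> float:
    per_person = budget / population
    return population * math.log(per_person)

print(total_utility(1))               # ~27.6: one person with everything (option A)
print(total_utility(10_000_000_000))  # ~4.6e10: ten billion people sharing it (option B)
```

The concavity does the work here: once per-person utility grows slower than linearly in resources, spreading a fixed budget over more people increases the total, at least until per-person shares become small enough that lives stop being worth living.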
I would also like to suggest reading a sci-fi short story by one of the LessWrongers, which deals a lot with this question (among other things that are also memeworthy), especially in chapter 3:
https://timunderwoodscifi.wordpress.com/index/
I'm not sure where you get this idea from. Certainly I've seen people argue that within some ranges of conditions, more humans are better. But not some general thing that doesn't collide with the sorts of caveats you mentioned.
↑ comment by Said Achmiz (SaidAchmiz) · 2022-01-07T22:26:59.214Z · LW(p) · GW(p)
Eh? This is a very common idea on Less Wrong and in rationalist spaces. Have you really not encountered it? Like, a lot?
↑ comment by tailcalled · 2022-01-07T23:38:14.920Z · LW(p) · GW(p)
Could you give links to two examples?
↑ comment by Said Achmiz (SaidAchmiz) · 2022-01-08T16:05:50.225Z · LW(p) · GW(p)
One [LW · GW] two [LW · GW] three four [LW · GW]
↑ comment by tailcalled · 2022-01-08T16:24:45.705Z · LW(p) · GW(p)
All of these seem to engage with OP's caveats. E.g. one of OP's objections is "However, this means that you should also actually create the utility for all these new lives, or they will not add to (or even subtract from) your utility calculation", and that's something the posts consider:
> In addition, you need to also consider that future humans would lead vastly better lives than today’s humans due to the enormous amount of technological progress that humanity would have reached by then.
Against neutrality about creating happy lives
> ABSTRACT. With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe.
> Caring about the future of sentience is sometimes taken to imply reducing the risk of human extinction as a moral priority. However, this implication is not obvious so long as one is uncertain whether a future with humanity would be better or worse than one without it.
(Emphasis added.) Since all four of your links directly engage with the points OP raises, I don't think they're a good example of rationalists ignoring such points.
"Or perhaps even: that preventing humans from being born is as bad as killing living humans."
I'm not sure if this is what you were looking for, but here are some thoughts on the "all else equal" version of the above statement. Suppose that Alice is the only person in the universe. Suppose that Alice would, conditional on you not intervening, live a really great life of 100 years. Now on the 50th birthday of Alice, you (a god-being) have the option to painlessly end Alice's life, and in place of her to create a totally new person, let's call this person Bob, who comes into existence as a 50-year old with a full set of equally happy (but totally different) memories, and who (you know) has an equally great life ahead of them as Alice would have if you choose not to intervene. (All this assumes that interpersonal comparisons of what a "great" life is make sense. I personally highly doubt one can do anything interesting in ethics without such a notion; this is just to let people know about a possible point of rejecting this argument.)
Do you think it is bad to intervene in this way? (My position is that intervening is morally neutral.) If you think it is bad to intervene, then consider intervening twice in short succession, once painlessly replacing Alice with Bob, and then painlessly replacing Bob with Alice again. Would this be even worse? Since this double-swapping process gives an essentially identical (block) universe to just doing nothing, I have a hard time seeing how anything significantly bad could have happened.
Or consider a situation in which this universe had laws of nature such that Alice was to "naturally" turn into Bob on her 50th birthday without any intervention by you. Would you then be justified in immediately swapping Alice and Bob again to prevent Alice from being "killed"?
(Of course, the usual conditions of killing someone vs creating a new person are very much non-equivalent in practice in the various ways in which the above situation was constructed to be equivalent. Approximately no one thinks that never having a baby is as bad as having a baby and then killing them.)
To me, it's about maximizing utility.
Would you want to be killed today? That's how much you value life over non-existence.
How would you react if a loved one were to be killed today? Same as above: that's how much you value life over non-existence.
Almost everybody agrees that life has value, considerable value, over non-existence. Hence, under some commonly agreed (if arbitrary) utility function, giving life to somebody, giving existence to somebody, probably outweighs all the other good deeds you could do in a lifetime, just as murder would probably outweigh all the good deeds you did over your lifetime.
The comfort of one's life is definitely important, but I'd bet the majority of depressive people still don't want to die. There is a large margin before life becomes so bad that you'd want to die, and even then, you'd still have to weigh the (usually) large positive part of your life during which you still wanted to live.
Hence,
> you should also actually create the utility for all these new lives
might not be a problem if life itself, just being conscious, has an almost infinite weight under most living conditions. In our arbitrary utility function, being an African kid rummaging through a dump might have a weight of 1,000,000, while being a Finnish kid born into a loving and wealthy family might have a weight of 1,100,000 at the very best (it could well be lower, depending on opinions and the kids' trajectories).
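A quick arithmetic sketch using the illustrative weights above (the figures are this answer's own examples, not measurements of anything):

```python
# With the weights above, the value of simply existing dwarfs the difference
# that living conditions make.
worst_case_life = 1_000_000   # the kid rummaging through a dump
best_case_life = 1_100_000    # the kid born into a loving, wealthy family

gain_from_improving_a_life = best_case_life - worst_case_life  # 100,000
gain_from_creating_a_life = worst_case_life                    # at least 1,000,000

print(gain_from_creating_a_life / gain_from_improving_a_life)  # 10.0
```

Under that weighting, creating one additional life outweighs upgrading ten existing lives from the worst to the best conditions, which is the sense in which living conditions "might not be a problem".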
> it is possible that maximizing animal life, or perhaps alien or artificial life, would create more utility, as these lives might be optimized with way less effort
Whereas everybody agrees on the value of human life, not everybody will agree about the value of animal life (raise your hand if you have eaten chicken, beef, or fish in the past few weeks). Artificial life, certainly, unless the solution to the hard problem of consciousness rules out consciousness for some types of artificial life.
But the way you stated the idea might not describe how people actually feel about it: instead of "human life should be maximized", I lean more toward "human life should not be minimized" or "it's a good thing to increase human life".
More humans means more knowledge: there will be more people doing basic research and adding to humanity's collective knowledge, which will improve collective wellbeing. There are also efficiencies of scale, as someone else mentioned; in the book Scale, the author notes that productivity in cities scales superlinearly with population, with an exponent of about 1.15. There is an upper limit, but the current population growth rate is well below its historical peak, and very likely a much higher growth rate than today's could be sustained.
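For intuition, here is a minimal sketch of what an exponent of about 1.15 implies, assuming a simple power law (a toy illustration of my own, not the book's actual model):

```python
# If total city output grows roughly as population ** 1.15, then doubling the
# population more than doubles output. The constant and form are toy assumptions.
def total_output(population: float, exponent: float = 1.15, constant: float = 1.0) -> float:
    return constant * population ** exponent

base = total_output(1_000_000)
doubled = total_output(2_000_000)
print(doubled / base)  # ~2.22x total output, i.e. roughly 11% more output per person
```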
If ever greater numbers of possible human-level minds are created, their creation becomes computationally easier due to economies of scale, shared elements, and opportunities for data compression.
3 comments
comment by Vladimir_Nesov · 2022-01-07T14:45:43.388Z · LW(p) · GW(p)
Expected utility maximization is only applicable when utility is known. When it's not, various anti-goodharting considerations become more important, maintaining the ability to further develop our understanding of utility/values without leaning too much on any current guesses of what that is going to be. Keeping humans in control of our future is useful for that, but instrumentally convergent actions such as grabbing the matter in the future lightcone (without destroying potentially morally relevant information such as aliens) and moving decision making to a better substrate are also helpful for whatever our values eventually settle as. The process should be corrigible, and should allow replacing humans-in-control with something better as understanding of what that is improves (without getting locked into that either). The AI risk is about failing to set up this process.
comment by Donald Hobson (donald-hobson) · 2022-01-07T17:46:29.774Z · LW(p) · GW(p)
I think there is a confusion about what is meant by utilitarianism and utility.
Consider the moral principle that moral value is local to the individual, in the sense that there is some function F: Individual minds -> Real numbers such that the total utility of the universe is the sum of F over the minds that exist in it. Alice having an enjoyable life is good, and the amount of goodness doesn't depend on Bob. This is a real restriction on the space of utility functions. It says that you should be indifferent between (a coin toss between both Alice and Bob existing and neither Alice nor Bob existing) and (a coin toss between Alice existing and Bob existing), at least on the condition that Alice's and Bob's quality of life is the same if they do exist in either scenario, and that no one else is affected by Alice's or Bob's existence.
Under this principle, a utopia of a million times the size is a million times as good.
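A minimal sketch of the indifference claim under such an additive principle (the F values are made-up placeholders; only the additive aggregation comes from the comment above):

```python
# Additive principle: total utility of a world = sum of F over the minds in it.
F = {"Alice": 10.0, "Bob": 7.0}  # made-up values

def world_utility(minds):
    return sum(F[m] for m in minds)

# Lottery 1: coin toss between "both Alice and Bob exist" and "neither exists".
lottery1 = 0.5 * world_utility(["Alice", "Bob"]) + 0.5 * world_utility([])
# Lottery 2: coin toss between "only Alice exists" and "only Bob exists".
lottery2 = 0.5 * world_utility(["Alice"]) + 0.5 * world_utility(["Bob"])

print(lottery1, lottery2)  # both 8.5: an additive F is indifferent between the lotteries
```

A non-additive aggregation (say, one that rewards which minds exist together) could break this indifference, which is why the principle is a real restriction.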
I agree that wanting to create new lives regardless of living conditions is wrong. There is a general assumption that the lives will be worth living. In friendly superintelligence singleton scenarios, this becomes massively overdetermined.
Utility is not some ethereal substance that exists in humans and animals, such that it would be conceivable for animals to contain more of it.
It is possible there is some animal or artificial mind such that if we truly understood the neurology, we would prefer to fill the universe with them.
Often "humans" is short for "beings similar to current humans except for this wish list of improvements" (i.e. posthumans). There is some disagreement over how radical these upgrades should be.
comment by JBlack · 2022-01-09T01:04:24.312Z · LW(p) · GW(p)
I only partly value maximizing human life, but I'll comment anyway.
Where the harm done seems comparatively low, it makes sense to increase the capacity for human lives. Whether that capacity actually goes into increasing the population or improving the lives of those who already exist is a decision that others can make. Interfering with individual decisions about whether or not new humans should be born seems much more fraught with the likelihood of excess harm. How the created capacity is divided up seems more of a fiddly social and political problem than a wide-view one within the scope of this question.
The main problem is that on this planet there is a partial trade-off between the capacity for humans to live and the capacity for other species to live. I unapologetically favour sapient species there. Not to the exclusion of all else, and particularly not to the point of endangerment or extinction, but I definitely value a population of a million kangaroos and two million humans (or friendly AGIs or aliens) more than ten million kangaroos and one million humans. There is no exact ratio here, and I could hypothetically support some people who are better than human (and not inimical to humans) having greater population capacity, though I would hope that humans would be able to improve over time, just as I hope they do in the real world.
In the long term, I do think we should spread out from our planet, and be "grabby" in that weak sense. The cost in terms of harm to other beings seems very much lower than on our planet, since as far as we can tell, the universe is otherwise very empty of life.
If we ever encounter other sapient species, I would hope that we could coexist instead of anything that results in the extinction or subjugation of either. If that's not to be, then it may help to already have the resources of a few galactic superclusters for the inevitable conflict, but I don't see that as a primary reason to value spreading out.