Transhumanism as Simplified Humanism
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2018-12-05T20:12:13.114Z · LW · GW · 34 comments
This essay was originally posted in 2007.
Frank Sulloway once said: “Ninety-nine per cent of what Darwinian theory says about human behavior is so obviously true that we don’t give Darwin credit for it. Ironically, psychoanalysis has it over Darwinism precisely because its predictions are so outlandish and its explanations are so counterintuitive that we think, Is that really true? How radical! Freud’s ideas are so intriguing that people are willing to pay for them, while one of the great disadvantages of Darwinism is that we feel we know it already, because, in a sense, we do.”
Suppose you find an unconscious six-year-old girl lying on the train tracks of an active railroad. What, morally speaking, ought you to do in this situation? Would it be better to leave her there to get run over, or to try to save her? How about if a 45-year-old man has a debilitating but nonfatal illness that will severely reduce his quality of life – is it better to cure him, or not cure him?
Oh, and by the way: This is not a trick question.
I answer that I would save them if I had the power to do so – both the six-year-old on the train tracks, and the sick 45-year-old. The obvious answer isn’t always the best choice, but sometimes it is.
I won’t be lauded as a brilliant ethicist for my judgments in these two ethical dilemmas. My answers are not surprising enough that people would pay me for them. If you go around proclaiming “What does two plus two equal? Four!” you will not gain a reputation as a deep thinker. But it is still the correct answer.
If a young child falls on the train tracks, it is good to save them, and if a 45-year-old suffers from a debilitating disease, it is good to cure them. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle which says “Life is good, death is bad; health is good, sickness is bad.” If so – and here we enter into controversial territory – we can follow this general principle to a surprising new conclusion: If a 95-year-old is threatened by death from old age, it would be good to drag them from those train tracks, if possible. And if a 120-year-old is starting to feel slightly sickly, it would be good to restore them to full vigor, if possible. With current technology it is not possible. But if the technology became available in some future year – given sufficiently advanced medical nanotechnology, or such other contrivances as future minds may devise – would you judge it a good thing, to save that life, and stay that debility?
The important thing to remember, which I think all too many people forget, is that it is not a trick question.
Transhumanism is simpler – requires fewer bits to specify – because it has no special cases. If you believe professional bioethicists (people who get paid to explain ethical judgments) then the rule “Life is good, death is bad; health is good, sickness is bad” holds only until some critical age, and then flips polarity. Why should it flip? Why not just keep on with life-is-good? It would seem that it is good to save a six-year-old girl, but bad to extend the life and health of a 150-year-old. Then at what exact age does the term in the utility function go from positive to negative? Why?
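(A minimal sketch of the "fewer bits, no special cases" point, in code. The cutoff age of 120 and the +/-1 values below are made-up illustrations, not anything from the essay.)

```python
# Illustrative sketch only: the simple rule needs no extra parameters,
# while the "flips polarity" rule needs a critical age to specify and justify.
# The cutoff (120) and the +/-1 values are made up for illustration.

def simple_rule(age: int) -> float:
    """Life is good, death is bad: saving a life has positive value at any age."""
    return +1.0

def special_case_rule(age: int, critical_age: int = 120) -> float:
    """The rule being criticized: the value of saving a life flips sign past a cutoff."""
    return +1.0 if age < critical_age else -1.0

for age in (6, 45, 95, 150):
    print(age, simple_rule(age), special_case_rule(age))
```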
As far as a transhumanist is concerned, if you see someone in danger of dying, you should save them; if you can improve someone’s health, you should. There, you’re done. No special cases. You don’t have to ask anyone’s age.
You also don’t ask whether the remedy will involve only “primitive” technologies (like a stretcher to lift the six-year-old off the railroad tracks); or technologies invented less than a hundred years ago (like penicillin) which nonetheless seem ordinary because they were around when you were a kid; or technologies that seem scary and sexy and futuristic (like gene therapy) because they were invented after you turned 18; or technologies that seem absurd and implausible and sacrilegious (like nanotech) because they haven’t been invented yet. Your ethical dilemma report form doesn’t have a line where you write down the invention year of the technology. Can you save lives? Yes? Okay, go ahead. There, you’re done.
Suppose a boy of 9 years, who has tested at IQ 120 on the Wechsler-Bellevue, is threatened by a lead-heavy environment or a brain disease which will, if unchecked, gradually reduce his IQ to 110. I reply that it is a good thing to save him from this threat. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle saying that intelligence is precious. Now the boy’s sister, as it happens, currently has an IQ of 110. If the technology were available to gradually raise her IQ to 120, without negative side effects, would you judge it good to do so?
Well, of course. Why not? It’s not a trick question. Either it’s better to have an IQ of 110 than 120, in which case we should strive to decrease IQs of 120 to 110. Or it’s better to have an IQ of 120 than 110, in which case we should raise the sister’s IQ if possible. As far as I can see, the obvious answer is the correct one.
But – you ask – where does it end? It may seem well and good to talk about extending life and health out to 150 years – but what about 200 years, or 300 years, or 500 years, or more? What about when – in the course of properly integrating all these new life experiences and expanding one’s mind accordingly over time – the equivalent of IQ must go to 140, or 180, or beyond human ranges?
Where does it end? It doesn’t. Why should it? Life is good, health is good, beauty and happiness and fun and laughter and challenge and learning are good. This does not change for arbitrarily large amounts of life and beauty. If there were an upper bound, it would be a special case, and that would be inelegant.
Ultimate physical limits may or may not permit a lifespan of at least length X for some X – just as the medical technology of a particular century may or may not permit it. But physical limitations are questions of simple fact, to be settled strictly by experiment. Transhumanism, as a moral philosophy, deals only with the question of whether a healthy lifespan of length X is desirable if it is physically possible. Transhumanism answers yes for all X. Because, you see, it’s not a trick question.
So that is “transhumanism” – loving life without special exceptions and without upper bound.
Can transhumanism really be that simple? Doesn’t that make the philosophy trivial, if it has no extra ingredients, just common sense? Yes, in the same way that the scientific method is nothing but common sense.
Then why have a complicated special name like “transhumanism”? For the same reason that “scientific method” or “secular humanism” have complicated special names. If you take common sense and rigorously apply it, through multiple inferential steps, to areas outside everyday experience, successfully avoiding many possible distractions and tempting mistakes along the way, then it often ends up as a minority position and people give it a special name.
But a moral philosophy should not have special ingredients. The purpose of a moral philosophy is not to look delightfully strange and counterintuitive, or to provide employment to bioethicists. The purpose is to guide our choices toward life, health, beauty, happiness, fun, laughter, challenge, and learning. If the judgments are simple, that is no black mark against them – morality doesn’t always have to be complicated.
There is nothing in transhumanism but the same common sense that underlies standard humanism, rigorously applied to cases outside our modern-day experience. A million-year lifespan? If it’s possible, why not? The prospect may seem very foreign and strange, relative to our current everyday experience. It may create a sensation of future shock. And yet – is life a bad thing?
Could the moral question really be just that simple?
Yes.
34 comments
comment by Laura B (Lara_Foster) · 2018-12-10T13:28:26.169Z · LW(p) · GW(p)
A few thoughts on why people dislike the idea of greatly extending human life:
1) The most obvious reason: people don't understand the difference between lifespan and healthspan. They see many old, enfeebled, miserable people in old folks' homes and conclude, 'My God, what has science wrought!' They are at present not wrong.
2) They don't believe it could work. People as they get older start recognizing and coming to terms with mortality. It suffuses everything about their lives, preparations, the way they talk. The second half of a modern human life is mostly shoring things up for the next generation. Death is horrible. It needs to be made ok one way or another. If you dangle transhumanism in front of them, but they don't believe it has any possibility of happening, then you are undoing years of mental preparation for the inevitable for no reason. People have mental protections against this kind of thing.
3) On some level people don't want their parents to live forever. Modern extended lifespans have already greatly lengthened the time parents exert influence over their children. Our childhoods essentially never end.
4) On some level people don't want to live. That might be hard for you to understand, but many people are very miserable, even if they are not explicitly suicidal. The idea of a complete life, when they can say their work is done, can be very appealing. The idea of it never ending can sound like hell.
Replies from: Vanilla_cabs, Lara_Foster
↑ comment by Vanilla_cabs · 2019-06-11T14:25:06.785Z · LW(p) · GW(p)
5) Status quo bias.
Most people will change their minds the moment the technology is available and cheap. Or rather, they will keep disliking the idea of 'immortality' while profusely consuming anti-aging products without ever noticing the contradiction, because in their minds these will belong to two different realms: grand theories vs. everyday life. Those will conjure different images (an ubermensch consumed by hubris vs. a sympathetic grandpa taking his pills to be able to keep playing with his grandkids). Eventually, they'll have to notice that life expectancy has risen well above what was traditionally accepted, but by then that will be the new status quo.
6) Concern about inequalities. The layman has always had the consolation that however rich and powerful someone is, and however evil they are, at least they die like everyone else eventually. But what will happen when some people can escape death indefinitely? It means that someone who has accumulated power all his life... can keep accumulating power. Patrimony will no longer be split among heirs. IMO, people would be right to be suspicious that such a game-changing advantage would end up in the hands of a small super-rich class.
7) Popular culture has always envisioned the quest for immortality as a Faustian bargain. This conditions people against seeing life lengthening as harmless.
↑ comment by Laura B (Lara_Foster) · 2018-12-16T11:54:21.456Z · LW(p) · GW(p)
I just read this article from the Atlantic (I wrote the comment first), but I think it eloquently highlights most of these points.
https://www.theatlantic.com/amp/article/379329/
comment by Qiaochu_Yuan · 2018-12-07T07:12:11.186Z · LW(p) · GW(p)
This is a bad argument for transhumanism; it proves way too much. I'm a little surprised that this needs to be said.
Consider: "having food is good. Having more and tastier food is better. This is common sense. Transfoodism is the philosophy that we should take this common sense seriously, and have as much food as possible, as tasty as we can make it, even if doing so involves strange new technology." But we tried that, and what happened was obesity, addiction, terrible things happening to our gut flora, etc. It is just blatantly false in general that having more of a good thing is better.
As for "common sense": in many human societies it was "common sense" to own slaves, to beat your children, again etc. Today it's "common sense" to circumcise male babies, to eat meat, to send people who commit petty crimes to jail, etc., to pick some examples of things that might be considered morally repugnant by future human societies. Common sense is mostly moral fashion, or if you prefer it's mostly the memes that were most virulent when you were growing up, and it's clearly unreliable as a guide to moral behavior in general.
Figuring out the right thing to do is hard, and it's hard for comprehensible reasons. Value is complex and fragile [LW · GW]; you were the one who told us that!
---
In the direction of what I actually believe: I think that there's a huge difference between preventing a bad thing from happening and making a good thing happen, e.g. I don't consider preventing an IQ drop equivalent to raising IQ. The boy has had an IQ of 120 his entire life and we want to preserve that, but the girl has had an IQ of 110 her entire life and we want to change that. Preserving and changing are different, and preserving vs. changing people in particular is morally complicated. Again, the argument Eliezer uses here is bad and proves too much:
Either it’s better to have an IQ of 110 than 120, in which case we should strive to decrease IQs of 120 to 110. Or it’s better to have an IQ of 120 than 110, in which case we should raise the sister’s IQ if possible. As far as I can see, the obvious answer is the correct one.
Consider: "either it's better to be male than female, in which case we should transition all women to men. Or it's better to be female than male, in which case we should transition all men to women."
---
What I can appreciate about this post is that it's an attempt to puncture bad arguments against transhumanism, and if it had been written more explicitly to do that as opposed to presenting an argument for transhumanism, I wouldn't have a problem with it.
Replies from: SaidAchmiz, RobbBB
↑ comment by Said Achmiz (SaidAchmiz) · 2018-12-07T07:31:38.251Z · LW(p) · GW(p)
Consider: “having food is good. Having more and tastier food is better. This is common sense. Transfoodism is the philosophy that we should take this common sense seriously, and have as much food as possible, as tasty as we can make it, even if doing so involves strange new technology.” But we tried that, and what happened was obesity, addiction, terrible things happening to our gut flora, etc. It is just blatantly false in general that having more of a good thing is better.
Conclusion does not follow from example.
You are making exactly the mistake which I described in detail [LW(p) · GW(p)] (and again [LW(p) · GW(p)] in the comments to this post). You’re conflating desirability with prudence.
It is desirable to have as much food as possible, as tasty as we can make it. It may, however, not be prudent, because the costs make it a net loss. But if we could solve the problems you list—if we could cure and prevent obesity and addiction, if we could reverse and prevent damage to our gut flora—then of course having lots of tasty food would be great! (Or would it? Would other problems crop up? Perhaps they might! And what we would want to do then, is to solve those problems—because having lots of tasty food is still desirable.)
So, in fact, your example shows nothing like what you say it shows. Your example is precisely a case where more of a good thing is better… though the costs, given current technology and scientific understanding, are too high to make it prudent to have as much of that good thing as we’d like.
As for “common sense”: in many human societies it was “common sense” to own slaves, to beat your children, again etc. Today it’s “common sense” to circumcise male babies, to eat meat, to send people who commit petty crimes to jail, etc., to pick some examples of things that might be considered morally repugnant by future human societies. Common sense is mostly moral fashion, or if you prefer it’s mostly the memes that were most virulent when you were growing up, and it’s clearly unreliable as a guide to moral behavior in general.
Now this does prove too much. Ok, so “common sense” can’t be trusted. Now what? Do we just discard everything it tells us? Reject all our moral intuitions?
Yes, by all means let’s examine our intuitions, let us interrogate the output of our common sense. This is good!
But sometimes, when we examine our intuitions and interrogate our common sense, we come up with the same answer that we got at first. We examine our intuitions, and find that actually, yeah, they’re exactly correct. We interrogate our common sense, and find that it passes muster.
And that’s fine. Answers don’t have to be complex, surprising, or unintuitive. Sometimes, the obvious answer is the right one.
That is Eliezer’s point.
↑ comment by Rob Bensinger (RobbBB) · 2018-12-11T19:21:24.509Z · LW(p) · GW(p)
Yeah, "Life is good" doesn't validly imply "Living forever is good". There can obviously be offsetting costs; I think it's good to point this out, so we don't confuse "there's a presumption of evidence for (transhumanist intervention blah)" with "there's an ironclad argument against any possible offsetting risks/costs turning up in the future".
Like Said, I took Eliezer to just be saying "there's no currently obvious reason to think that the optimal healthy lifespan for most people is <200 (or <1000, etc.)." My read is that 2007-Eliezer is trying to explain why bioconservatives need to point to some concrete cost at all (rather than taking it for granted that sci-fi-ish outcomes are weird and alien and therefore bad), and not trying to systematically respond to every particular scenario one might come up with where the utilities do flip at a certain age.
The goal is to provide an intuition pump: "Wanting people to live radically longer, be radically smarter, be radically happier, etc. is totally mundane and doesn't require any exotic assumptions or bizarre preferences." Pretty similar to another Eliezer intuition pump:
In addition to standard biases, I have personally observed what look like harmful modes of thinking specific to existential risks. The Spanish flu of 1918 killed 25-50 million people. World War II killed 60 million people. 10^7 is the order of the largest catastrophes in humanity’s written history. Substantially larger numbers, such as 500 million deaths, and especially qualitatively different scenarios such as the extinction of the entire human species, seem to trigger a different mode of thinking—enter into a “separate magisterium.” People who would never dream of hurting a child hear of an existential risk, and say, “Well, maybe the human species doesn’t really deserve to survive.”
There is a saying in heuristics and biases that people do not evaluate events, but descriptions of events—what is called non-extensional reasoning. The extension of humanity’s extinction includes the death of yourself, of your friends, of your family, of your loved ones, of your city, of your country, of your political fellows. Yet people who would take great offense at a proposal to wipe the country of Britain from the map, to kill every member of the Democratic Party in the U.S., to turn the city of Paris to glass—who would feel still greater horror on hearing the doctor say that their child had cancer— these people will discuss the extinction of humanity with perfect calm. “Extinction of humanity,” as words on paper, appears in fictional novels, or is discussed in philosophy books—it belongs to a different context than the Spanish flu. We evaluate descriptions of events, not extensions of events. The cliché phrase end of the world invokes the magisterium of myth and dream, of prophecy and apocalypse, of novels and movies. The challenge of existential risks to rationality is that, the catastrophes being so huge, people snap into a different mode of thinking.
People tend to think about the long-term future in Far Mode, which makes near-mode good things like "watching a really good movie" or "helping a sick child" feel less cognitively available/relevant/salient. The point of Eliezer's "transhumanist proof by induction" isn't to establish that there can never be offsetting costs (or diminishing returns, etc.) to having more of a good thing. It's just to remind us that small concrete near-mode good things don't stop being good when we talk about far-mode topics. (Indeed, they're often the dominant consideration, because they can end up adding up to so much value when we talk about large-scale things.)
Replies from: Qiaochu_Yuan
↑ comment by Qiaochu_Yuan · 2018-12-12T03:13:29.687Z · LW(p) · GW(p)
I like this reading and don't have much of an objection to it.
Replies from: RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2018-12-12T03:23:49.138Z · LW(p) · GW(p)
K, cool. :)
comment by [deleted] · 2019-11-30T19:13:04.582Z · LW(p) · GW(p)
This post has been my go-to definition of Transhumanism ever since I first read it.
It's hard to put into words why I think it has so much merit. To me it just powerfully articulates something that I hold as self-evident, that I wish others would recognize as self-evident too.
comment by Pattern · 2018-12-06T04:00:23.116Z · LW(p) · GW(p)
Are the Sequences being re-run?
Replies from: habryka4
↑ comment by habryka (habryka4) · 2018-12-06T04:23:50.466Z · LW(p) · GW(p)
Nah, just this post never got posted to LW, and we were sad that it wasn't (it's only been available on Yudkowsky.net).
comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2019-08-16T13:15:53.335Z · LW(p) · GW(p)
People also tend to believe that some changes, especially to their intelligence, somehow "destroy their integrity". So they may actually believe that, if you raise that girl's IQ, some human being will live on, but it will not be HER in some (admittedly incomprehensible to me) sense. So their answer to "either it is better to have IQ X than IQ Y or not" is "No, it is better to have the IQ (or relevant measure which is more age-constant) you start with - so it IS good to heal the boy and it IS bad to enhance the girl".
(Yes, I do play advocatus diaboli and I do not endorse such a position myself - pace "Present your own perspective", as it is my perspective on "what a clever skeptic would say" not my perspective on "what should be said".)
comment by romberg · 2021-10-12T09:42:25.307Z · LW(p) · GW(p)
"If the technology were available to gradually raise her IQ to 120, without negative side effects, would you judge it good to do so?"
To me it seems possible to give simple answers if "without negative side effects" actually held.
BUT in reality this is NEVER the case! There will be a different distribution of wealth, etc.; the lives of quite a few people will change (at least a bit).
Thus there are always negative side effects! Thus the question to be answered is: how do the positive effects (weighted by the number of people affected, not by the power of those people) compare with the negative side effects (weighted the same way)?
If these negative side effects are assumed to be zero, then of course you can construct questions whose obvious answers are not good. But that is just making inadequate assumptions!
comment by Alephywr · 2018-12-09T02:57:18.293Z · LW(p) · GW(p)
In the long run nothing looks human that follows this logic. Preserving humanity might not be utilitarian optimal, but there is something to be said for aesthetics.
Replies from: SaidAchmiz, None
↑ comment by Said Achmiz (SaidAchmiz) · 2018-12-10T21:25:51.700Z · LW(p) · GW(p)
Indeed; hence the term “transhumanism” (and, relatedly, “posthumanism”).
Change is terrifying. This is to be expected, not least because most change is bad; in fact, change is inherently bad. Any change must be an improvement, must justify itself, to be worthwhile. And any such justification can only be uncertain. When we look out along a line of successive changes, what we see on the horizon terrifies us—all the more so because we can only see it dimly, as a vague shape, whose outlines are provided more by our imagination than our vision.
But the alternative is an endless continuation of the cycle, in its current form, with no improvement or hope for escape, until the heat death of the universe. That is far more horrifying. If that were all humanity had to look forward to, then we would have no need of Hell.
Replies from: Alephywr
↑ comment by Alephywr · 2018-12-11T02:39:17.377Z · LW(p) · GW(p)
I am in favor of change. I am not in favor of existence without boundaries. I don't have a moral justification for this, just an aesthetic one: a painting that contained arbitrary combinations of arbitrarily many colors might be technically sophisticated or interesting, but is unlikely to have any of the attributes that make a painting good imo. Purely subjective. I neither fault nor seek to limit those who think differently.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2018-12-11T03:15:15.191Z · LW(p) · GW(p)
I am not in favor of existence without boundaries. I don’t have a moral justification for this, just an aesthetic one …
I share your aesthetic preference (and I consider such preferences to be no less valid, and no less important, than any “moral” ones). But no one here is advocating anything like that. Certainly Eliezer isn’t, and nor am I.
↑ comment by [deleted] · 2019-12-09T07:06:59.009Z · LW(p) · GW(p)
Aesthetic preferences are a huge part of our personalities, so who would agree to any enhancement that would destroy them? And as long as they’re present, a transhuman will be even more effective at making everything look, sound, smell etc. beautiful — in some form or another (maybe in a simulation if it’s detailed enough and if we decide there’s no difference), because a transhuman will be more effective at everything.
If you’re talking about the human body specifically, I don’t think a believable LMD (with artificial skin and everything) is impossible. Or maybe we’ll find a way to build organic bodies with some kind of receiver instead of a brain, to be controlled remotely by an uploaded mind. Or we’ll settle for a simulation, who knows. Smarter versions of us will find a solution.
comment by Dagon · 2018-12-06T16:28:16.934Z · LW(p) · GW(p)
Any moral framework that doesn't acknowledge tradeoffs is broken. The interesting questions aren't "should we save this person if it doesn't cost anything?" - truly, that's trivial as you say. The interesting ones, which authors and ethicists try to address even if they're not all that clear, are: "who should suffer and by how much in order to let this person live better/longer?"
How much environmental damage and sweatshop labor goes into "extreme" medical interventions? How much systemic economic oppression is required to have enough low-paid nurses to work the night shift? How many 6-year-olds could have better nutrition and a significantly better life for the cost of extending a 95-year-old's life to 98 years?
I'm delighted that humanity is getting more efficient and able to support more people more fully than was even imaginable in the recent past. It's good to be rich. But there's still a limit, and there will always be a frontier where tradeoffs have to be made.
I currently hold the opinion (weakly, but it's been growing on me) that resource allocation is the only thing that has any moral weight.
Replies from: SaidAchmiz, avturchin, ErickBall
↑ comment by Said Achmiz (SaidAchmiz) · 2018-12-06T18:00:04.589Z · LW(p) · GW(p)
Any moral framework that doesn’t acknowledge tradeoffs is broken. The interesting questions aren’t “should we save this person if it doesn’t cost anything?”—truly, that’s trivial as you say. The interesting ones, which authors and ethicists try to address even if they’re not all that clear, are: “who should suffer and by how much in order to let this person live better/longer?”
The problem comes in when people start inventing imaginary tradeoffs, purely out of a sense that there must be tradeoffs—and, critically, then use the existence of those (alleged) tradeoffs as a reason to simply reject the proposal.
And then you have the flaw in reasoning that I described in this old comment [LW(p) · GW(p)]:
I think that this post conflates two issues, and is an example of a flaw of reasoning that goes like this:
Alice: It would be good if we could change [thing X].
Bob: Ah, but if we changed X, then problems A, B, and C would ensue! Therefore, it would not be good if we could change X.
Bob is confusing the desirability of the change with the prudence of the change. Alice isn’t necessarily saying that we should make the change she’s proposing. She’s saying it would be good if we could do so. But Bob immediately jumps to examining what problems would ensue if we changed X, decides that changing X would be imprudent, and concludes from this that it would also be undesirable.
[…]
I think that Bob’s mistake is rooted in the fact that he is treating Alice’s proposal as, essentially, a wish made to a genie. “Oh great genie,” says Alice, “please make it so that death is no more!” Bob, horrified, stops Alice before she can finish speaking, and shouts “No! Think of all the ways the words of your wish can be twisted! Think of the unintended consequences! You haven’t considered the implications! No, Alice, you must not make such grand wishes of a genie, for they will inevitably go awry.”
Replies from: Dagon
↑ comment by Dagon · 2018-12-06T18:12:25.002Z · LW(p) · GW(p)
I fully agree with both of your points - people can mis-estimate the tradeoffs in either direction (assuming there are none, as EY does in this post, and assuming they're much larger than they are, as you say). And people confuse desirability of an outcome with desirability of the overall effect of a policy/behavior/allocation change.
Neither of these change my main point that the hard part is figuring out and acknowledging the actual tradeoffs and paths from the current state to a preferred possible state, not just identifying imaginary-but-impossible worlds we'd prefer.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2018-12-06T18:30:39.410Z · LW(p) · GW(p)
I do not read Eliezer as claiming that there are no tradeoffs. Rather, his aim is to establish the desirability of indefinite life extension in the first place! Once we’re all agreed on that, then we can talk tradeoffs.
And, hey, maybe we look at the tradeoffs and decide that nah, we’re not going to do this. Yet. For now. With sadness and regret, we shelve the idea, being always ready to come back to it, as soon as our technology advances, as soon as we have a surplus of resources, as soon as anything else changes…
Whereas if we just shake our heads and dismissively say “don’t you know that tradeoffs exist”, and end the discussion there, then we’re never going to live for a million years.
But on the other hand, maybe we look at the tradeoffs and decide that, actually, life extension is worth doing, right now! How will we know, unless we actually try and figure it out? And why would we do that, unless we first agree that it’s desirable? That is what Eliezer is trying to convince readers of, in this essay.
For example, you say:
How much environmental damage and sweatshop labor goes into “extreme” medical interventions? How much systemic economic oppression is required to have enough low-paid nurses to work the night shift? How many 6-year-olds could have better nutrition and a significantly better life for the cost of extending a 95-year-old’s life to 98 years?
Well? How many? These are fine questions. What are the answers?
Quoting myself [LW(p) · GW(p)] once again:
The view here on Lesswrong, on the other hand, treats Alice’s proposal as an engineering challenge. … Once you properly distinguish the concepts of desirability and prudence, you can treat problems with your proposal as obstacles to overcome, not reasons not to do it.
(One important effect of actually trying to answer specific questions about tradeoffs, like the ones you list, is that once you know exactly what the tradeoffs are, you can also figure out what needs to change in order to shift the tradeoffs in the right direction and by the right amount, to alter the decision. And then you can start doing what needs to be done, to change those things!)
Replies from: Dagon
↑ comment by Dagon · 2018-12-06T20:39:28.771Z · LW(p) · GW(p)
I don't claim to actually know the answers, or even how I'd figure out the answers. I merely want to point out that it's not simple, and saying "sometimes it's easy" without acknowledging that "sometimes it's hard" and "knowing the difference is hard" is misleading and unhelpful.
↑ comment by avturchin · 2018-12-06T17:49:09.092Z · LW(p) · GW(p)
Interestingly, fighting aging wins in this logic against many other causes. For example, if we assume that giving Aubrey de Grey 1 trillion dollars is enough to solve aging (ok, I know, but), then this implies that 10 billion people will be saved from death, and the cost per life saved is around 100 USD. There are some obvious caveats, but life extension research still seems to be one of the most cost-effective interventions.
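(A quick sketch of the arithmetic implied above; the $1 trillion and 10 billion figures are the commenter's hypothetical assumptions, not established numbers.)

```python
# Hypothetical inputs taken from the comment above, not real estimates.
total_cost_usd = 1e12      # $1 trillion assumed sufficient to "solve aging"
lives_saved = 10e9         # roughly 10 billion people assumed spared death from aging

cost_per_life = total_cost_usd / lives_saved
print(f"${cost_per_life:,.0f} per life saved")  # -> $100 per life saved
```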
Replies from: Dagon
↑ comment by Dagon · 2018-12-06T18:27:20.685Z · LW(p) · GW(p)
I don't have a strong opinion on whether fewer longer-lived individuals or more shorter-lived individuals are preferable, in the case where total desirable-experience-minutes before the heat death of the universe are conserved. I honestly can't tell you whether spending the trillion on convincing the expensive elderly to die more gracefully would be a better improvement. To the extent that it's not my $trillion in the first place, I fall back to "simple is easier to justify, so prefer the obvious" and voice my support for life extension.
However, I don't actually spend the bulk of my money on life extension. I spend it primarily on personal and family/close-friend comfort and happiness, and secondarily on more immediate palliative (helping existing persons in the short/medium term) and threat-reduction (environmental and long-term species capability) charities. Basically, r-strategy: we have a lot of humans, so it's OK if many suffer and die, as long as the quantity and median quality are improving.
Replies from: avturchin
↑ comment by avturchin · 2018-12-06T19:10:44.575Z · LW(p) · GW(p)
The interesting objection is that any short-lived person will be unhappy because she will fear death; thus the minutes are inferior.
Anyway, I think that using "happy minutes" as a measure of good social policy suffers from Goodhart-like problems in extreme cases like life extension.
Replies from: Dagon
↑ comment by Dagon · 2018-12-06T20:26:20.028Z · LW(p) · GW(p)
The interesting objection is that any short-lived person will be unhappy because she will fear death; thus the minutes are inferior.
Ok, so it requires MORE minutes to be net equal. This is exactly the repugnant conclusion, and I don't know how to resolve whether my intuition about desirability is wrong, or whether every believable aggregation of value is wrong. I lean toward the former, and that leans me toward accepting that the repugnancy is an error, and the conclusion is actually better.
using "happy minutes" as a measure of good social policy suffers from Goodhart-like problems in extreme cases like life extension.
Perhaps. To the extent that "happy experience minutes" is a proxy to what we really want, it will diverge at scale. If it _really is_ what we want, Goodhart doesn't apply. Figuring out the points it starts to diverge is one good way of understanding the thing we're actually trying to optimize in the universe.
Replies from: TheWakalix
↑ comment by TheWakalix · 2018-12-06T21:48:21.943Z · LW(p) · GW(p)
How is that the repugnant conclusion? It seems like the exact opposite of the repugnant conclusion to me. (That is, it is a strong argument against creating a very large number of people with very little [resources/utility/etc.].)
Replies from: Dagon
↑ comment by Dagon · 2018-12-06T21:58:13.137Z · LW(p) · GW(p)
Maybe I misunderstood. Your statement that shorter-lived individuals get less quality from their minutes of experience implied to me that there would have to be more individuals to reach equal total happiness. And if this extends, it leads to maximizing the number of individuals with minimally positive experience value.
My best guess is there's a declining marginal value to spending resources on happiness or quantity at either extreme (that is, making a small number of very happy entities slightly happier rather than slightly more numerous will be suboptimal, _AND_ making a large number of barely-happy entities slightly more numerous as opposed to slightly happier will be suboptimal). Finding the crossover point will be the hard problem to solve.
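(A toy numerical sketch of that crossover, added for illustration; the budget, per-person upkeep cost, and logarithmic happiness curve are my own assumptions, not anything from the thread.)

```python
import math

# Toy model: split a fixed resource budget among N people, where each person
# needs a fixed upkeep just to exist and happiness grows logarithmically in
# the surplus. All numbers and the functional form are illustrative assumptions.

def total_happiness(n_people: int, budget: float = 100.0, upkeep: float = 1.0) -> float:
    surplus = budget / n_people - upkeep
    if surplus <= 0:
        return float("-inf")  # too many people: nobody clears subsistence
    return n_people * math.log(1 + surplus)

best = max(range(1, 100), key=total_happiness)
print(best, round(total_happiness(best), 1))  # interior optimum: neither 1 person nor 99
```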
Replies from: TheWakalix
↑ comment by TheWakalix · 2018-12-07T01:55:02.246Z · LW(p) · GW(p)
First, the grandfather was my first comment in this tree. Check the usernames.
Second, the repugnant conclusion can indeed be applied here, but the idea itself isn’t the repugnant conclusion. In fact, if the number of people-minutes is limited, and the value of a person-minute is proportional to the length of the life that contains that minute, shouldn’t that lead to the Antirepugnant Conclusion (there should only be one person)?
...wait, I just rederived utility monsters, didn’t I.
Replies from: Dagon
↑ comment by Dagon · 2018-12-07T15:14:43.844Z · LW(p) · GW(p)
...wait, I just rederived utility monsters, didn’t I.
Looks like. Which implies the optimum is somewhere between one immortal super-entity using all the resources of the universe and 10^55 distinct 3-gram entities who barely appreciate their existence before being replaced by another.
Whether it's beneficial to increase or decrease from the current size/duration of entities, I don't know. My intuition is that I would prefer to live longer and be smarter, even at the cost of others, especially others not coming into existence. I have the opposite reaction when asked if I'd give my organs today (killing me) to extend others' lives by more in aggregate than mine is cut short.
Calling it trivial or saying "sometimes the obvious answer is right" is simply a mistake. The obvious answer is highly suspect.
↑ comment by ErickBall · 2018-12-06T18:03:04.201Z · LW(p) · GW(p)
Could you explain what you mean by resource allocation? Certainly there's a lot of political and public-opinion resistance to any new technology that would help the rich and not the poor. I think that stems from the thought that it will provide even more incentive for the rich to increase inequality (a view to which I'm sympathetic), but I don't see how it would imply that only the distribution of wealth is important...
Replies from: habryka4, Dagon
↑ comment by habryka (habryka4) · 2018-12-06T19:22:42.201Z · LW(p) · GW(p)
(Sorry, we are still working on calibrating the spam system, and this got somehow marked as spam. I fixed it, and we have a larger fix coming later today that should overall stop the spam problems).
↑ comment by Dagon · 2018-12-06T20:32:32.285Z · LW(p) · GW(p)
I do not mean "wealth" when I talk about resource allocation. I mean actual real stuff: how much heat is generated on one's behalf, how much O2 is transformed to CO2 per unit time for whom, who benefits from a given square-meter-second of sunlight energy, etc. As importantly, how much attention and motivated work one gets from other people, how much of their consumed resources benefit whom, etc.?
Money is a very noisy measure of this, and is deeply misleading when applied to any long-term goals.