Two arguments against longtermist thought experiments
post by momom2 (amaury-lorin) · 2024-11-02T10:22:11.311Z · LW · GW · 5 comments
Epistemic status: shower thoughts.
I am currently going through the EA Introductory Course, where we discussed two arguments against longtermism which I have not seen elsewhere.
So goes a thought experiment: imagine you have toxic waste at hand, which you can process right now at the cost of 100 lives, or bury so that it has no effect right away but poisons the land, at the cost of 1000 lives in 100 years. Should you process it now? Or should you make the opposite tradeoff and bury it?
The basic intuition of longtermism is that clearly, the 1000 lives matter more than the 100, regardless of their position in time.
From Introduction to longtermism [EA · GW]:
Imagine burying broken glass in a forest. In one possible future, a child steps on the glass in 5 years' time, and hurts herself. In a different possible future, a child steps on the glass in 500 years' time, and hurts herself just as much. Longtermism begins by appreciating that both possibilities seem equally bad: why stop caring about the effects of our actions just because they take place a long time from now?
Faced with this tradeoff, I'd save the 100 immediate lives. More than that, longtermism as assigning-significant-value-to-far-future-things has almost nothing to do with this thought experiment.
The first reason is a matter of practical mindset which does not undermine longtermist principles, but which I feel is overlooked.
The second reason is more central to deprioritizing directly far-reaching actions in general.
My criticisms basically don't matter for practical caring-about-far-future-people, but I still find it annoying that the thought experiments used to build longtermist intuitions are so unrelated to the central reasons why I care about influencing the far future.
Choose actions, not outcomes
The first reason is that in practice we do not face a direct choice between outcomes (100 vs 1000 lives) but a choice between actions (processing vs burying the waste), so by abstracting away the causal pathways through which your action has an impact, the hypothetical is fraught with assumptions about the irrelevance of indirect consequences.
- For example, the 100 people we save now will have a lot of impact over the next 100 years, which will plausibly compound to more than 10 future lives saved per living person today.
- This could make sense if the population over the next 100 years is more than 10 times today's population, and you assign an equal share of responsibility for it to everyone alive today (assuming you save 100 random lives).
- This is very speculative, but the point is that it's not obvious that 1000 lives in 100 years is more total impact than 100 lives now; a rough sketch of this arithmetic follows.
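Here is a minimal sketch of that back-of-the-envelope reasoning. The ">10x" cumulative-population factor and the equal-responsibility assumption are the ones from the bullets above; the remaining numbers are purely illustrative.

```python
# Illustrative only: the 10x factor and the "equal share of responsibility"
# assumption come from the bullets above; 8e9 is just today's rough population.
current_population = 8e9
population_over_next_century = 10 * current_population

# If responsibility for the people living over the next century is split
# equally among the people alive today:
future_lives_per_present_person = population_over_next_century / current_population  # 10

lives_saved_now = 100
indirect_future_impact = lives_saved_now * future_lives_per_present_person
print(indirect_future_impact)  # 1000.0 -- on par with the 1000 future deaths
```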
Someone could also say that our ability to de-poison the land will improve in the future, or find other ways to reject the hypothetical. One could argue the thought experiment demands we disregard such considerations: assume all things are equal except for the number of lives saved, in which case you can validly derive that there are no relevant parameters other than the number of lives saved... but it doesn't feel like such a strong result now, does it?
The strength of longtermism as a novel idea is its counterintuitiveness; it is the extent to which sharp arguments support unprecedented conclusions, because that is how much it will change our behavior.[1]
In practice, longtermism informs how we want to think about far-reaching actions such as creating seed banks or managing existential risk. Framing these actions as tradeoffs between current and future lives discards important information about the impact of saving a life.
Future lives are cheaper
More specifically, I think that (contrary to what is often stated) saving future lives is not a neglected problem, and that it is relatively intractable, because sometimes we should compare [current efforts to save future lives] not to [current efforts to save current lives] but to [future efforts to save future lives].
- The first comparison makes sense if you want to reallocate today's efforts between today's and tomorrow's causes. ("Should I buy malaria nets or build a seed bank?")
- The second makes sense if you want to reallocate tomorrow's causes between today's and tomorrow's efforts. ("Should I endanger future lives by burying toxic waste, effectively outsourcing the waste processing to the future?")
First, an assumption: if the world is broadly the same as today or worse in terms of population, technology, economy, etc., then something has gone extremely wrong, and preventing this is a priority regardless of longtermist considerations.[2]
So I'm now assuming the thought experiment is about a far future which is stupendously big compared to our present, and very probably much better.
So the people of the future will have an easier time replacing lost lives (our marginal effort is less impactful now than theirs will be then), and they will have more resources to devote to charity overall (so problems will be less neglected).[3]
It's not infinitely easier to save a life in the future than now, but it's probably an order of magnitude easier.
Longtermism says that future lives have as much value as present lives; I say that the relative price of future lives is much lower than that of current lives. The two are not incompatible, but in practice longtermism is usually presented to me as a claim about cause prioritization, which is exactly where that price difference matters.
Conclusion
I like to think of thought experiments the same way I think of made-up statistics [LW · GW]: you should dutifully follow the counterintuitive reasoning through to the end in order to build your intuition, then throw away the explicit result and not rely too much on auxiliary hot takes.
So, outsource your causes to the future. The people there will take care of them more effectively than you can.
- ^
I am implicitly adopting a consequentialist position here: I care about making my altruistic actions effective [LW · GW], not about the platonic truth or virtue of longtermism.
- ^
I assume the far future is overwhelmingly likely to be very much futuristic [LW · GW] or not at all. Even if you don't think future lives are comparable to current lives in any significant manner, you probably still don't want the kind of events which would be necessary to make Earth barren or stagnant in a few centuries.
- ^
According to the first predictions that show up in a Google search, global population will be around 11B and world GDP will have grown by about 25x in 100 years, so assuming resources are allocated similarly to now, I'd take 25*8/11 ~= 18 (with 8B as today's population) as my first approximation of how many times more resources will be devoted to saving lives per capita; a quick sketch of this calculation follows the note below.
(If my argument does not hold up to scrutiny, I think this is the most likely point of failure.)
Note: The population could be much higher due to emulation or space travel (without necessarily large economic growth per capita if ems and colonists are basically slaves or very low-class, which would undermine my argument), and economic growth could be much higher due to AI (which would strengthen my argument; remember we're assuming away extinction risks). Consider other transformative technologies of your liking as appropriate.
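For concreteness, a minimal sketch of this footnote's arithmetic. The 11B and 25x figures are the rough search results mentioned above, not careful projections, and the constant-share-of-GDP assumption is the footnote's.

```python
# Rough arithmetic behind the ~18x figure in footnote 3.
current_population = 8e9    # ~8B people today
future_population = 11e9    # rough projection for 100 years from now
gdp_growth_factor = 25      # rough projection of world GDP growth over 100 years

# Assuming a constant share of GDP goes to saving lives, per-capita resources
# available for that purpose scale with GDP per capita:
resources_per_capita_factor = gdp_growth_factor * current_population / future_population
print(round(resources_per_capita_factor, 1))  # ~18.2
```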
5 comments
Comments sorted by top scores.
comment by nim · 2024-11-02T17:55:39.291Z · LW(p) · GW(p)
I notice that I am surprised: you didn't mention the grandfather problem situation. The existence of future lives is contingent on the survival of those people's ancestors who live in the present day.
Also, on the "we'd probably like for our species to continue existing indefinitely" front, the importance of each individual life can be considered as the percentage of that species which the life represents. So if we anticipate that our current population is higher than our future population, one life in the present has relatively lower importance than one life in the future. But if we expect that the future population will be larger than the present, a present life has relatively higher importance than a future one.
↑ comment by momom2 (amaury-lorin) · 2024-11-02T22:03:31.563Z · LW(p) · GW(p)
- I don't see what you mean by the grandfather problem.
- I don't care about the specifics of who spawns the far future generation; whether it's Alice or Bob, I am only considering numbers here.
- Saving lives now has consequences for the far future insofar as current people are irreplaceable: if they die, no one will make more children to compensate, resulting in a lower total far-future population. Some deaths are less impactful than others for the far future.
- That's an interesting way to think about it, but I'm not convinced; killing half the population does not reduce the chance of survival of humanity by half.
- In terms of individuals, only the last <.1% matter (not sure about the order of magnitude, but in any case it's small as a proportion of the total).
- It's probably more useful to think in terms of events (nuclear war, misaligned ASI -> prevent war, research alignment) or unsurvivable conditions (radiation, killer robots -> build bunker, have kill switch) that can prevent humanity from recovering from a catastrophe.
↑ comment by AnthonyC · 2024-11-02T22:38:01.635Z · LW(p) · GW(p)
I think the grandfather idea is that if you kill 100 people now, and the average person who dies would have had 1 descendant, and the large loss would happen in 100 years (~4 more generations), then the difference in total lives lived between the two scenarios is ~500, not 900. If the number of descendants per person is above ~1.2, then burying the waste means population after the larger loss in 100 years is actually higher than if you processed it now.
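A rough sketch of that arithmetic, assuming ~25-year generations (so ~4 more generations in 100 years) and one descendant per person per generation:

```python
# Sketch of the comment's arithmetic: ~4 more generations in 100 years,
# one descendant per person per generation (illustrative assumptions).
deaths_now = 100
deaths_in_100_years = 1000
generations = 4

# Killing 100 people now also forecloses their descendants over the century:
lives_foregone_by_processing = deaths_now * (generations + 1)  # 100 now + 400 descendants = 500
lives_foregone_by_burying = deaths_in_100_years                # 1000

print(lives_foregone_by_burying - lives_foregone_by_processing)  # 500, not 900
```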
Obviously I'm also ignoring a whole lot of things here that I do think matter, as well.
And of course, as you pointed out in your reply to my comment above, it's probably better to ignore the scenario description and just look at it as a pure choice along the lines of something like "Is it better to reduce total population by 900 if the deaths happen in 100 years instead of now?"
comment by AnthonyC · 2024-11-02T11:34:17.642Z · LW(p) · GW(p)
I appreciate the discussion, but I can't help but be distracted by the specifics of the example scenario. In this case, it just seems obvious to me that the correct answer is to bury the waste and then invest in developing better processing solutions. There's no such thing as waste that can't be safely processed, even in principle, with a century of lead time to prepare. When I read the first few sentences, I actually thought the counterargument was going to be about uncertainty in long term impact projections.
↑ comment by momom2 (amaury-lorin) · 2024-11-02T21:47:19.363Z · LW(p) · GW(p)
Yes, that's the first thing that was talked about in my group's discussion on longtermism. For the sake of the argument, we were asked to assume that the waste processing/burial choice amounted to a trade in lives all things considered... but the fact that any realistic scenario resembling this thought experiment would not be framed like that is the central part of my first counterargument.