In Defence of Temporal Discounting in Longtermist Ethics
post by DragonGod · 2022-11-13T21:54:38.706Z · LW · GW · 4 comments
Comments sorted by top scores.
comment by antanaclasis · 2022-11-14T18:34:24.447Z · LW(p) · GW(p)
It seems like this might be double-counting uncertainty? Normal EV-type decision calculations already (should, at least) account for uncertainty about how our actions affect the future.
Adding explicit time-discounting on top of that seems like it would over-adjust, with the extra adjustment (time) being only an imperfect proxy for the thing we actually care about (uncertainty).
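To make the double-counting worry concrete, here is a minimal sketch (Python, with an invented function and made-up numbers, not anything from the post): the probability that our action still produces its effect already carries our uncertainty about the future, so an extra time-discount factor penalizes that same uncertainty a second time.

```python
def expected_value(benefit, p_effect_survives, discount_rate=0.0, years=0):
    """Expected value of an action whose effect lands `years` from now.

    p_effect_survives: probability the action actually produces the benefit
        (this is where uncertainty about the far future already lives).
    discount_rate: an *additional* pure time preference applied on top.
    """
    time_discount = (1 - discount_rate) ** years
    return benefit * p_effect_survives * time_discount

# Uncertainty-only accounting: a benefit in 1,000 years that we are 1% sure
# we can actually bring about.
print(expected_value(benefit=1_000_000, p_effect_survives=0.01))  # 10000.0

# Adding a 0.1%/year time discount on top shrinks this by a further factor of ~e^-1,
# even though the 1% already expressed how unsure we are that the effect persists.
print(expected_value(benefit=1_000_000, p_effect_survives=0.01,
                     discount_rate=0.001, years=1_000))  # ~3677
```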
comment by avturchin · 2022-11-14T09:34:02.580Z · LW(p) · GW(p)
Yes, the more remote a person is, the larger the number of other people who can affect them from a similar distance, so my share of the impact is typically very small, unless I am in a very special position that lets me affect that future person.
For example, suppose I plant a landmine that will self-detonate in either 100 or 10,000 years, and when it detonates it will likely kill a random person. If I discount future people, I will choose 10,000 years, even if that kills more people in the future. However, if I think humanity will likely be extinct by then, it may still be a reasonable bet.
↑ comment by DragonGod · 2022-11-14T12:54:02.426Z · LW(p) · GW(p)
Well, I was arguing that we should discount in proportion to our uncertainty. And it seems you're pretty confident that it would kill future people (and more of them, and we don't expect people in 10,000 years to want to die any more than people in 100 years do), so I think I would prefer to plant the 100-year landmine.
That said, expectations of technological progress, of a more resilient future, of their being better able to deal with the landmine, etc. mean that in practice (outside the thought experiment), I'll probably plant the 10,000-year landmine, as I expect fewer people to actually die.
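As a rough illustration of the arithmetic behind this reply (Python, with probabilities invented purely for the thought experiment): the 10,000-year mine can come out better in expectation not because future people count for less, but because the harm is less likely to land on anyone.

```python
def expected_deaths(p_humanity_survives, p_someone_hit, victims_if_hit):
    """Expected deaths from the mine, under made-up illustrative probabilities."""
    return p_humanity_survives * p_someone_hit * victims_if_hit

# 100-year mine: humanity almost certainly still around, ordinary vulnerability.
mine_100 = expected_deaths(p_humanity_survives=0.95,
                           p_someone_hit=0.5,
                           victims_if_hit=1.0)

# 10,000-year mine: a real chance of extinction by then, and a more resilient
# future that can detect or defuse old mines more easily.
mine_10k = expected_deaths(p_humanity_survives=0.5,
                           p_someone_hit=0.05,
                           victims_if_hit=1.0)

print(mine_100, mine_10k)  # 0.475 vs 0.025
```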
comment by RogerDearnaley (roger-d-1) · 2023-12-31T06:59:13.324Z · LW(p) · GW(p)
Some actions' future consequences are much easier to extrapolate into the future than others. If I double-park and block someone else's car in so they can't leave, it's pretty predictable that if they get back to their car before I do they will be delayed and annoyed. The long-term effects of this on their life ten years later are far less predictable: like the effects of a flap of a butterfly's wings on the evolution of the weather, they could be good, bad, or negligible. However, if I instead kill them, they will very predictably still be dead ten years later.
Similarly, human-extinction risks have consequences that are unusually easy to propagate out into the distant future; if we go extinct, then we will stay extinct (well, unless some aliens come along and decide to de-extinct us from our genetic and biochemical records).
So yes, it's necessary to discount for the moral consequences of uncertainty due to computational and data limitations, and in many cases that leads to discounting the future, but not by a simple fixed exponential temporal discounting rate.
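A hedged sketch of what that could look like (Python, with invented "predictability half-lives" chosen only for illustration): the weight we place on a predicted consequence decays at a rate that depends on the kind of consequence, rather than at one fixed exponential rate for everything.

```python
def epistemic_weight(years, half_life):
    """Weight on a predicted consequence `years` out, given how quickly our
    ability to predict that kind of consequence decays (its 'half-life')."""
    return 0.5 ** (years / half_life)

# Butterfly-effect consequences (e.g. the mood of someone whose car I blocked in)
# lose predictability within years; extinction-level consequences propagate
# almost losslessly, since "dead stays dead".
for years in (10, 100, 1_000):
    print(years,
          round(epistemic_weight(years, half_life=5), 4),          # everyday effects
          round(epistemic_weight(years, half_life=1_000_000), 4))  # extinction
```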