X-risks are tragedies of the commons
post by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-02-07T02:48:25.825Z · LW · GW · 19 comments
- Safety from Xrisk is a common good: We all benefit by making it less likely that we will all die.
- In general, people are somewhat selfish, and value their own personal safety over that of another (uniformly) randomly chosen person.
- Thus individuals are not automatically properly incentivized to safeguard the common good of safety from Xrisk.
I hope you all knew that already ;)
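To make the gap between those bullets concrete, here is a minimal toy model (all the numbers, `N`, `v`, `c`, and `eps`, are invented for illustration): each agent can pay a private cost for a unit of safety effort that slightly reduces everyone's chance of dying, but a selfish agent only counts the benefit to themselves.

```python
# Toy public-goods model of x-risk reduction (all numbers invented for illustration).
N = 1_000_000    # number of agents
v = 1_000_000    # value each agent places on their own survival
c = 10.0         # private cost of one unit of safety effort
eps = 1e-9       # reduction in extinction probability per unit of effort

private_benefit = eps * v      # what I gain from my own unit of effort: 0.001
social_benefit = eps * N * v   # what everyone combined gains from it: 1000.0

print(private_benefit > c)   # False -> a selfish agent won't pay, and underinvests
print(social_benefit > c)    # True  -> the same effort is socially well worth it
```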
19 comments
Comments sorted by top scores.
comment by Mati_Roy (MathieuRoy) · 2019-03-30T23:09:33.323Z · LW(p) · GW(p)
From the original article by Nick Bostrom: "Reductions in existential risks are global public goods [13] and may therefore be undersupplied by the market [14]. Existential risks are a menace for everybody and may require acting on the international plane. Respect for national sovereignty is not a legitimate excuse for failing to take countermeasures against a major existential risk." See: https://nickbostrom.com/existential/risks.html
comment by Dagon · 2019-02-07T18:56:14.972Z · LW(p) · GW(p)
I see what you're saying, but I don't think it's a great fit for the concept, and I don't think it's helpful to reinforce what similarities there are.
Traditional commons problems (inefficient and destructive use of resources due to the lack of a mechanism to encourage efficient use) are typically solved by assigning an owner or curator, who decides what limits to impose (often in the form of an entry price or usage rent, which means the resource gets used by those who think it's more valuable than the price, and provides revenue to replenish or maintain the property). Tragedy of the commons is generally applied to non-excludable (can't keep people out), rivalrous (usage is not unlimited) goods. And the solution is exclusion.
X-risk mitigation is more like the "should we build a lighthouse" problem - it's non-excludable but also non-rivalrous, since someone using it doesn't keep anyone else from using it. In fact, many beneficiaries don't even know they're taking advantage of it. Here, the difficulty is knowing what level of investment is efficient. You can't get a price signal, so there's no way to know what level of cost for any given project would make people prefer to take the risk rather than spend the cost. There's a lower bound on what people will voluntarily contribute (which is how many lighthouses actually got built - through sponsorships and donations by townspeople and ship owners). In modern times, this is often a topic where government officials make claims that they know the right things, and use it as a reason to justify taxes. Sometimes, they may even be right.
Replies from: jacobjacob, capybaralet
↑ comment by jacobjacob · 2019-02-09T21:00:03.333Z · LW(p) · GW(p)
In case others haven't seen it, here's the standard matrix classifying goods on the "rivalry" and "excludability" axes: rivalrous + excludable = private goods; non-rivalrous + excludable = club goods; rivalrous + non-excludable = common-pool resources; non-rivalrous + non-excludable = public goods.
↑ comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-02-11T22:43:34.891Z · LW(p) · GW(p)
I think it is rivalrous.
Xrisk mitigation isn't the resource; risky behavior is the resource. If you engage in more risky behavior, then I can't engage in as much risky behavior without pushing us over into a socially unacceptable level of total risky behavior.
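A sketch of that framing, under the assumption of a hypothetical cap on socially acceptable total risk (the `RISK_CAP` value and the usage numbers below are made up): whatever risky behavior others engage in shrinks what is left for me, which is exactly rivalry.

```python
# Risky behavior as a rivalrous common-pool resource (hypothetical numbers).
RISK_CAP = 1.0  # total risky behavior society can tolerate before risk is unacceptable

def remaining_budget(others_usage):
    """Risk budget left for me, given everyone else's risky behavior."""
    return max(0.0, RISK_CAP - sum(others_usage))

print(remaining_budget([0.2, 0.3]))  # 0.5 -- some headroom left for my risky behavior
print(remaining_budget([0.6, 0.5]))  # 0.0 -- others have exhausted the commons
```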
Replies from: Dagon
↑ comment by Dagon · 2019-02-14T19:14:16.137Z · LW(p) · GW(p)
I think existence is the resource. Risky behavior and risk-mitigating behavior both affect the probability that this resource persists into the future. The fundamental resource (enjoying existence while disaster has not yet happened) is non-rivalrous and non-excludable. Funding for risk-mitigating projects is both rivalrous and excludable - that's just normal goods. Allowing risk-increasing behaviors is ... difficult to model.
comment by John_Maxwell (John_Maxwell_IV) · 2019-02-07T10:24:01.178Z · LW(p) · GW(p)
True, but from a marketing perspective it's better to emphasize the fact that reducing x-risk is in each individual's self-interest even if no one else is doing it. Also, instead of talking about AI arms races, we should talk about why AI done right means a post-scarcity era whose benefits can be shared by all. There's no real benefit to being the person who triggers the post-scarcity era.
Replies from: capybaralet
↑ comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-02-07T19:42:31.506Z · LW(p) · GW(p)
I dunno... I think describing them as a tragedy of the commons can help people understand why the problems are challenging and deserving of attention.
comment by Rohin Shah (rohinmshah) · 2019-02-08T21:37:58.116Z · LW(p) · GW(p)
Tragedies of the commons usually involve some personal incentive to defect, which doesn't seem present in the framework you have. Of course, you could get such an incentive if you include race dynamics where safety takes "extra time", and then it would look like a tragedy of the commons (though "race to the bottom" seems like a more appropriate name).
Replies from: capybaralet
↑ comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-02-11T22:38:38.288Z · LW(p) · GW(p)
If there is a cost to reducing Xrisk (which I think is a reasonable assumption), then there will be an incentive to defect, i.e. to underinvest in reducing Xrisk. There's still *some* incentive to prevent Xrisk, but to some people everyone dying is not much worse than just them dying.
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2019-02-12T18:50:47.848Z · LW(p) · GW(p)
Cool, I think we agree.
Replies from: capybaralet
↑ comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-02-14T04:56:34.573Z · LW(p) · GW(p)
I'm not sure. I was trying to disagree with your top level comment :P
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2019-02-14T18:27:41.644Z · LW(p) · GW(p)
Ah, you're right, we don't really agree, I misunderstood.
I think we basically agree on the actual object-level thing, and I'm mostly disagreeing with the use of "tragedy of the commons" as a description of it. I don't think this is important, though, so I'd prefer to drop it.
Tbc, I agree with this:
If there is a cost to reducing Xrisk (which I think is a reasonable assumption), then there will be an incentive [...] to underinvest in reducing Xrisk. There's still *some* incentive to prevent Xrisk, but to some people everyone dying is not much worse than just them dying.
comment by Viliam · 2019-02-07T10:03:18.785Z · LW(p) · GW(p)
An important aspect is that people disagree about which (if any) X-risks are real.
That makes it quite different from the usual scenario, where people agree that the situation sucks but each of them has individual incentives to contribute to making it worse. Such a situation allows solutions like collectively agreeing to impose a penalty on people who make things worse (thus changing their individual incentive gradient). But if people disagree, imposing the penalty is politically impossible.
Replies from: gjm
↑ comment by gjm · 2019-02-07T18:11:33.741Z · LW(p) · GW(p)
The boundary between disagreement about whether something is real and differing preferences about the costs of mitigation is, alas, somewhat porous. E.g., when there was debate about the dangers of smoking, you were much less likely to think there was a lot of danger if you were either a smoker or an employee of a tobacco company than if you were neither of those things.
I don't know how strong this effect is in the domain of existential risk; it might depend on what things one counts as existential risks.
comment by RobertM (T3t) · 2019-02-07T06:32:25.667Z · LW(p) · GW(p)
I think there's an important distinction between x-risks and most other things we consider to be tragedies of the commons: the reward for "cooperating" against "defectors" in an x-risk scenario (putting in disproportionate effort/resources to solve the problem) is still massively positive, conditional on the effort succeeding (and in many calculations, even prior to that conditional). In most central examples of tragedies of the commons, the payoff for being a "good actor" surrounded by bad actors is net-negative, even assuming the stewardship is successful.
The common thread is that there might be a free-rider problem in both cases, of course.
Replies from: donald-hobson, rossry
↑ comment by Donald Hobson (donald-hobson) · 2019-02-07T17:50:13.823Z · LW(p) · GW(p)
This is making the somewhat dubious assumption that X risks are not so neglected that even a "selfish" individual would work to reduce them. Of course, in the not too unreasonable scenario where the cosmic commons is divided up evenly and you use your portion to make a vast number of duplicates of yourself, then, if your utility is linear in copies of yourself, the payoff would be vast. Or you might hope to live for a ridiculously long time in a post-singularity world.
The effect that a single person can have on X risks is small, but for someone selfish with no time discounting, working on them would still be a better option than hedonism now. Although a third alternative, of sitting in a padded room being very very safe, could be even better.
↑ comment by rossry · 2019-02-07T09:29:36.884Z · LW(p) · GW(p)
I acknowledge that there's a distinction, but I fail to see how it's important. When you (shut up and) multiply out the probabilities, the expected personal reward for putting in disproportionate resources is negative, and the personal-welfare-optimizing level of effort is lower than the social-welfare-optimizing level.
Why is it important that that negative expected reward is made up of a positive term plus a large negative term (X-risk defense), instead of a negative term plus a different negative term (overgrazing)?
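Spelling that multiplication out with invented numbers: the payoff conditional on success is enormous, yet the expected personal return on a disproportionate contribution still comes out negative, so the sign of the expectation, not the sign of the conditional payoff, is what drives the incentive.

```python
# Expected personal value of a disproportionate x-risk contribution (invented numbers).
cost = 1_000.0           # my extra, disproportionate contribution
delta_p = 1e-7           # how much that contribution raises P(survival)
payoff_if_saved = 1e9    # my personal payoff if the world survives: hugely positive

expected_gain = delta_p * payoff_if_saved   # 100.0
print(expected_gain - cost)   # -900.0: negative in expectation despite the positive conditional payoff
```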
comment by Mati_Roy (MathieuRoy) · 2019-03-23T01:19:00.609Z · LW(p) · GW(p)
related comment: https://forum.effectivealtruism.org/posts/F7hZ8co3L82nTdX4f/do-eas-underestimate-opportunities-to-create-many-small#XGSQX45NkAN9qSB9A [EA(p) · GW(p)]
comment by Andaro · 2019-02-14T09:10:01.861Z · LW(p) · GW(p)
Not all proposed solutions to x-risk fit this pattern: if government spends taxes to build survival shelters that will shelter only a chosen few, who will then go on to perpetuate humanity in case of a cataclysm, most taxpayers receive no personal benefit.
Similarly, if government-funded programs solve AI value loading problems and the ultimate values don't reflect my personal self-regarding preferences, I don't benefit from the forced funding and may in fact be harmed by it. This is also true for any scientific research whose effect can be harmful to me personally even if it reduces x-risk overall.