[LINK] Cochrane on Existential Risk
post by Salemicus · 2013-08-20T22:42:06.583Z · LW · GW · Legacy · 21 comments
The finance professor John Cochrane recently wrote an interesting blog post. The piece is about existential risk in the context of global warming, but it is really a discussion of existential risk generally; many of his points are highly relevant to AI risk.
If we [respond strongly to all low-probability threats], we spend 10 times GDP.
It's an interesting case of framing bias. If you worry only about climate, it seems sensible to pay a pretty stiff price to avoid a small uncertain catastrophe. But if you worry about small uncertain catastrophes, you spend all you have and more, and it's not clear that climate is the highest on the list...
All in all, I'm not convinced our political system is ready to do a very good job of prioritizing outsize expenditures on small ambiguous-probability events.
He also points out that the threat from global warming has negative beta: higher future growth rates are likely to be associated with a greater risk of global warming, but also with richer descendants. This means both that they will be better able to cope with the threat, and that the damage matters less from a utilitarian point of view, given the diminishing marginal utility of wealth. Attempting to stop global warming therefore has positive beta, and therefore requires a higher rate of return than simple time-discounting.
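To put the discounting point in standard asset-pricing terms (my gloss, not a formula Cochrane writes out), the CAPM required return is

$$ r = r_f + \beta \, (E[r_m] - r_f), $$

so a mitigation project whose payoffs are concentrated in high-growth states has $\beta > 0$, is discounted at $r > r_f$, and the damages it averts are worth less today than risk-free discounting alone would suggest.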
It strikes me that this argument applies equally to AI risk, as fruitful artificial intelligence research is likely to be associated with higher economic growth. Moreover:
The economic case for cutting carbon emissions now is that by paying a bit now, we will make our descendants better off in 100 years.
Once stated this way, carbon taxes are just an investment. But is investing in carbon reduction the most profitable way to transfer wealth to our descendants? Instead of spending say $1 trillion in carbon abatement costs, why don't we invest $1 trillion in stocks? If the 100 year rate of return on stocks is higher than the 100 year rate of return on carbon abatement -- likely -- they come out better off. With a gazillion dollars or so, they can rebuild Manhattan on higher ground. They can afford whatever carbon capture or geoengineering technology crops up to clean up our messes.
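To get a rough feel for the compounding comparison Cochrane is making, here is a minimal sketch; the 7% and 3% rates are illustrative assumptions of mine, not figures from his post, and whether carbon abatement even has a well-defined "rate of return" in this sense is part of what is disputed in the comments below.

```python
# Toy comparison of two ways of transferring $1 trillion to people
# a century from now (all rates invented for illustration).
principal = 1e12        # $1 trillion today
years = 100
r_stocks = 0.07         # assumed real annual return on a stock index
r_abatement = 0.03      # assumed annual "return" on carbon abatement spending

value_stocks = principal * (1 + r_stocks) ** years
value_abatement = principal * (1 + r_abatement) ** years

print(f"stocks after {years} years:    ${value_stocks:,.0f}")     # ~ $868 trillion
print(f"abatement after {years} years: ${value_abatement:,.0f}")  # ~ $19 trillion
```

At those made-up rates the gap is enormous, which is why Cochrane's question has force; the argument stands or falls on whether the two rates really compare that way over a century.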
So should we close down MIRI and invest the funds in an index tracker?
The full post can be found here.
21 comments
comment by AlexMennen · 2013-08-21T00:31:27.008Z · LW(p) · GW(p)
If global warming gets worse, but people get enough richer, then they could end up better off. If an unfriendly intelligence explosion occurs, then it kills everyone no matter how well the economy is doing. His argument only applies to risks of marginal harm to the average quality of life, not to risks of humanity getting wiped out entirely.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2013-08-21T08:30:52.051Z · LW(p) · GW(p)
If global warming gets worse, but people get enough richer, then they could end up better off.
Tautologically, yes. But the two hypotheses are not independent. Global warming is predicted to destroy wealth -- that is the only reason we care about it.
Replies from: wedrifid
↑ comment by wedrifid · 2013-08-21T13:03:27.082Z · LW(p) · GW(p)
If global warming gets worse, but people get enough richer, then they could end up better off.
Tautologically, yes.
This is not tautological. Wealth is highly correlated with wellbeing but not logically equivalent.
Global warming is predicted to destroy wealth -- that is the only reason we care about it.
It seems like you have redefined the meaning of some terms here.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2013-08-25T15:42:49.256Z · LW(p) · GW(p)
This is not tautological. Wealth is highly correlated with wellbeing but not logically equivalent.
The tautology lies in the word "enough".
comment by gwern · 2013-08-21T00:17:57.015Z · LW(p) · GW(p)
Indeed. (Fun for commenters: come up with more. Asteroid impact. Banking system collapse. Massive crop failure from virus or bacteria. Antibiotic resistance....) If we treat all threats this way, we spend 10 times GDP. It's an interesting case of framing bias. If you worry only about climate, it seems sensible to pay a pretty stiff price to avoid a small uncertain catastrophe. But if you worry about small uncertain catastrophes, you spend all you have and more, and it's not clear that climate is the highest on the list.
This seems like a very strange thing for an economics professor to say.
Suppose we make an isomorphic argument:
"Of course, one can buy insurance against, say, a car crash. Shouldn't we pay a bit more car insurance now, though the best guess is that it's not a worthwhile investment, as insurance against such tail risks? But the problem is, we could buy insurance against our house burning down, homeowner's insurance against being robbed, our iPod breaking, our husband dying (or our wife, or our children, or our pets), travel insurance about our upcoming vacation, insurance against losing our job, catastrophic health insurance, legal liability insurance, longevity insurance... (Commenters, have fun listing others.) But if we treat risks this way, we'll wind up spending 10 times our annual income on insuring against risks! It's an interesting case of framing bias: it may sound rational to insure against a house fire or a car crash or an income-earner dying unexpectedly, you spend all you have and more, so it's not clear that car crash insurance is highest on the list."
Doesn't sound quite so clever that time around, does it? But all I did was take the framework of his argument: "if one invests in insurance against X, then because there are also risks Y, Z, A, B, C, which are equally rational to invest in, one will also invest in risks Y...C, and if one invested against all those risks, one will wind up broke; any investment where one winds up broke is a bad investment; QED, investing in insurance against X is a bad investment." and substitute in different, uncontroversial, forms of insurance and risk.
What makes the difference? Why does his framing seem so different from my framing?
Well, it could be that the argument is fallacious in equating risks. Maybe banking system collapse has a different risk from crop collapse, which has a different risk from asteroid impact, and really, we don't care about some of them and so we would not invest in those, leaving us not bankrupt but optimally insured. In which case his argument boils down to 'we should invest in climate change protection iff it's the investment with the highest marginal returns', which is boringly obvious and not insightful at all, because it means that all we need to discuss is the object-level question of where climate change belongs on "the list", and there is no meta-level objection to investing against existential risks at all, contrary to how the post presents it.
Replies from: Salemicus, Lumifer, Dustin
↑ comment by Salemicus · 2013-08-21T08:29:36.268Z · LW(p) · GW(p)
You are treating "investing in preventing X" as the same thing as "insuring against X." They are not the same thing. And they are doubly not the same thing on a society-wide level.
Insurance typically functions to distribute risk, not reduce it. If I get insurance against a house fire, my house is just as likely to burn down as it was before. However, the risk of a house fire is now shared between me and the insurance company. As Lumifer points out, trying to make your house fire-proof (or prevent any of the other risks you list) really would be ruinously expensive.
For threats to civilisation as a whole, there is no-one outside of the planet with whom we can share the risk. Therefore it is not sensible to talk about insurance for them, except in a metaphorical sense.
Replies from: gwern
↑ comment by gwern · 2013-08-21T16:39:58.478Z · LW(p) · GW(p)
You are treating "investing in preventing X" as the same thing as "insuring against X." They are not the same thing. And they are doubly not the same thing on a society-wide level.
Fair enough: certainly one can draw a distinction between spreading risks around and reducing risks, even though in practice the distinction is a bit muddled, inasmuch as insurance companies invest heavily in reducing net risk by fighting moral hazard, funding prevention research, establishing industry-wide codes, and withholding insurance unless best practices are implemented.
So go back to my isomorphic argument, and for every mention of insurance, replace it with some personal action that reduces the risk, e.g. for 'health insurance', swap in 'exercise' or 'caloric restriction' or 'daily wine consumption'.
Does this instantly rescue Cochrane's argument and make the isomorphism sound equally sensible? "You shouldn't try to quit eating so much junk food, because while that reduces your health risks, there are so many risks you could be reducing that it makes no sense to try to reduce all of them, and hence, by the fallacy of division, no sense to try to reduce any of them!"
As Lumifer points out, trying to make your house fire-proof (or prevent any of the other risks you list) really would be ruinously expensive.
So you resolve Cochrane's argument by denying the equality of the risks.
Replies from: Lumifer
↑ comment by Lumifer · 2013-08-21T16:50:35.848Z · LW(p) · GW(p)
no sense to try to reduce any of them!
I think you're misreading Cochrane. He approvingly quotes Pindyck who says "society cannot afford to respond strongly to all those threats" and points out that picking which ones to respond to is hard. Notably, Cochrane says "I'm not convinced our political system is ready to do a very good job of prioritizing outsize expenditures on small ambiguous-probability events."
All that doesn't necessarily imply that you should do nothing -- just that selecting the low-probability threats to respond to is not trivial and that our current sociopolitical system is likely to make a mess out of it. Both of these assertions sound true to me.
↑ comment by Lumifer · 2013-08-21T02:15:14.260Z · LW(p) · GW(p)
What makes the difference?
The difference is that one set of risks is insurable and the other is not.
An insurable risk is one which can be mitigated through diversification. You can insure your house against fire only because there are thousands of other people also insuring their houses against fire. One consequence is that insurance is cheaper than an individual guarantee: it would cost much more to make your specific house entirely fireproof.
The other difference (and that one goes against Cochrane) is that normal insurable risks are survivable (and so you can assign certain economic value / utility / etc. to outcomes) while existential risks are not -- the value/utility of the bad outcome is negative infinity.
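A toy numerical illustration of the diversification point above (the probability, loss, and pool size are invented for illustration, not anything Lumifer computes):

```python
# Why pooling makes fire risk cheap to insure: the per-house share of the
# pooled loss has a far smaller standard deviation than an uninsured loss.
import math

p = 0.01          # assumed annual probability that a given house burns down
loss = 200_000    # assumed loss if it does, in dollars
n = 10_000        # number of houses in the insurance pool

expected_loss = p * loss                       # $2,000 per house per year
sd_single = math.sqrt(p * (1 - p)) * loss      # ~ $19,900 for one uninsured house
sd_pooled = sd_single / math.sqrt(n)           # ~ $199 per house across the pool

print(f"expected annual loss per house: ${expected_loss:,.0f}")
print(f"std dev, one uninsured house:   ${sd_single:,.0f}")
print(f"std dev, per-house share of a {n:,}-house pool: ${sd_pooled:,.0f}")
```

The pooled figure assumes the fires are independent, which is exactly the assumption existential risks break: everyone's house burns down in the same state of the world, so there is no pool to spread the loss across.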
↑ comment by Dustin · 2013-08-21T00:40:33.089Z · LW(p) · GW(p)
(admittedly, I just skimmed the blog post, so I can be easily convinced my tentative position here is wrong)
I'm not sure I see any difference between your proposed isomorphic argument and his argument.
Assuming our level of certainty about risks we can insure against is the same as our level of (un)certainty about existential risks, and assuming the "spending 10 times our annual income" is accurate for both...the arguments sound exactly as "clever" as each other.
I also am not sure I agree with the "boringly obvious and not insightful at all" part. Or rather, I agree that it should be boringly obvious, but given our current obsession with climate change, is it boringly obvious to most people? Or rather, I suppose, the real question is: do most people need the question phrased to them in this way to see it?
I guess what I'm saying is that it doesn't seem implausible to me that if you asked a representative sample of people whether climate change protection was important to invest in, they would say yes and vote for that. And then if you made the boringly obvious argument about determining where it belongs on the list of important things, they'd also say yes and vote for that.
Replies from: gwern
↑ comment by gwern · 2013-08-21T01:14:38.679Z · LW(p) · GW(p)
I'm not sure I see any difference between your proposed isomorphic argument and his argument.
Good, then my isomorphism succeeded. Typically, people try to deny that the underlying logic is the same.
the arguments sound exactly as "clever" as each other.
They do? So if you agree that things like car or health or house insurance are irrational, did you run out and cancel every form of insurance you have and advise your family and friends to cancel their insurance too?
I guess what I'm saying is that it doesn't seem implausible to me is that if you asked a representative sample of people if climate change protection was important to invest in they would say yes and vote for that. And then if you made the boringly obvious argument about determining where it belongs on the list of important things, they'd also say yes and vote for that.
But note that thinking climate change is a big enough risk to invest against has nothing at all to do with his little argument about 'oh, there are so many risks, what are we to do, we can't consume insurance against them all'. Pointing out that there are a lot of options cannot be an argument against picking a subset of options; here's another version: "this restaurant offers 20 forms of cheesecake for dessert, but if I ordered a slice of 1, then why not order all 20? But then, why, I would run out of cash and be arrested and get fat too! So it seems rational to not order any cheesecake at all." Why not just order 1 or 2 of the slices you like best... Arguing about whether you like the strawberry cheesecake better than the chocolate is a completely different argument which has nothing to do with there being 20 possible slices rather than, say, 5.
Replies from: Dustin
↑ comment by Dustin · 2013-08-21T21:34:56.179Z · LW(p) · GW(p)
Good, then my isomorphism succeeded. Typically, people try to deny that the underlying logic is the same.
Not in the way I think you think.
They do? So if you agree that things like car or health or house insurance are irrational, did you run out and cancel every form of insurance you have and advise your family and friends to cancel their insurance too?
No, because we can quantify the risks and costs of those things and make good decisions about their worth.
In other words, if I assume that you intended, for the sake of your argument, that we have the same amount of knowledge about insurance as we do about these existential risks, then the two arguments seem exactly as clever as each other: neither is terribly clever, because they both point out that we need more information, and well... duh. (However, see my argument about just how obvious "duh" things actually are.)
If I don't assume that you intended, for the sake of your isomorphism, that we have the same amount of knowledge about insurance as we do about these existential risks, then the two arguments aren't so isomorphic.
But note that thinking climate change is a big enough risk to invest against has nothing at all to do with his little argument about 'oh, there are so many risks, what are we to do, we can't consume insurance against them all'.
If this is the argument Cochrane is endorsing, I don't support it, but that's not exactly what I got out of his post. Lumifer's reading is closer to what I got.
comment by wedrifid · 2013-08-21T12:51:03.269Z · LW(p) · GW(p)
So should we close down MIRI and invest the funds in an index tracker?
Are you in some way involved with MIRI in a strategic decision-making capacity? If not, the "we" seems out of place. In my 'we', we can donate to MIRI or otherwise support or oppose them in various ways, but we can't close them down without breaking some rather significant laws.
comment by roystgnr · 2013-08-21T15:39:24.101Z · LW(p) · GW(p)
this argument applies equally to AI risk, as fruitful artificial intelligence research is likely to be associated with higher economic growth
Yes, fruitful AI research is likely to be associated with higher economic growth. And fruitful AI research is the risk factor here, so we have positive beta.
The existential risk with AI isn't "we won't develop AI and then the future of humanity won't be as awesome", it's "we will develop AI which turns out to be un-Friendly and then the future of humanity won't be".
comment by Richard_Kennaway · 2013-08-21T08:15:18.185Z · LW(p) · GW(p)
So should we close down MIRI and invest the funds in an index tracker?
If no-one does the work, the work will not be done.
Replies from: wedrifid, Salemicus
↑ comment by wedrifid · 2013-08-21T12:54:08.560Z · LW(p) · GW(p)
If no-one does the work, the work will not be done.
As Salemicus observes this seems to be a non-sequitur. The answer indicates incomprehension of the question.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2013-08-25T15:41:55.653Z · LW(p) · GW(p)
See my answer to Salemicus.
↑ comment by Salemicus · 2013-08-21T12:45:13.456Z · LW(p) · GW(p)
If no-one does the work, the work will not be done.
Correct. So should that work be done, or should the resources be put to alternative uses?
In other words, would you like to engage with Professor Cochrane's arguments?
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2013-08-25T15:40:27.442Z · LW(p) · GW(p)
Cochrane's arguments don't amount to much. There are two. One is that BIG1 x LOTS > BIG2, the unspecified numbers being respectively the cost of addressing global warming, the number of similarly expensive major threats, and total human resources. No numbers are attached, nor any argument given to establish the inequality. An argument that consists of saying "look how big (or small) this number is!" is worthless unless some effort is made to say specifically why the number is in fact big (or small) enough to do the work demanded of it.
His other argument is that global warming interacts with our emissions. If warming reduces wealth, that will reduce emissions; if emissions are high, we must be very wealthy to be emitting so much, and better able to spend money on not doing that. While this argument, unlike the first, is not dead on arrival, a great deal more work is required for it to be useful. What about lags? How big is this effect? Is there an equilibrium point, and if so, where will it be and what will it look like? Population is self-limiting too, but the Malthusian equilibrium is not where we want to be.
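The questions in the preceding paragraph can be made concrete with a deliberately crude feedback loop; every number below is invented purely to illustrate that lags and magnitudes, not the mere existence of the feedback, determine where things settle:

```python
# Crude wealth-emissions-damage feedback (all parameters invented;
# nothing here is calibrated or taken from Cochrane or the comment above).
wealth = 1.0          # world wealth, arbitrary units
warming = 0.0         # warming above baseline, arbitrary units
growth = 0.03         # assumed underlying annual growth rate
emission_rate = 0.10  # assumed warming added per unit of wealth per year
damage = 0.02         # assumed growth lost per unit of warming

for year in range(101):
    if year % 20 == 0:
        print(f"year {year:3d}: wealth {wealth:6.2f}, warming {warming:6.2f}")
    warming += emission_rate * wealth
    wealth *= 1 + growth - damage * warming
```

Different invented parameters give steady growth, collapse, or anything in between, which is the point: whether the feedback is self-correcting in any useful sense is an empirical question the argument leaves open.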
Certainly, the prediction of long-term growth rates is complicated by the effects of growth on climate and of climate on growth. However, he forgets all that when he goes on to compare acting with investing in order to act later:
Instead of spending say $1 trillion in carbon abatement costs, why don't we invest $1 trillion in stocks? If the 100 year rate of return on stocks is higher than the 100 year rate of return on carbon abatement -- likely -- they come out better off.
100 year rate of return? No-one can know that (a point made by the article he starts off from). And how do you invest $1T in stocks, so as to cause total human wealth to increase? Buying stocks personally can increase one's personal wealth (by increasing one's personal money, which can then be exchanged for the things one values); how does moving $1T from wherever it was into stocks do this on a global scale?
And of course, he is not talking about MIRI at all, which is a special case. MIRI claims that, for reasons depending on the specific properties of the hazard of UFAI, the work must begin now rather than be postponed. If nobody does the work, the work will not be done.