Futarchy, Xrisks, and near misses
post by Stuart_Armstrong · 2017-06-02T08:02:13.561Z
Crossposted at the Intelligent Agent Forum.
All the clever ways of getting betting markets to take xrisks into account suffer from one big flaw: the rational xrisk bettor only makes money if the xrisk actually happens.
Now, the problem isn't that "when everyone is dead, no-one can collect bets". Robin Hanson has suggested some interesting ideas involving tickets for refuges (shelters from the disaster), and many xrisks will be either survivable (they are risks, not certainties, after all) or will take some time to reach extinction (such as a nuclear winter leading to a cascade of failures). Even if markets are likely to collapse after the event, they are not certain to collapse, and in theory we can also price in efforts to increase the resilience of markets and see how changes in that resilience affect the prices of refuge tickets.
The main problem, however, is just how irrational people are about xrisks, and how little discipline the market can bring to them. Anyone who strongly *over-estimates* the probability of an xrisk can expect to gradually lose all their money if they act on that belief. But someone who under-estimates xrisk probability will not suffer until an xrisk actually happens. And even then, they will only suffer in a few specific cases (where refuge tickets are actually honoured and those without them suffer worse fates). This is, in a way, the ultimate Talebian black swan: huge market crashes are far more common and understandable than xrisks.
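As a toy illustration of this asymmetry (with made-up numbers, not real estimates): suppose contracts paying out on a catastrophe trade at an exactly fair price. The buyer of such contracts bleeds money almost every period, while the seller collects premiums almost every period - whichever of them is actually closer to the truth:

```python
import random

random.seed(0)

P_TRUE = 0.01   # assumed "true" annual probability of catastrophe (illustrative)
PRICE = 0.01    # market price of a contract paying 1 if catastrophe occurs
YEARS = 50      # simulation horizon
STAKE = 10      # contracts traded each year

buyer, seller = 0.0, 0.0   # cumulative profit and loss
for year in range(YEARS):
    catastrophe = random.random() < P_TRUE
    payout = 1.0 if catastrophe else 0.0
    # The over-estimator buys contracts: pays the price, receives the payout.
    buyer += STAKE * (payout - PRICE)
    # The under-estimator sells them: collects the price, owes the payout.
    seller += STAKE * (PRICE - payout)

print(f"catastrophe-buyer P&L after {YEARS} years:  {buyer:+.2f}")
print(f"catastrophe-seller P&L after {YEARS} years: {seller:+.2f}")
```

The market's feedback is almost entirely one-sided: it chastises the catastrophe-buyer year after year, and rewards the catastrophe-seller right up until the moment it's too late.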
Since that's the case, it might be better to set up a market in near misses (an idea I've heard before, but can't source right now). A large meteor that shoots between the Earth and the Moon; conventional wars involving nuclear powers; rates of nuclear or biotech accidents. All of these are survivable and recurring, so the market should be much better at converging, with the overoptimistic repeatedly chastised as well as the overpessimistic.
Comments
comment by turchin · 2017-06-02T16:21:06.850Z
As I have written before, in other fields roughly 1 accident happens per 100-400 near misses, so this ratio could be used to price near-misses. http://lesswrong.com/lw/mx2/what_we_could_learn_from_the_frequency_of/
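A minimal sketch of how that ratio might be applied, taking the 1-accident-per-100-400-near-misses figure at face value (the near-miss rate used below is purely illustrative):

```python
# Sketch: turn an observed near-miss rate into an implied accident
# probability band, using the cited ratio of roughly one accident
# per 100-400 near misses. The 0.5/year rate below is illustrative.

def implied_accident_prob(near_misses_per_year, low_ratio=100, high_ratio=400):
    """Return a (low, high) band for the implied annual accident probability."""
    return (near_misses_per_year / high_ratio,
            near_misses_per_year / low_ratio)

low, high = implied_accident_prob(0.5)
print(f"implied annual accident probability: {low:.2%} to {high:.2%}")
# -> roughly 0.1% to 0.5% per year, treating the rate as a probability
#    (a fine approximation at these magnitudes)
```

A near-miss market price could thus be read off directly as a band on the accident probability it implies.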
comment by Yosarian2 · 2017-06-03T18:42:12.128Z
Interesting idea.
So, let's think about this. Say we're talking about the Xrisk of nuclear war. Which of these would count as "near misses" in your mind?
Cuban missile crisis (probably the clearest case)
End of the Korean War, when General Douglas MacArthur was pushing for nuclear weapons to be used against China
Berlin Airlift (IMHO this came very close to a WWIII scenario between Russia and the US, but the USSR didn't test their first nuclear weapon until 1949, so while a WWIII that started in 1948 could have gone nuclear, there probably wouldn't have been enough nuclear weapons for it to be a true x-risk?)
The incident in 1983 when Lieutenant Colonel Stanislav Petrov got the false report of a US launch and decided not to pass it on to his superiors
A bomber carrying 2 nuclear weapons crashed in North Carolina, and apparently one of the bombs came very close to detonating: https://en.wikipedia.org/wiki/1961_Goldsboro_B-52_crash
And Wikipedia lists another 8 nuclear close calls I hadn't even heard of before searching for it:
https://en.wikipedia.org/wiki/List_of_nuclear_close_calls
It seems like it might be hard to define exactly what counts as a "close call", but out of those, which ones would you count?
Edit: And nuclear war is probably the easiest one to measure. If there was a near miss that almost resulted in, say, a genetically engineered bioweapon being released, odds are we never would have heard about it. And I doubt anyone would even recognize something that was a near miss in terms of UFAI or similar technologies.
↑ comment by Stuart_Armstrong · 2017-06-04T10:35:53.744Z
If there was a near miss that almost resulted in, say, a genetically engineered bioweapon being released
Accidental releases of diseases from labs (which happen depressingly often) or other containment failures could be used.
UFAI is, I agree, very hard to find near misses for.
↑ comment by turchin · 2017-06-03T23:23:24.703Z
The definition of near-misses used in other fields is something like: a situation where urgent measures were needed to prevent an accident, such as the use of emergency braking.
However, some near-misses may not be recognised as such, and those can't be used in statistics or a prediction market.
comment by Lumifer · 2017-06-03T23:36:05.647Z
The main problem, however, is just how irrational people are about xrisks ... Anyone who strongly over-estimates the probability of an xrisk can expect to gradually lose all their money if they act on that belief. But someone who under-estimates xrisk probability will not suffer until an xrisk actually happens
In which sense are you using the word "irrational" here? In the situation as you describe it, it seems quite rational to bet as if underestimating the xrisk probability.
might be better to set up a market in near misses
Markets need to be active and liquid to provide useful information. Why would people commit capital to the market in near misses?
↑ comment by Stuart_Armstrong · 2017-06-04T10:34:19.912Z
In which sense are you using the word "irrational" here? In the situation as you describe it, it seems quite rational to bet as if underestimating the xrisk probability.
Both over- and underestimates are irrational, but the over-estimator can expect market feedback that either corrects them or drives them out of the market. The under-estimator cannot expect feedback except once, when it's too late. There's a whole host of human biases at play here - following the crowd, learning best from immediate feedback, general incompetence with low-probability, high-impact events - that point towards xrisk probabilities being underestimated by the market.
Why would people commit capital to the market in near misses?
That seems to be the standard issue with funding futarchy betting markets in the first place. Generally such a market needs some outside funding to function.
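One standard way such outside funding works - not spelled out in this thread, though it is Hanson's own mechanism, so the connection seems fair - is the logarithmic market scoring rule (LMSR), where a sponsor underwrites all trades and their worst-case loss is capped at b·log(number of outcomes). A minimal sketch, with illustrative parameter values:

```python
import math

class LMSR:
    """Hanson's logarithmic market scoring rule: a sponsor-subsidised
    market maker whose worst-case loss is b * log(n_outcomes)."""

    def __init__(self, n_outcomes, b=100.0):
        self.b = b                    # liquidity parameter, set by the sponsor
        self.q = [0.0] * n_outcomes   # net shares sold of each outcome

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, i):
        """Current price of outcome i, interpretable as its probability."""
        denom = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[i] / self.b) / denom

    def buy(self, i, shares):
        """Charge a trader for buying `shares` of outcome i."""
        before = self._cost(self.q)
        self.q[i] += shares
        return self._cost(self.q) - before

# Two outcomes: "near miss this year" vs "no near miss".
market = LMSR(n_outcomes=2, b=100.0)
print(f"initial implied probability: {market.price(0):.3f}")  # 0.500
paid = market.buy(0, 50)                                      # a believer buys in
print(f"trader paid {paid:.2f}; new implied probability: {market.price(0):.3f}")
```

The sponsor's subsidy is thus a known, capped cost rather than an open-ended commitment, which makes even a thin market in near misses potentially fundable.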
↑ comment by dogiv · 2017-06-06T02:56:19.001Z
It seems like there's also an issue with risk aversion. In regular betting markets there are enough bets that you can win some and lose some, and the risks can average out. But if you bet substantially on x-risks, you will get only one low-probability payout. Even if you assume you'll actually get that one (relatively large) payout, the marginal value will be greatly decreased. To avoid that problem, people will only be willing to bet small amounts on x-risks. The people betting against them, though, would be willing to make a variety of large bets (each with low payoff) and thereby carry almost no risk.
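A rough numerical sketch of this point, assuming a log-utility bettor and purely illustrative stakes: two positions with identical (zero) expected value, one concentrated in a single 1%-probability payout and one spread across a hundred independent small bets, have very different certainty equivalents:

```python
import math
import random

random.seed(1)

WEALTH = 100.0    # starting wealth (illustrative)
P = 0.01          # per-event probability
ODDS = 99.0       # actuarially fair odds for a 1% event
TRIALS = 20_000   # Monte Carlo samples

def certainty_equivalent(log_utilities):
    """Wealth level a log-utility bettor finds equivalent to the gamble."""
    return math.exp(sum(log_utilities) / len(log_utilities))

# (a) One large fair bet: stake 10 on a single 1%-probability event.
single = []
for _ in range(TRIALS):
    won = random.random() < P
    wealth = WEALTH + 10 * ODDS if won else WEALTH - 10
    single.append(math.log(wealth))

# (b) The same total stake spread across 100 independent 1% events.
spread = []
for _ in range(TRIALS):
    wealth = WEALTH
    for _ in range(100):
        won = random.random() < P
        wealth += 0.1 * ODDS if won else -0.1
    spread.append(math.log(wealth))

print(f"certainty equivalent, one big bet:    {certainty_equivalent(single):.2f}")
print(f"certainty equivalent, 100 small bets: {certainty_equivalent(spread):.2f}")
```

The diversified book is worth nearly its full expected value to a risk-averse bettor, while the single skewed bet is worth noticeably less - which is exactly the asymmetry between those betting for and against xrisks.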
↑ comment by Stuart_Armstrong · 2017-06-06T08:21:51.859Z
Yes. And in practice, making repeated small bets drives expected utility towards the expected value of the money, while one large bet doesn't.
↑ comment by Lumifer · 2017-06-04T22:36:55.389Z
Both over- and underestimates are irrational
Do you think that in this particular case it's worth drawing a distinction between "making an estimate" and "making a market bet on the basis of an estimate"?
This situation resembles Pascal's Mugging a bit.
↑ comment by Stuart_Armstrong · 2017-06-05T13:05:26.530Z
Do you think that in this particular case it's worth drawing a distinction between "making an estimate" and "making a market bet on the basis of an estimate"?
Only weakly. The probability of, e.g., nuclear war is not sufficiently low to qualify as a Pascal's Mugging, and investors do regularly consider things of <1% probability.