How to place a bet on the end of the world

post by DirectedEvolution (AllAmericanBreakfast) · 2022-04-20T18:24:18.212Z · LW · GW · 12 comments

Edit: This post is about a practical method for structuring a bet. It's not about how to extract information about people's true confidence levels from the betting odds.

Bryan Caplan and Eliezer Yudkowsky figured out a way to place a meaningful bet on the end of the world[1]. Bryan described their simple approach in his April 2022 80,000 Hours podcast interview.

In brief, Bryan assigns a lower probability than Eliezer to the world ending by January 1, 2030. Of course, if the world ends by that date, Eliezer can't collect his winnings. So Bryan prepays Eliezer now, and Eliezer pays him back if he loses the bet.

This works for friendly bets. Bryan points out that they need to account for interest. To that, I'd add both risk and opportunity cost. Commenters have changed my mind, however, about whether we can really learn much about the participants' confidence levels from such a bet. I elaborate on this in the comments.

Here's the relevant part of the transcript, under "Bryan's Betting Record" starting at 1:48:19 in the podcast episode.

Bryan Caplan: Let’s see. Well, I’ve already done one huge [bet against the effective altruism community], and this is that machines are going to kill us — or at least do something terrible to us — in the medium term. So I literally have an end-of-the-world bet with Eliezer Yudkowsky.

Rob Wiblin: Oh wow. I didn’t know that.

Bryan Caplan: Yeah. Which many people believe cannot be made, but it’s super easy. The person who disbelieves in the end of the world just pays the money now. And then if the world does not end, the loser pays back with whatever the odds are.

Rob Wiblin: I can’t believe I didn’t think of that.

Bryan Caplan: Yeah. So Eliezer, we actually have a bet. It takes a little effort to understand the bet, because his view is so specific. He said, “Look, I want a bet on there will no longer be any human beings on the surface of the Earth on January 1, 2030.” I was willing to give him, like, “How about all of human extinction?” — “No, no, no, no, no. There could still be humans in mine shafts. That’s OK. But not the surface of the Earth.” And I’m like, “All right. If that’s such a big deal to you, fine, we’ll make it the surface of the Earth, whatever.” But yes.

Bryan Caplan: So anyway, we have a bet, where I don’t remember the exact odds. It might be just like two-to-one. And I prepaid, so implicitly there’s interest. So it’s not as good as it seems.

  1. ^

    It's not in Bryan's public betting inventory at the time of publication of this post, but he describes it in more detail here.

12 comments


comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-04-20T20:38:42.607Z · LW(p) · GW(p)

The problem with this is that the normal market offers better odds. Just take out a low-interest loan.

The other problem with this is that money isn't important right now. I'm more interested in schemes to bet reputation/status or labor.

(I thought about this a bit last year: https://www.lesswrong.com/posts/4FhiSuNv4QbtKDzL8/how-can-i-bet-on-short-timelines and https://www.lesswrong.com/posts/kYa4dHP5MDnqmav2w/is-this-a-good-way-to-bet-on-short-timelines)

comment by DirectedEvolution (AllAmericanBreakfast) · 2022-04-20T22:24:30.407Z · LW(p) · GW(p)

From your linked post:

Money is only valuable to me prior to the point of no return, so the value to me of a bet that pays off after that point is reached is approximately zero. In fact it's not just money that has this property. This means that no matter how good the odds are that you offer me, and even if you pay up front, I'm better off just taking out a low-interest loan instead.

As I understand it, you're arguing that if Eliezer wants $100 now that he won't need to pay back for 13 years, it would be cheaper to borrow it and pay loan interest than to make this bet with Bryan.

Using this loan calculator with the default 6% interest over 13 years compounded annually, Eliezer would owe $213.29 when his loan matured, rather than the $200 he'd owe Bryan if he loses the bet.
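The compound-interest arithmetic behind that figure, as a minimal sketch (the 6% rate and 13-year annually compounded term are the illustrative assumptions above, not the terms of any real loan):

```python
def loan_repayment(principal, annual_rate, years):
    """Total owed on a loan compounded once per year."""
    return principal * (1 + annual_rate) ** years

# Eliezer's hypothetical $100 loan at 6% over 13 years:
loan_repayment(100, 0.06, 13)  # ≈ 213.29
```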

Eliezer enjoys $100 now, whether he bets with Bryan or takes out a loan.

If Eliezer loses the bet, his outcome is approximately the same whether he took out the loan or bet with Bryan: he owes $200. If he wins the bet, his outcome is also the same whether he took out the loan or bet with Bryan. Bryan's outcomes are also identical whether he lends out $100 at 6% annually compounded interest for 13 years, or bets with Eliezer. He's out $100 now, makes back the same either way if he wins, and isn't worried about money anymore if he loses.

Edit: I no longer think you can adjust by inflating the odds. If Bryan offers less favorable betting odds than Eliezer could get at the bank, then Eliezer could just take out the biggest loan he can get and ignore Bryan's offer to bet. I no longer think you can extract information on people's confidence about the end of the world based on a bet, unless you assume they're both acting as if they didn't understand basic finance.

Edit 2: However, the limiting factor here is the opportunity cost for Eliezer. The opportunity to take any loan at all, including an equivalent bet with Bryan, should look attractive to Eliezer (if we ignore the dollar-vs.-utility objection). Hence, if he were unwilling to take a bet with Bryan, or wanted to keep it small, that would still be some evidence that he's not as confident in his claim as he's projecting. An apocalypticist who won't take out massive loans on the expectation that he'll never have to pay them back is not behaving in a manner consistent with his statements.

It seems like we could deal with this by inflating the odds. For example, if Eliezer bet Bryan at 9:1 odds, then Eliezer would get $100 now, and Bryan would make back $1,000 if he wins, a surplus of nearly $800 over the ~$213 he'd have gotten loaning his money out. Likewise, if Eliezer loses the bet, he would lose much more money paying back Bryan than he'd have lost taking out a 13-year loan.

So it seems like we can deal with this problem with an adjustment for opportunity cost. Eliezer and Bryan's bet is very close to a refusal to bet at all, since there is no difference in outcome for either party, whether they loan or bet and no matter who wins. The real stakes of such a bet are something like the odds beyond the adjustment for opportunity cost. In this case, if Eliezer were paid $100 up front by Bryan and had to pay back about $400 if he lost the bet in 13 years, that would seem to me to be actually equivalent to a bet at 2:1 odds.

In general, the formula to calculate the "true odds" of the bet would be:

([Payment if Eliezer loses the bet] - [Total repayment on an equivalent loan]) / [Up-front payment by Bryan]
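A minimal sketch of that formula, reading "equivalent loan" as an ordinary loan of the up-front amount at the same illustrative 6%/13-year terms used above:

```python
def true_odds(payment_if_lose, upfront, annual_rate, years):
    """Stakes of the bet beyond what an ordinary loan already provides,
    per dollar received up front."""
    # What the bettor would owe on a plain loan of the same size.
    loan_repayment = upfront * (1 + annual_rate) ** years
    return (payment_if_lose - loan_repayment) / upfront

# $100 up front, $400 owed on a loss, vs. a 6%/13-year loan:
true_odds(400, 100, 0.06, 13)  # ≈ 1.87, i.e. roughly 2:1
```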

comment by DirectedEvolution (AllAmericanBreakfast) · 2022-04-20T23:25:44.558Z · LW(p) · GW(p)

Besides, I don't need money right now anyway, at least to continue my research activities. I'd only be able to achieve significant amounts of extra good if I had quite a lot more money.

This points to a more fully general argument against using bets to operationalize a person's confidence in their claims. After all, no resource (status, time, or money) translates into personal utility in a linear fashion.

Even if resources and utility had a linear relationship, bets can be positive-sum for both participants, negative-sum, or mixed. Eliezer and Bryan might both "earn" more than a couple hundred bucks in reputation, or even dollars, just by being known to have made this bet. I can also imagine a counterpart for whom taking any bet with Bryan would be costly, perceived as unseemly. Bryan, by contrast, builds his reputation partly on being a betting man, and I suspect he enjoys the activity for its own sake. This should be taken into account when interpreting people's willingness or refusal to bet.

Small bets seem to still be useful as a first measure to undermine punditry and motivate precise and explicit reasoning about empirical likelihoods. Insisting that a person making a confident claim ought to back it with favorable odds for the person on the other side of the bet seems to also be a good anti-punditry measure.

Overall, considering these points has downgraded my belief in the value of betting as a way to establish people's true confidence levels. Refusal to back one's confident claims with a bet still doesn't look good, on the margin, but it's not devastating. We also shouldn't naively interpret betting odds as exact statements of the bettor's confidence levels.

From this perspective, one of the virtues of real-money prediction markets, as opposed to personal bets, is that they're relatively anonymous. This removes most of the concern that people's eagerness or unwillingness to bet stems from reputational concerns about the act of betting, rather than about the prospect of being right or wrong. I haven't worked out the math, but it also seems like averaging would tend to wash out the problems with differing utility functions, another point in favor of prediction markets.

Edit: This argument against extracting confidence information from bets is still, I think, correct. I'd now go further and say that you can't extract any information at all from a bet on the end of the world, unless you also assume the participants are acting as though they do not understand basic finance.

Toy model:

Imagine that, for you and your counterpart, making $1000 is worth 1 utility point to you, and losing $1000 is worth -2 utility points. Then you can work out your bet in terms of utility point odds, and then reconvert to dollars to enact the bet.

This becomes more complex if you and your counterpart assign different utilities to money. Let's make some simplifying assumptions. We'll ignore opportunity cost, assume your net worth doesn't change, and assume zero inflation.

Let's assume also that everybody gets utility points equal to the square root of their net worth in dollars.

Bryan has $10,000, 100 utility points. Eliezer has $100, 10 utility points. Eliezer wants to bet at 2:1 odds in utility points that the world will end in 2030. They choose 1 utility point as an upfront payment from Bryan to Eliezer, and 2 utility points as the payment from Eliezer to Bryan if the world doesn't end.

For Eliezer to get 1 utility point, he needs $21. But that would only cost Bryan about 0.1 utility points. 

If Eliezer loses the bet, he'd need to end up with a total of 8 utility points (his 11 points minus the 3-point repayment: the 1-point stake plus the 2-point loss), while Bryan would need to end up with 102 utility points (his original 100 plus the 2-point win). So Eliezer would need to give up $57 of his $121, while Bryan would need to gain about $425 (going from $9,979 to $10,404).

Because the dollar amounts implied by the same utility stakes diverge so sharply, Bryan and Eliezer can only place a bet if they value money about equally, and even then it's not clear that the money odds will reflect their actual utilities.
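The toy model's conversion between utility points and dollars can be sketched directly, under the square-root-utility assumption above:

```python
def dollars_for_utility_change(net_worth, utility_delta):
    """Dollars needed to move someone with the given net worth by
    utility_delta points, where utility = sqrt(net worth)."""
    current_utility = net_worth ** 0.5
    target_utility = current_utility + utility_delta
    return target_utility ** 2 - net_worth

dollars_for_utility_change(100, 1)         # Eliezer gains 1 point: +$21
dollars_for_utility_change(121, -3)        # Eliezer pays 3 points: -$57
dollars_for_utility_change(10000, -0.105)  # Bryan's ~0.1-point cost: ≈ -$21
```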

comment by DirectedEvolution (AllAmericanBreakfast) · 2022-04-20T22:39:56.928Z · LW(p) · GW(p)

I just emailed Bryan to point out the loan opportunity-cost issue. I'll update if I hear back.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-04-21T10:21:49.324Z · LW(p) · GW(p)

Thanks for looking into this!

comment by DirectedEvolution (AllAmericanBreakfast) · 2022-04-20T23:44:38.547Z · LW(p) · GW(p)

Bryan says he knows all this stuff, and these were just the best odds he could get. New interpretation: Bryan's having fun betting, and Eliezer's smart enough to know that if he loses, he just got a long-term loan from Bryan at a somewhat favorable rate.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-04-21T10:23:39.567Z · LW(p) · GW(p)

IIRC Eliezer made some joke about how Bryan's never lost a bet and maybe he can leverage this miraculous regularity to reduce AI risk :)

comment by DirectedEvolution (AllAmericanBreakfast) · 2022-04-20T21:43:14.127Z · LW(p) · GW(p)

That’s true of Caplan’s bet, but I think you could correct for this in the odds you set. It would be useful to explicitly distinguish between the correction for opportunity cost and interest from the odds of the bet itself.

comment by hold_my_fish · 2022-04-21T00:08:16.037Z · LW(p) · GW(p)

I'm more interested in schemes to bet reputation/status or labor.

I agree that reputation (I'd say specifically credibility) is the important thing to wager, but I think any public bet implicitly does that.

If, in 2030, there are still humans on Earth's surface, then the takeaway is "AI x-risk proponent Yudkowsky proved wrong in bet", and Yudkowsky loses credibility.  (See Ehrlich's famous bet for an example of this pattern.) The upside is raising concern about AI x-risk in the present (2022).

This is a good trade-off if you think increasing concern about AI x-risk in 2022-2029 is worth decreasing concern about AI x-risk in 2030+. Of course, if AGI turns out to be invented before 2030, the trade-off seems good. In the event that it's not, the trade-off seems bad.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-04-21T10:27:54.795Z · LW(p) · GW(p)

Well, in that case I'll have you know that about two years ago I made a bet for $1000 with Tobias Baumann. Resolution date was also 2030. Can send more details if you like.

And in general if anyone wants to send me money now, I'll promise to pay you back with interest in 2030. But please only do this if it doesn't create significant administrative overhead for me... in fact come to think of it this probably would, I don't see how this is worth it for me... hmm..

Anyone who would accord me higher status and respect if they saw me making such bets should totally make such bets with me. Unless I don't care about their respect... which is probably true for most people... but not most people on LW...

comment by Dagon · 2022-04-20T23:05:20.446Z · LW(p) · GW(p)

It's trivial to bet on the end of the world - it's called "taking a loan that you expect not to pay back".  You "win" if anything happens that prevents your debt being collected - extinction, civilization or monetary collapse, personal death or bankruptcy.   If you can find suckers to lend you at better rates because they're getting advertising value from it (like Bryan is), so much the better. 

The problem is that it's NOT trivial to bet on world survival without accepting pretty bad odds (going interest rates) and without locking up capital you'd rather use elsewhere. Edit: this isn't a big problem; survival is its own payout for most of us. You ALREADY have your entire existence on that side of the line.

comment by DirectedEvolution (AllAmericanBreakfast) · 2022-04-20T23:31:22.424Z · LW(p) · GW(p)

If dollars correspond 1:1 with utility, then I think you can simply adjust for the opportunity cost of lending money out as a loan. See my "toy model" in a comment response above. However, I don't think dollars correspond 1:1 with utility, and the act of betting has a potentially asymmetrical reputational effect regardless of whether the participants are right or wrong. We can still bet on the end of the world, but we shouldn't assume that the betting odds reflect the true confidence levels of the participants in the bet.

I still think bets are a piece of weak evidence about a person's true confidence in their claims. If Bryan refused to take this bet with Eliezer, I'd be somewhat more inclined to think he's blowing hot air. But I don't think that their 2:1 bet means that we can take Eliezer as putting 2:1 odds on the end of the world in 2030.

Edit: I take it back. Why would Eliezer take a higher-"interest" bet with Bryan when he could just take out ~unlimited money in a long-term loan at more favorable rates? I no longer think you can bet in a meaningful way on the end of the world.

comment by [deleted] · 2022-05-19T18:36:13.687Z · LW(p) · GW(p)