Betting with Mandatory Post-Mortem

post by abramdemski · 2020-06-24T20:04:34.177Z · LW · GW · 14 comments

Betting money is a useful way to make disagreements concrete and give both sides a real stake in being right.

However, I recently made a bet with both a monetary component and the stipulation that the loser write at least 500 words to a group chat about why they were wrong. I like this idea because it forces the loser to articulate what went wrong with their reasoning, rather than just paying up and moving on.

Furthermore, if the loser's write-up is anything short of honest praise for the winner's views, it may hint at a continuing disagreement between loser and winner, which can lead to another bet.

This idea feels similar to Ben's Share Models, Not Beliefs [? · GW]. Bets focus only on disagreements about probabilities, not on the underlying reasons for those disagreements. Declaring a winner and a loser conveys who was more correct, but that's only a single bit of information. Post-mortems give the underlying models a place to be brought to light.

A group of people who engaged in betting-with-post-mortems together would get a lot more feedback on their practical reasoning and where it can go wrong.

14 comments


comment by gwillen · 2020-06-25T00:26:31.177Z · LW(p) · GW(p)

I like this a lot. I would also like to hear a post-mortem from the winner in a lot of cases, although of course it's kind of silly to impose it. But I do sometimes see the winner and the loser agree that the bet turned out to be operationalized wrong -- that they didn't end up betting on the thing they thought they were betting on. I'd like to know whether the winner thinks they won the spirit of the bet, as well as the letter.

comment by Raemon · 2020-06-27T20:22:10.250Z · LW(p) · GW(p)

Curated.

This seems like quite an obvious idea in retrospect. I haven't yet thought through whether it's something you should always do when betting, but it certainly seems like a good tool to have in the rationalist-culture-toolkit.

comment by Raemon · 2020-06-24T22:53:58.821Z · LW(p) · GW(p)

Yeah, this seems great to me. 

It does seem like, a fair bit of the time, people might just say "well, I got unlucky, but my models are the same, and, I dunno, I guess I slightly adjusted the weights of my model?" The more interesting thing is when you make a bet where a negative outcome should force a large update.

Replies from: aram-baghdassarian-1, Davidmanheim
comment by MarcelloV (aram-baghdassarian-1) · 2020-06-26T01:59:44.314Z · LW(p) · GW(p)

Interesting; it's similar to making a calculated bet in poker when the odds are in your favor but you still lose. In that case your decision was still correct, as was the reasoning you used to arrive at it, so there wouldn't be much to write about. Perhaps in this case the loser could write about why they think the winner actually made the wrong decision to continue playing the hand.
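
A toy illustration (all numbers hypothetical, just to make the point concrete): a call can be correct in expectation even though it loses most of the time.

```python
# Hypothetical pot-odds situation: calling is +EV, yet you lose 65% of the time.
p_win = 0.35   # assumed chance your hand wins at showdown
pot = 90       # chips already in the pot
call = 30      # cost to call

ev = p_win * pot - (1 - p_win) * call   # expected chips gained by calling
print(f"EV of calling: {ev:+.1f} chips, losing {1 - p_win:.0%} of the time")
# EV of calling: +12.0 chips, losing 65% of the time
```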

comment by Davidmanheim · 2020-06-30T07:38:31.137Z · LW(p) · GW(p)

"The more interesting thing is when you make a bet where a negative outcome should force a large update."

I think that's what odds are for. If you're convinced (incorrectly) that something is very unlikely, you should be willing to give large odds. You can't really say "I thought this was 40% likely, and I happened to get it wrong" if you gave 5:1 odds initially.

(And on the other side, the person who took the bet should absolutely say they are making a small update towards the other model, because it's far weaker evidence for them.)
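
To make the arithmetic explicit (a minimal sketch; the 5:1 and 40% figures are from the comment above): laying k:1 odds against an event only breaks even if the event's probability is at most 1/(k+1), so giving 5:1 implies you thought it was roughly 17% likely, not 40%.

```python
def implied_probability(odds_against: float) -> float:
    """Probability at which laying odds_against:1 against an event breaks even."""
    return 1 / (odds_against + 1)

print(f"5:1 odds imply p <= {implied_probability(5):.1%}")  # 16.7%, far below 40%
```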

Replies from: abramdemski
comment by abramdemski · 2020-07-01T16:35:22.911Z · LW(p) · GW(p)

Sorta, but you might have 50:50 odds with a very large spread (both people very confident in their own side, e.g. one thinks 10% and the other 90%) or with a very small spread (say 45% vs. 55%). So it might be helpful to record that.

comment by Panashe Fundira (panashe-fundira) · 2020-06-28T22:35:35.794Z · LW(p) · GW(p)

Suppose you and I have two different models, and my model is less wrong than yours. Suppose that my model assigns a 40% probability to event X and your model assigns 60%; we disagree and bet, and event X happens. If I had an oracle over the true distribution of X, my write-up would consist of saying "this falls into the 40% of cases, as predicted by my model", which doesn't seem very useful. In the absence of an oracle, I would end up writing praise for, and updating towards, your more wrong model, which is obviously not what we want.


This approach might lead to over-updating on single bets. You'd need to record your bets, and their odds, over time to see how well calibrated you were; if your calibration over time is poor, then you should update your model. Perhaps we can weaken the suggestion in the post to writing a post-mortem on why you may have been wrong. Then, reflecting over multiple bets over time, you could try to tease out common patterns and deficits in your model-building.
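
A minimal sketch of that kind of record-keeping (hypothetical structure and data, not an existing tool): log each bet's stated probability and outcome, then compare the average stated probability to the observed frequency within coarse buckets.

```python
from dataclasses import dataclass

@dataclass
class Bet:
    claim: str
    p_stated: float   # probability you assigned to the event
    happened: bool    # how the bet resolved

def calibration(bets: list[Bet], buckets: int = 5) -> None:
    """Print stated vs. observed frequency for each probability bucket."""
    for i in range(buckets):
        lo, hi = i / buckets, (i + 1) / buckets
        inside = [b for b in bets if lo <= b.p_stated < hi]
        if not inside:
            continue
        freq = sum(b.happened for b in inside) / len(inside)
        mean_p = sum(b.p_stated for b in inside) / len(inside)
        print(f"[{lo:.0%}, {hi:.0%}): stated {mean_p:.0%}, "
              f"observed {freq:.0%}, n={len(inside)}")

# Hypothetical bet log:
calibration([Bet("X by June", 0.40, True),
             Bet("Y ships this year", 0.55, False),
             Bet("Z wins", 0.45, True)])
```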

Replies from: Davidmanheim, abramdemski, abramdemski
comment by Davidmanheim · 2020-06-30T07:35:29.350Z · LW(p) · GW(p)

"In the absence of an oracle, I would end up writing up praise for, and updating towards, your more wrong model, which is obviously not what we want."

Perhaps I'm missing something, but I think that's exactly what we want. It leads to eventual consistency / improved estimates of odds, which is all we can look for without oracles or in the presence of noise.

First, strength of priors will limit the size of the bettors' updates. Let's say we both used beta distributions and had weak beliefs: your prior was Beta(4,6) and mine was Beta(6,4). These get updated to Beta(5,6) and Beta(7,4). That sounds fine -- you weren't very sure initially, and you still won't over-correct much. If the priors are stronger, say Beta(12,18) and Beta(18,12), the updates are smaller as well, as they should be given our clearer world models and lesser willingness to abandon them on weak evidence.
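
A quick check of those numbers (plain Python, pseudo-counts as in the comment above): observing the event adds one success to the Beta's first parameter, and the stronger the prior, the smaller the shift in the mean.

```python
def beta_mean(a: float, b: float) -> float:
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

for label, (a, b) in {
    "weak 40% prior":   (4, 6),
    "weak 60% prior":   (6, 4),
    "strong 40% prior": (12, 18),
    "strong 60% prior": (18, 12),
}.items():
    print(f"{label}: Beta({a},{b}) mean {beta_mean(a, b):.3f}"
          f" -> Beta({a + 1},{b}) mean {beta_mean(a + 1, b):.3f}")
# weak priors move 0.400 -> 0.455 and 0.600 -> 0.636;
# strong priors move only 0.400 -> 0.419 and 0.600 -> 0.613.
```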

Second, we can look at the outside observer's ability to update. If the expectation is 40% vs. 60%, unless there are very strong priors, I would assume neither side is interested in making huge bets, or giving large odds - that is, if this bet would happen at all, given transaction costs, etc. This should implicitly limit the size of the update other people make from such bets.

comment by abramdemski · 2020-06-29T15:41:37.817Z · LW(p) · GW(p)

Another idea on this: both sides could do pre-mortems, "if I lose, ...". They could look back at this when doing post-mortems. Obviously this increases the effort involved.

Replies from: aram-baghdassarian-1, panashe-fundira
comment by MarcelloV (aram-baghdassarian-1) · 2020-07-02T18:50:00.692Z · LW(p) · GW(p)

Seems similar to Murphyjitsu [LW · GW]

Replies from: abramdemski
comment by abramdemski · 2020-07-02T18:54:46.213Z · LW(p) · GW(p)

Yeah, pre-mortem is another name for pre-hindsight, and murphyjitsu is just the idea of alternating between making pre-mortems and fixing your plans to prevent whatever problem you envisioned in the pre-mortem.

comment by Panashe Fundira (panashe-fundira) · 2020-07-02T13:09:43.612Z · LW(p) · GW(p)

I really like the idea of doing a pre-mortem here.

comment by abramdemski · 2020-06-29T01:05:15.404Z · LW(p) · GW(p)

Thinking about this makes me think people should record not just their bets, but the probabilities. If I think the probability is 1% and you think it's 99%, then one of us is going to make a fairly big update. If you think it's 60% and I think it's 50%, yeah, not so much. As a rough rule of thumb, anyway. (Obviously I could be super confident in a 1% estimate in a similar way to how you describe being super confident in a 40%.)
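
One rough way to quantify this (an assumption-laden sketch: it treats each bettor's single stated probability as their model's full likelihood for the event): when the event happens, the Bayes factor between the two models is just the ratio of the stated probabilities.

```python
import math

def bayes_factor(p_winner: float, p_loser: float) -> float:
    """Likelihood ratio favoring the winner's model, given the event occurred."""
    return p_winner / p_loser

for p_loser, p_winner in [(0.01, 0.99), (0.50, 0.60)]:
    bf = bayes_factor(p_winner, p_loser)
    print(f"{p_loser:.0%} vs {p_winner:.0%}: Bayes factor {bf:.1f} "
          f"({10 * math.log10(bf):.0f} decibans)")
# 1% vs 99%: Bayes factor 99.0 (~20 decibans) -- a big update
# 50% vs 60%: Bayes factor 1.2 (~1 deciban) -- barely anything
```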

But OTOH I think in many cases, by the time the bet is resolved, there will also be a lot of other relevant evidence bearing on the questions related to the bet. So the warranted update will actually be much larger than would be justified by just the one piece of information. In other words, if two Bayesians have different world-models and make a bet about something far in the future, by the time the actual bet is resolved they'll often have seen much more decisive evidence deciding between the two models (not necessarily in the same direction as the bet gets decided).

Still, yeah, I agree with your concern.

comment by simbad · 2020-07-04T18:22:57.231Z · LW(p) · GW(p)

Like this idea. You can see this approach in hedge funds as well: an analyst makes a call on how and why a stock will perform, and places a corresponding monetary position. The best analysts take the extra step of conducting a post-mortem if they lose money, OR if they make money but not for the reasons they had originally outlined.