Hope Function

post by gwern · 2012-07-01T15:40:19.708Z · 8 comments

Yesterday I finished transcribing "The Ups and Downs of the Hope Function In a Fruitless Search", a statistics & psychology paper describing a simple probabilistic search problem and the sheer difficulty subjects have in producing the correct Bayesian answer. Besides providing a great yet simple illustration of the mind projection fallacy in action, the search problem maps onto a number of forecasting problems: the concrete version is looking through a desk for a letter that may not be there, but the same structure applies if we check every year for the creation of AI and ask how our beliefs should change over time - which turns out to defuse a common scoffing criticism of past technological forecasting. (This last application was why I went back and used it after I first read of it.)

The math is all simple - arithmetic and one application of Bayes's law - so I think all LWers can enjoy it, and it has amusing examples to analyze. I have also taken the trouble to annotate it with Wikipedia links, relevant materials, and many PDF links (some jailbroken just for this transcript). I hope everyone finds it as interesting as I did.
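To make the one application of Bayes's law concrete, here is a minimal sketch under assumed illustrative numbers rather than anything quoted from the transcript: the letter is in the desk with prior probability p, and given that, it is equally likely to be in any of the n drawers (the non-uniform case works the same way). After the first k drawers have been searched and found empty:

```latex
% Hope that the letter is in the desk at all, after k of the n drawers
% have been searched and found empty (assumed prior p that it is in the
% desk, uniform over drawers given that it is):
\[
P(\text{in desk} \mid k \text{ empty}) \;=\;
  \frac{p\,\frac{n-k}{n}}{p\,\frac{n-k}{n} + (1-p)} \;=\; \frac{p\,(n-k)}{n - p\,k}
\]

% Probability that the very next drawer holds it:
\[
P(\text{drawer } k+1 \mid k \text{ empty}) \;=\;
  \frac{p/n}{p\,\frac{n-k}{n} + (1-p)} \;=\; \frac{p}{n - p\,k}
\]
```

The first quantity falls as the fruitless search drags on while the second rises - the "ups and downs" of the title. With, say, p = 0.8 and n = 8, both come to 1/3 once seven drawers have been emptied.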

I thank John Salvatier for doing the ILL request which got me a scan of this book chapter.

8 comments


comment by shminux · 2012-07-01T18:15:16.711Z

I'm wondering if what the researchers observed was not what the test subjects think, but what they think they think. This is because they did not observe the behavior, but only asked the subjects how they would behave.

For example, at what odds would those who said that the probability of the bus arriving does not depend on the time remaining until midnight (and stays at 50/50) actually bet on the bus arriving at 11:59, if they had to place a bet? My suspicion is that it would not be 50/50.
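A minimal sketch of the Bayesian answer, under assumed numbers not taken from the paper: suppose the bus comes at all with prior probability 1/2, and if it comes, its arrival time is uniform over the final hour before midnight. Then at 11:59:

```latex
% Chance the bus still comes, given it has not arrived by 11:59, assuming
% a 1/2 prior that it comes at all and, if it comes, an arrival time
% uniform over the last hour before midnight:
\[
P(\text{comes} \mid \text{no bus by 11:59}) \;=\;
  \frac{\frac{1}{2}\cdot\frac{1}{60}}{\frac{1}{2}\cdot\frac{1}{60} + \frac{1}{2}}
  \;=\; \frac{1}{61} \;\approx\; 0.016
\]
```

That is about 1.6%, nowhere near 50/50 - which is the point of the proposed bet.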

Replies from: gwern, None
comment by gwern · 2012-07-02T01:38:48.173Z

One has to wonder at the ethics of such an experiment - when you know tons of the subjects won't get even close to the right answer and thus would accept unfair bets!

Replies from: shminux, jsalvatier
comment by shminux · 2012-07-02T01:50:32.594Z

You can certainly set it up in an ethical way. For example, tell the subject that they have to find something as fast as they can. It could be a set of drawers and a large bin nearby. One could deduce their (admittedly sunk-cost biased) intuitive probabilities from where they start looking and when/whether they switch from looking in the drawers to the bin. As described, this would not be easy or clean, but you can certainly modify the experiment to achieve both.

comment by jsalvatier · 2012-07-02T04:44:15.718Z

Can't you just pay them more for doing the experiment?

Replies from: gwern
comment by gwern · 2012-07-02T04:53:55.146Z

Then they might not have enough skin in the game? Or so one could argue.

comment by [deleted] · 2012-07-02T10:30:23.561Z

I don't think you'd have to go so far as to bet. If people actually experience waiting until 11:59, they'll probably realise that the bus isn't likely to come.

comment by Rhwawn · 2012-07-01T15:50:10.389Z

Upvoted; the math may not be hard, but the curves are still not obvious.

comment by jsalvatier · 2012-07-02T08:14:14.598Z

I didn't see enough graphs, so I put together a spreadsheet for computing the hope function given the likelihood of finding what you're looking for in each drawer (and a prior probability of finding it at all). I think it's right, but I'd appreciate someone sanity-checking it.

I found it nice to be able to change the probability distribution over the drawers and the prior probability, and see what that does to the long-term hope function.
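For anyone who would rather check this outside a spreadsheet, here is a minimal Python sketch of the same calculation; the function name and the example numbers are purely illustrative, not values taken from the paper or from the spreadsheet:

```python
# Hope function: P(the item is there at all | the first k places searched were empty).
# Inputs: an assumed prior that the item is findable at all, and the probability
# of each location in search order given that it is. Illustrative sketch only.

def hope_function(prior, location_probs):
    """Return a list h where h[k] is the posterior probability that the item
    exists, after the first k locations (in search order) turned up empty."""
    hopes = [prior]
    searched = 0.0  # probability mass (given existence) already searched
    for p_loc in location_probs:
        searched += p_loc
        remaining = max(0.0, 1.0 - searched)     # P(in an unsearched spot | exists)
        numerator = prior * remaining            # P(exists AND not yet searched)
        denominator = numerator + (1.0 - prior)  # ... plus P(does not exist)
        hopes.append(numerator / denominator)
    return hopes

if __name__ == "__main__":
    # Letter is in the desk with prior 0.8; 8 drawers, uniform vs. front-loaded.
    uniform = hope_function(0.8, [1 / 8] * 8)
    skewed = hope_function(0.8, [0.4, 0.2, 0.1, 0.1, 0.1, 0.05, 0.03, 0.02])
    for k, (u, s) in enumerate(zip(uniform, skewed)):
        print(f"after {k} empty drawers: uniform hope {u:.3f}, skewed hope {s:.3f}")
```

Changing the prior or the per-drawer distribution and re-running it gives the same kind of exploration of the long-term hope function that the spreadsheet allows.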