Pigeons outperform humans at the Monty Hall Dilemma [LINK]

post by lukeprog · 2011-07-19T20:40:03.534Z · LW · GW · Legacy · 13 comments

Humans overthink the problem, which lets their biases in. [Pop Article] [Journal Article]

comment by timtyler · 2011-07-20T08:37:16.147Z · LW(p) · GW(p)

The points given to the humans may not have been as rewarding as the food given to the pigeons.

comment by Manfred · 2011-07-19T23:02:06.247Z · LW(p) · GW(p)

EDIT: The following is wrong:

The paper is about presenting pigeons with the problem repeatedly and seeing what they get conditioned to do by the end. This was not done with humans.

EDIT AGAIN: In short, herp derp.

Replies from: JGWeissman
comment by JGWeissman · 2011-07-19T23:27:28.105Z · LW(p) · GW(p)

> We already know conditioning works - the surprise would be if it didn't work on humans, but they didn't try that.

They did try it on humans, and it worked, but not as well as on the pigeons.

Replies from: Manfred
comment by Manfred · 2011-07-19T23:49:24.628Z · LW(p) · GW(p)

Oh wow. I am just herp-derp-tastic this week.

comment by Dr_Manhattan · 2011-07-20T02:07:48.210Z · LW(p) · GW(p)

What would a pigeon do with a car anyway...

comment by Unnamed · 2011-07-22T21:09:47.613Z · LW(p) · GW(p)

It's not clear that this study has anything to do with the Monty Hall problem. The subjects weren't given instructions about how the chances of winning were determined (so it's not clear that the humans were modeling the task the way they model the standard Monty Hall problem). And when the experimenters reversed the probabilities (so that p(win|stay) = 2/3 and p(win|switch) = 1/3), they still got the same results, with pigeons (but not humans) learning to almost exclusively pick the better option (which shows that the results don't depend on the underlying logic of the Monty Hall problem). So the problem is not people's reluctance to switch, and it might not be related to people's troubles with the standard Monty Hall problem.

The studies look like an extension of probability-matching research, showing that with this particular experimental setup you get probability matching in humans but not in pigeons. But instead of systematically investigating what is going on there, they went for the sexy Monty Hall association.
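
To make the probability-matching point concrete, here's a quick sketch (hypothetical Python, not from the paper) comparing a maximizer, who always switches, against a probability matcher, who switches 2/3 of the time:

```python
import random

def monty_win(switch: bool) -> bool:
    """One round of the standard Monty Hall game. Switching wins
    exactly when the first pick missed the prize."""
    prize = random.randrange(3)
    pick = random.randrange(3)
    return (pick != prize) if switch else (pick == prize)

def win_rate(p_switch: float, trials: int = 100_000) -> float:
    """Win rate of a strategy that switches with probability p_switch."""
    wins = sum(monty_win(random.random() < p_switch) for _ in range(trials))
    return wins / trials

print(win_rate(1.0))    # maximizing (always switch): ~0.667
print(win_rate(2 / 3))  # probability matching: ~0.556 = (2/3)^2 + (1/3)^2
```

A matcher holding the correct probabilities still loses ground to a maximizer, which is roughly the gap the pigeons closed and the humans didn't.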

comment by DanielLC · 2011-07-20T00:19:13.023Z · LW(p) · GW(p)

Did the humans figure out it was the Monty Hall problem?

Replies from: JGWeissman
comment by JGWeissman · 2011-07-20T05:30:31.150Z · LW(p) · GW(p)

One human subject was removed from the experiment for having prior familiarity with the Monty Hall problem.

> One participant in Condition 1 was eliminated due to prior familiarity with the Monty Hall Dilemma, leaving 6 participants in each condition.

"Condition one" refers to a setup following the standard Monty Haul Problem. "Condition two" reversed the probabilities (by assigning the "correct" choice after the subject made their first choice).

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-07-22T18:22:14.626Z · LW(p) · GW(p)

> One human subject was removed from the experiment for having prior familiarity with the Monty Hall problem.

This is a commendable effort to make this a controlled experiment. OTOH, I have to wonder if the sort of person who would be likely to learn about and understand the Monty Hall problem would also, before learning about the problem, be better at solving it than a pigeon. That is, does knowing too much about the problem to be in the study correlate with other mental attributes that would enable a human to beat a pigeon (statistical interest and understanding, rationality, etc.)?

comment by NancyLebovitz · 2011-07-20T03:25:59.331Z · LW(p) · GW(p)

Would a super-intelligence have a process for invoking the right kind of stupid for various situations?

Replies from: Manfred
comment by Manfred · 2011-07-20T08:16:23.162Z · LW(p) · GW(p)

Human biases seem like extra content that would have to be added in to a decent reasoning system as extra work - or, if your AI is an evolved or trained neural network, it might generate its own, which would either not need correction or be too damned hard to correct.

comment by Thomas · 2011-07-20T21:42:47.616Z · LW(p) · GW(p)

A problem might be solved by a child, but for the wrong reason. An adult who knows better would have trouble with the task. A very skilled solver finds the solution with a correct methodology, not like a child who was just lucky.

Maybe it's the same situation here. Pigeons solve the MH problem with pure naivety. They observe the fact that changing their mind is somehow profitable. More grain - who cares exactly why and how.

Replies from: MixedNuts
comment by MixedNuts · 2011-07-21T06:48:06.326Z · LW(p) · GW(p)

It is unwise to criticise methodologies when they give the right results; wait for them to fail.

Surely we are smarter than pigeons. What's to prevent us from implementing their tactic? We start off from a reasoned answer, allocate some resources (here, attempts) to exploiting it and some to exploring other decisions (the more confusing the reasoning was, the more exploration), and see what actually helps. We shouldn't become worse than pigeons at changing our minds.
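
A minimal sketch of that tactic (hypothetical Python; the payoff function, explore rate, and attempt count are illustrative assumptions): seed an epsilon-greedy loop with the reasoned answer and let observed payoffs take over.

```python
import random

def monty_payoff(choice: str) -> int:
    """Assumed payoffs for the standard game: switching wins 2/3 of the time."""
    return int(random.random() < (2 / 3 if choice == "switch" else 1 / 3))

def pigeon_tactic(reasoned_answer: str, explore_rate: float = 0.2,
                  attempts: int = 300) -> str:
    """Start from a reasoned answer, spend some attempts exploring,
    and end up doing whatever actually helped."""
    options = ["stay", "switch"]
    # Seed the estimates with one phantom win for the reasoned answer.
    wins = {o: 1.0 if o == reasoned_answer else 0.0 for o in options}
    plays = {o: 1.0 for o in options}
    for _ in range(attempts):
        if random.random() < explore_rate:
            choice = random.choice(options)                          # explore
        else:
            choice = max(options, key=lambda o: wins[o] / plays[o])  # exploit
        plays[choice] += 1
        wins[choice] += monty_payoff(choice)
    return max(options, key=lambda o: wins[o] / plays[o])

print(pigeon_tactic("stay"))  # almost always converges on "switch"
```

Even starting from the wrong reasoned answer, the observed payoffs swamp the prior after a few dozen attempts.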

Also, don't assume children are stupid. You might be surprised. Or rather, if you do assume that, you might prevent kids from having to solve problems and becoming smarter, so you might not be surprised.