You seem to be treating the sub-problem "what would Player 2 believe if he got a move?" as if it were separate from, and uninformed by, Player 1's original choice. Assuming Player 1 is a utility-maximizer and Player 2 knows this, Player 2 immediately knows that if he gets a move, then Player 1 must have believed he could get greater utility from option B or C than from option A. Since option B can never offer greater utility than option A, a rational Player 1 could never have selected it in preference to A. That leaves only C as a possibility for Player 1 to have selected, so Player 2 will select X and deny any utility to Player 1. Hence neither option B nor C can ever produce more utility than option A if both players are rational.
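Here is a minimal backward-induction sketch of how I read the game, in Python. The payoffs are entirely hypothetical, since the thread never states them, but any numbers with the same ordering (B never beats A for Player 1, C beats A only if Player 2 picks Y, and Player 2 prefers X after C) yield the same conclusion:

```python
# Hypothetical payoffs as (player1, player2) -- only the ordering matters.
GAME = {
    "A": (2, 2),                      # guaranteed payoff, game ends at once
    "B": {"X": (1, 1), "Y": (0, 0)},  # B can never beat A for Player 1
    "C": {"X": (0, 2), "Y": (3, 1)},  # C beats A only if Player 2 picks Y
}

def player2_best_reply(subgame):
    """Player 2 maximizes his own payoff at his node."""
    return max(subgame.values(), key=lambda payoff: payoff[1])

def player1_choice(game):
    """Player 1 values each option by what it yields after Player 2's best reply."""
    values = {
        option: player2_best_reply(outcome) if isinstance(outcome, dict) else outcome
        for option, outcome in game.items()
    }
    return max(values, key=lambda opt: values[opt][0]), values

choice, values = player1_choice(GAME)
print(values)  # {'A': (2, 2), 'B': (1, 1), 'C': (0, 2)}
print(choice)  # 'A': after C, Player 2's best reply is X, leaving Player 1 nothing
```

With these numbers, Player 2's best reply after C is X (2 beats 1 for him), which leaves Player 1 with 0, so A dominates for Player 1, exactly as argued above.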
Exactly, but B is never preferable to A, so the only rational option is for Player 1 to have chosen A in the first place; any circumstance in which Player 2 has a move necessitates an irrational Player 1. The probability of a rational Player 1 ever selecting B to maximize utility is always 0.
Yes.
If Player 2 gets to move, then the only possible choice for a rational Player 1 to have made is C, because B cannot possibly maximize Player 1's utility. The probability of a rational Player 1 picking B is always 0, so the probability of him having picked C has to be 1. For Player 1, there is no rational reason to ever pick B, and picking C means that a rational Player 2 will always pick X, negating Player 1's utility. So a rational Player 1 must pick A.
Well, as I said, I'm not familiar with the mathematics or rules of game theory, so the game may well be unsolvable in a mathematical sense. It still seems to me, though, that Player 1 choosing A is the only rational choice. Having thought about it some more, I would state my reasoning as follows. For Player 1 there is NO POSSIBLE way to maximize utility by selecting B in a non-iterated game; it can never be a rational choice, and you have stated the player is rational. Choosing C can conceivably result in greater utility, so it can't be immediately discarded as a rational choice. If Player 2 finds himself with a move against a rational player, then the only possible choice that player could have made is C, so a rational Player 2 must choose X. Both players, being rational, can see this, and so Player 1 cannot possibly choose anything other than A without being irrational. Unless you can justify some scenario in which a rational player can maximize utility by choosing B, neither player can consider it a rational option.
To formulate this mathematically, you would need to determine the probability of making a false prediction and factor that into the odds, which I regret is beyond my ability.
Because for Player 1 to increase his payoff over picking A, the only option he can choose is C, and C only pays off if he accurately predicts, via some process of reasoning, that Player 2 will pick Y rather than X, which requires Player 2 to have made a false prediction about Player 1's behaviour. You have stated both players are rational, so I will assume they have equal powers of reason; in which case, if it is possible for Player 2 to make a false prediction based on his powers of reason, then Player 1 must be equally capable of making a wrong prediction, meaning that Player 1 should avoid the uncertainty and always go for the guaranteed payoff.
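To put a rough number on that, here is a sketch with the same hypothetical payoffs as above (A pays 2 for certain; C pays 3 if Player 2 picks Y and 0 if he picks X). If p is the probability that Player 1's prediction "Player 2 will pick Y" turns out to be wrong, C only beats the sure thing when (1 - p) * 3 > 2, i.e. when p < 1/3 with these numbers:

```python
A_PAYOFF, C_IF_Y, C_IF_X = 2, 3, 0  # hypothetical payoffs, as above

def expected_utility_of_C(p):
    """Expected payoff of C given probability p of a false prediction."""
    return (1 - p) * C_IF_Y + p * C_IF_X

for p in (0.0, 0.2, 1 / 3, 0.5, 1.0):
    better = "C" if expected_utility_of_C(p) > A_PAYOFF else "A"
    print(f"p = {p:.2f}: EU(C) = {expected_utility_of_C(p):.2f}, prefer {better}")
```

And if the backward-induction argument above is right, a rational Player 2 picks X with certainty, so p is effectively 1 and A wins regardless.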
Right, that makes sense, but wouldn't Player 1 simply realize that making an accurate forecast of Player 2's actions is functionally impossible, and still go with the certain payout of A?
I'm not overly familiar with game theory, so forgive me if I'm making some elementary mistake, but surely the only possible outcome is Player 1 always picking A. Either of the other options essentially amounts to Player 1 choosing a smaller payoff or none at all, which would violate the stated condition that both players are rational. A nonsensical game doesn't have to make sense.
Well, fair enough. Nutrition science was probably a poor choice of example on his part.
I don't think we actually disagree on anything; the only point I was making was that your reply to Lightwave, while accurate, wasn't actually addressing the point he made.
Obviously, if you can control for a confounding factor then it's not an issue. I was simply stressing that the nature of the human sciences makes it effectively impossible to control for all confounding factors, or even to be aware of many of them.
I'm Tom, a 23-year-old uni dropout (existential apathy is a killer) who majored in Bioscience, for what it's worth. I saw the name of this site while browsing tvtropes and was instantly intrigued, as "less wrong" has always been something of a mantra for me. I lurked for a while, sampled the Sequences, and was pleased to note that many of the points raised were ideas that had already occurred to me.
It's good to find a community devoted to reason, one that actually seems to think where most people are content not to. I'm looking forward to suckling at the collective wisdom of this community, and hopefully making a valuable contribution or two of my own.
I think he is suggesting, rather, that the number of confounding factors in psychology experiments is generally far higher than in the natural sciences, and that the presence of such uncontrollable factors leads to a generally higher error rate in the human sciences.