Lawful Uncertainty
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-11-10T21:06:32.000Z · LW · GW · Legacy · 57 comments
In Rational Choice in an Uncertain World, Robyn Dawes describes an experiment by Tversky:[1]
Many psychological experiments were conducted in the late 1950s and early 1960s in which subjects were asked to predict the outcome of an event that had a random component but yet had base-rate predictability—for example, subjects were asked to predict whether the next card the experimenter turned over would be red or blue in a context in which 70% of the cards were blue, but in which the sequence of red and blue cards was totally random.
In such a situation, the strategy that will yield the highest proportion of success is to predict the more common event. For example, if 70% of the cards are blue, then predicting blue on every trial yields a 70% success rate.
What subjects tended to do instead, however, was match probabilities—that is, predict the more probable event with the relative frequency with which it occurred. For example, subjects tended to predict 70% of the time that the blue card would occur and 30% of the time that the red card would occur. Such a strategy yields a 58% success rate, because the subjects are correct 70% of the time when the blue card occurs (which happens with probability .70) and 30% of the time when the red card occurs (which happens with probability .30); (.70×.70) + (.30×.30) = .58.
In fact, subjects predict the more frequent event with a slightly higher probability than that with which it occurs, but do not come close to predicting its occurrence 100% of the time, even when they are paid for the accuracy of their predictions . . . For example, subjects who were paid a nickel for each correct prediction over a thousand trials . . . predicted [the more common event] 76% of the time.
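A quick simulation makes the quoted numbers concrete (a minimal sketch in Python, assuming independent draws with a fixed 70% chance of blue; the trial count is arbitrary):

```python
import random

def success_rate(p_guess_blue, p_blue=0.7, trials=100_000):
    """Fraction of correct predictions when guessing 'blue' with
    probability p_guess_blue against draws that are independently
    blue with probability p_blue."""
    correct = 0
    for _ in range(trials):
        card_is_blue = random.random() < p_blue
        guess_is_blue = random.random() < p_guess_blue
        correct += (card_is_blue == guess_is_blue)
    return correct / trials

print(success_rate(1.0))   # always bet blue:      ~0.70
print(success_rate(0.7))   # probability matching: ~0.58
print(success_rate(0.76))  # the paid subjects:    ~0.60
```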
Do not think that this experiment is about a minor flaw in gambling strategies. It compactly illustrates the most important idea in all of rationality.
Subjects just keep guessing red, as if they think they have some way of predicting the random sequence. Of this experiment Dawes goes on to say, “Despite feedback through a thousand trials, subjects cannot bring themselves to believe that the situation is one in which they cannot predict.”
But the error must go deeper than that. Even if subjects think they’ve come up with a hypothesis, they don’t have to actually bet on that prediction in order to test their hypothesis. They can say, “Now if this hypothesis is correct, the next card will be red”—and then just bet on blue. They can pick blue each time, accumulating as many nickels as they can, while mentally noting their private guesses for any patterns they thought they spotted. If their predictions come out right, then they can switch to the newly discovered sequence.
I wouldn’t fault a subject for continuing to invent hypotheses—how could they know the sequence is truly beyond their ability to predict? But I would fault a subject for betting on the guesses, when this wasn’t necessary to gather information, and literally hundreds of earlier guesses had been disconfirmed.
Can even a human be that overconfident?
I would suspect that something simpler is going on—that the all-blue strategy just didn’t occur to the subjects.
People see a mix of mostly blue cards with some red, and suppose that the optimal betting strategy must be a mix of mostly blue cards with some red.
It is a counterintuitive idea that, given incomplete information, the optimal betting strategy does not resemble a typical sequence of cards.
It is a counterintuitive idea that the optimal strategy is to behave lawfully, even in an environment that has random elements.
It seems like your behavior ought to be unpredictable, just like the environment—but no! A random key does not open a random lock just because they are “both random.”
You don’t fight fire with fire; you fight fire with water. But this thought involves an extra step, a new concept not directly activated by the problem statement, and so it’s not the first idea that comes to mind.
In the dilemma of the blue and red cards, our partial knowledge tells us—on each and every round—that the best bet is blue. This advice of our partial knowledge is the same on every single round. If 30% of the time we go against our partial knowledge and bet on red instead, then we will do worse thereby—because now we’re being outright stupid, betting on what we know is the less probable outcome.
If you bet on red every round, you would do as badly as you could possibly do; you would be 100% stupid. If you bet on red 30% of the time, faced with 30% red cards, then you’re making yourself 30% stupid.
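To spell out the arithmetic: if you bet blue with probability $q$ on each round, your expected success rate is

$$S(q) = 0.7\,q + 0.3\,(1 - q) = 0.3 + 0.4\,q,$$

which increases linearly in $q$. It is maximized at $q = 1$ (always blue, $S = 0.7$) and minimized at $q = 0$ (always red, $S = 0.3$); probability matching at $q = 0.7$ gives $S = 0.58$, as in the experiment.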
When your knowledge is incomplete—meaning that the world will seem to you to have an element of randomness—randomizing your actions doesn’t solve the problem. Randomizing your actions takes you further from the target, not closer. In a world already foggy, throwing away your intelligence just makes things worse.
It is a counterintuitive idea that the optimal strategy can be to think lawfully, even under conditions of uncertainty.
And so there are not many rationalists, for most who perceive a chaotic world will try to fight chaos with chaos. You have to take an extra step, and think of something that doesn’t pop right into your mind, in order to imagine fighting fire with something that is not itself fire.
You have heard the unenlightened ones say, “Rationality works fine for dealing with rational people, but the world isn’t rational.” But faced with an irrational opponent, throwing away your own reason is not going to help you. There are lawful forms of thought that still generate the best response, even when faced with an opponent who breaks those laws. Decision theory does not burst into flames and die when faced with an opponent who disobeys decision theory.
This is no more obvious than the idea of betting all blue, faced with a sequence of both blue and red cards. But each bet that you make on red is an expected loss, and so too with every departure from the Way in your own thinking.
How many Star Trek episodes are thus refuted? How many theories of AI?
[1] Amos Tversky and Ward Edwards, "Information versus Reward in Binary Choices," Journal of Experimental Psychology 71, no. 5 (1966): 680–683. See also Yaacov Schul and Ruth Mayo, "Searching for Certainty in an Uncertain World: The Difficulty of Giving Up the Experiential for the Rational Mode of Thinking," Journal of Behavioral Decision Making 16, no. 2 (2003): 93–106.
57 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Cyan2 · 2008-11-10T21:27:52.000Z · LW(p) · GW(p)
IIRC, there exist minimax strategies in some games that are stochastic. There are some games in which it is in fact best to fight randomness with randomness.
comment by TomStocker · 2015-05-12T09:28:39.526Z · LW(p) · GW(p)
Only when the opponent has a brain.
comment by EngineerofScience · 2015-07-26T00:05:52.748Z · LW(p) · GW(p)
Going into game theory: if an opponent makes a truly random decision between two numbers, and you win if you guess which number they picked, that would be a time when you should fight randomness with randomness. There aren't many other situations where randomness should be fought with randomness, but in situations like that one, it is the right move.
comment by David_Bolin · 2015-07-26T03:30:03.454Z · LW(p) · GW(p)
If the other player is choosing randomly between two numbers, you will have a 50% chance of guessing his choice correctly with any strategy whatsoever. It doesn't matter whether your strategy is random or not; you can choose the first number every time and you will still have exactly a 50% chance of getting it.
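A quick check of this claim (a sketch in Python; the particular strategies tested are arbitrary choices of mine):

```python
import random

def win_rate(strategy, trials=100_000):
    """Fraction of rounds in which `strategy` matches an opponent
    choosing uniformly at random between 0 and 1."""
    wins = 0
    for i in range(trials):
        opponent = random.randint(0, 1)  # truly random opponent
        wins += (strategy(i) == opponent)
    return wins / trials

print(win_rate(lambda i: 0))                     # constant guess: ~0.50
print(win_rate(lambda i: random.randint(0, 1)))  # random guess:   ~0.50
print(win_rate(lambda i: i % 2))                 # alternating:    ~0.50
```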
comment by EngineerofScience · 2015-07-26T11:04:17.655Z · LW(p) · GW(p)
But you want to be purely unpredictable, or the opponent (if they are a super-AI) would gradually figure out your strategy and gain a slightly better chance. A human (without tools) can't actually generate a random number. If your opponent was guessing a number that is not completely random (a "random" number in their head), then you want your choice to be random. I should have said: if the opponent chooses a non-completely-random number, then you should randomly determine your number.
comment by Jiro · 2015-07-26T17:08:43.488Z · LW(p) · GW(p)
You can generate a random number in your head by generating several numbers unreliably and taking the sum mod X.
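A sketch of why the trick works (the 70/30 bias on each "mental" draw is a hypothetical of mine; any independent, not-too-extreme biases wash out as more sources are summed):

```python
import random
from collections import Counter

def mental_coin(n_sources=10):
    """Sum several biased 'mental' bits and take the result mod 2;
    the combined parity is far closer to uniform than any one source."""
    bits = [0 if random.random() < 0.7 else 1 for _ in range(n_sources)]
    return sum(bits) % 2

print(Counter(mental_coin() for _ in range(100_000)))
# Each source is 70/30, but the mod-2 sum comes out ~50/50:
# the residual bias is (0.7 - 0.3)**10 / 2, about 0.005%.
```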
comment by EngineerofScience · 2015-07-29T21:03:04.899Z · LW(p) · GW(p)
That works for some purposes, but it is not truly random, so it would be better to use a die or some other more random source if available. Of course, be realistic about getting random numbers: if the situation calls for a quick decision, that trick works; if you have dice in your pocket, go ahead and pull them out.
comment by Psy-Kosh · 2008-11-10T21:30:28.000Z · LW(p) · GW(p)
There are caveats though. For instance, if the opponent is an actual opponent, i.e., something that in some way models the world and so on.
If so, then at times it may be desirable to reduce the accuracy of the opponent's model of the world, or at least that part of it that consists of you. So you may want to then have some aspect of your actions be algorithmically more complex than your opponent can computationally deal with, so some form of randomness may be of use.
comment by Peter_de_Blanc · 2008-11-10T21:41:28.000Z · LW(p) · GW(p)
Cyan:
I think you might be conflating ignorance by oneself of one's own future actions with ignorance by an opponent of one's future actions, but I'd like to see your example before I judge you.
comment by denis_bider2 · 2008-11-10T21:42:22.000Z · LW(p) · GW(p)
Eliezer: how does this square with Robin's recent What Belief Conformity?
He quoted:
"physicists and mathematicians perform best in terms of "rationality" (i.e. performance according to theory) and psychologists worst. However, since "rational" behavior is only profitable when other subjects also behave rationally ... the ranking in terms of profits is just the opposite: psychologists are best and physicists are worst."
comment by AnneC · 2008-11-10T22:48:02.000Z · LW(p) · GW(p)
OK, upon reading the experimental premise (I blocked out the rest of the text below it so it wouldn't influence me), the very first idea, the idea that seemed most obvious to me, was to bet on blue every time.
I basically figured that if I had 10 cards, and 7 of them were blue, and I had to guess the color of all the cards at once (rather than being given them sequentially, which would give me the opportunity to take notes and keep track of how many of each had already appeared), then the most reliable way of achieving the most "hits" would be to predict that each card would be blue. That way I'd be guaranteed a correct answer as to the color of 7 of the 10 cards.
At the same time I'd know I'd be wrong about 3 of the cards going into the experiment, but this wouldn't concern me if my goal was to maximize correct answers, and I was given only the information that 70% of the cards were blue while 30% were red, and that they were arranged in a random order. Short of moving outside the conditions of the experiment (and trying to, for instance, peek at the cards), there simply isn't any path to information about what's on them.
Now, if it were a matter of, "Guess the colors of all the cards exactly or we'll shoot you", I'd be motivated to try and find ways outside the experimental constraints -- as I'm sure most people would be. It would be interesting, though, to test people's conviction that their self-made algorithms were valid by proposing that scenario. Obviously not actually threatening people, but asking them to re-evaluate their confidence in light of the hypothetical scenario. I'd be curious to know if most people would be looking for ways to obtain more information (i.e., "cheat" per the experiment), or whether they'd stick to their theories.
comment by Alexei_Turchin · 2008-11-10T22:49:09.000Z · LW(p) · GW(p)
"For example, subjects who were paid a nickel for each correct prediction over a thousand trials... predicted [the more common event] 76% of the time."
How could that be? Psychic powers?
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-11-10T23:13:32.000Z · LW(p) · GW(p)
Alexei, they predicted blue, that's not the same as correctly predicting blue.
Denis, that leapt out at me as well - whoever wrote that sentence isn't defining "rational" the same way I do.
Cyan, that'll be covered in a future post. Certainly in situations of opposition you will want to take actions that are not predictable to your opponent, and so you'll want to sample something as unpredictable as possible according to a known, game-theoretically determined probability distribution. A quantum device is fine for this, but realistically, so are thermal uncertainty and strong cryptographic random-number generators. To look at it another way, what you're doing in this situation is not so much being clever yourself, but rather reducing the optimization power of your opponent - certainly chaos and noise can act as an antidote to intelligence.
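For a concrete instance of sampling from a known, game-theoretically determined distribution (a sketch using rock-paper-scissors, where the minimax-optimal mixture is uniform, with Python's cryptographic `secrets` module as the unpredictable source):

```python
import secrets

MOVES = ("rock", "paper", "scissors")

def equilibrium_move():
    """Play the minimax mixed strategy: each move with probability 1/3,
    sampled from a cryptographically strong RNG so the opponent
    cannot model the sequence."""
    return MOVES[secrets.randbelow(3)]

print(equilibrium_move())
```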
comment by Felix · 2008-11-10T23:15:27.000Z · LW(p) · GW(p)
"When your knowledge is incomplete - meaning that the world will seem to you to have an element of randomness - randomizing your actions doesn't solve the problem."
Ants don't agree. Take away their food. They'll go into random search mode.
As far as that experiment is concerned, it seems that AnneC hits the point: How was it framed? Were the subjects led to believe that they were searching for a pattern? Or were they told the pattern? Wild guess: the former.
comment by A_Pickup_Artist · 2008-11-11T00:11:29.000Z · LW(p) · GW(p)
Great post! I think this answers one common debate in the Pickup Community: routines vs. no routines game.
In case you don't know what I'm talking about:
When approaching lots of women, is it better to engage in spontaneous conversation with each and every one, or to always use the same, tried and true material (canned routines)? Routines win!
comment by Tim_Tyler · 2008-11-11T00:12:02.000Z · LW(p) · GW(p)
Even in some cases where you might think that the best game-theoretic strategy involves randomness, the actual best strategy is to play non-randomly - e.g. see Derren Brown - Paper, Scissors, Stone.
comment by Will_Pearson · 2008-11-11T00:25:35.000Z · LW(p) · GW(p)
Chicken is a game where it is best to be random. You are random because you don't want to be predictable and thus exploitable.
comment by christopherj · 2014-04-08T16:01:19.201Z · LW(p) · GW(p)
If you're predictably committed to winning the game of chicken, then you have essentially already won, at least against a rational opponent. Though you'd have to wonder how you wound up with a rational opponent if the game is chicken.
comment by Cyan2 · 2008-11-11T00:37:44.000Z · LW(p) · GW(p)
Peter de Blanc, I don't have an example, just a vague memory of reading about minimax-optimal decision rules in J. O. Berger's Statistical Decision Theory and Bayesian Analysis. (That same text notes that minimax rules are Bayes rules under the assumption that your opponent is out to get you.)
comment by billswift · 2008-11-11T01:09:21.000Z · LW(p) · GW(p)
"When your knowledge is incomplete - meaning that the world will seem to you to have an element of randomness - randomizing your actions doesn't solve the problem
"Ants don't agree. Take away their food. They'll go in to random search mode."
It depends on your degree of ignorance. When totally ignorant, try anything; at the least you'll learn something that doesn't work, and watching how it fails should teach you more. Otherwise, you should use your best knowledge, without random input. Random search works for ants, more or less, but for anything with more intelligence and knowledge, using the intelligence and knowledge will work much better. Even ants only use random search when they need to.
Chicken is not a good example of a random game. The best strategy is to be a bloody minded SOB, if you can't convince your opponent that you are actually crazy. This is more or less what I got from Schelling's essays in "Strategy of Conflict".
comment by Nominull3 · 2008-11-11T02:06:32.000Z · LW(p) · GW(p)
Putting randomness in your algorithms is only useful when there are second-order effects, when somehow reality changes based on the content of your algorithm in some way other than you executing your algorithm. We see this in Rock-Paper-Scissors, where you use randomness to keep your opponent from predicting your moves based on learning your algorithm.
Barring these second order effects, it should be plain that randomness can't be the best strategy, or at least that there's a non-random strategy that's just as good. By adding randomness to your algorithm, you spread its behaviors out over a particular distribution, and there must be at least one point in that distribution whose expected value is at least as high as the average expected value of the distribution.
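In symbols (restating the point above, assuming the environment's payoffs do not depend on the algorithm itself): if a mixed strategy plays pure strategy $s_i$ with probability $p_i$, then

$$\mathbb{E}[U(\text{mixed})] = \sum_i p_i\, \mathbb{E}[U(s_i)] \le \max_i \mathbb{E}[U(s_i)],$$

so at least one pure strategy in the support does at least as well as the mixture.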
comment by Mike_Plotz · 2008-11-11T02:09:15.000Z · LW(p) · GW(p)
The assumption behind this post, as AnneC touched on, is that higher scores are linearly correlated with what is perceived as a good outcome. Guessing blue every time will guarantee a worst-case and best-case outcome of 70%; as such, guessing randomly becomes a much better strategy if the player puts a significant premium on scoring, say, 95% or higher. Whether this valuation is rationally justifiable is another question entirely (though an important one).
The same assumption lies behind A Pickup Artist's post. It all depends on your objective: if you want to sleep with as many women as possible, routines are probably the best bet, though likely it depends on your personality. If instead you are looking for deep, meaningful relationships with women, routines may have a place, but natural game will take you further.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-11-11T02:45:04.000Z · LW(p) · GW(p)
Nominull: By adding randomness to your algorithm, you spread its behaviors out over a particular distribution, and there must be at least one point in that distribution whose expected value is at least as high as the average expected value of the distribution.
Well said! This is an obvious point, but I've never heard it put quite so sharply before.
comment by A_Pickup_Artist · 2008-11-11T03:43:24.000Z · LW(p) · GW(p)
@Mike Plotz:
I guess you missed the whole point of Eliezer's post. What you said is exactly wrong for the reasons stated!
Btw, routines are still the best strategy even if you want to have meaningful relationships. The routines are there to cover the first 10-20 minutes of a cold approach (where you and the woman are strangers to each other). After that you should have mutual attraction in most cases (that's where the randomness comes in, and the importance of having a systematic winning strategy; see the post). Then it's time to drop the routines and start having deeper conversations. It's called the comfort phase.
Btw, you shouldn't use routines in warm approach (where the woman knows you because she is in your social circle or was introduced through friends). That's a different game.
The thing with cold approach is that you only have a limited timeframe (minutes) to create a positive impression. Think of meeting a woman in a nightclub or walking in the mall. You want to optimize this initial interaction to guarantee a chance to see her again. Of 100 women you approach, how many will find you attractive based on the personality you manage to convey in those first few minutes? A good pickup artist can have a success rate of 10% or higher. That's the art.
comment by Matt5 · 2008-11-11T04:57:33.000Z · LW(p) · GW(p)
A Pickup Artist,
(kind of off topic)
I am also a PUA and have thought about this debate for a while. I think that successful routines can expire after some point. If a girl has heard your routine before, she is likely to turn you down. The best routines are ones that abide by the LOAs and where the target doesn't know the routine. This unpredictable factor in your routine demonstrates romance, intelligence, spontaneity and other alpha male qualities.
Buying a potential female a drink at the bar is a perfect example of an expired method. The Buy You a Drink routine theoretically makes sense (it shows economic status), as it abides by the LOAs. The problem is that this method is too widely used and exposes the PUA or AFC as predictable, unoriginal, unromantic, and bad-intentioned. This failure should emphasize the importance of using personalized and original methods.
comment by Mike_Plotz · 2008-11-11T07:56:28.000Z · LW(p) · GW(p)
@A Pickup Artist
I got the point of Eliezer's post, and I don't see why I'm wrong. Could you tell me more specifically than "for the reasons stated" why I'm wrong? And while you're at it, explain to me your optimal strategy in AnneC's variation of the game (you're shot if you get one wrong), assuming you can't effectively cheat.
(Incidentally, and somewhat off-topic, there's a beautiful puzzle with a similar setup — see "Names in Boxes" on the first page of http://math.dartmouth.edu/~pw/solutions.pdf. The solutions are included, but try to figure it out for yourself. It's worth it.)
I'll concede the point on routines. Since so much of human interaction is scripted anyway (where are you from? what do you do? etc.), the difference between using canned material and not is hard to pin down. I'd love to see a study done on the subject, but it would be devilishly difficult to design a good one.
comment by Will_Pearson · 2008-11-11T09:11:03.000Z · LW(p) · GW(p)
"Chicken is not a good example of a random game. The best strategy is to be a bloody minded SOB, if you can't convince your opponent that you are actually crazy. This is more or less what I got from Schelling's essays in 'Strategy of Conflict'."
And if you both do that, you both crash and die. It is not the best response to itself, so it can't be a Nash equilibrium.
comment by Stuart_Armstrong · 2008-11-11T11:00:40.000Z · LW(p) · GW(p)
It is a counterintuitive idea that the optimal strategy can be to think lawfully, even under conditions of uncertainty.
Nicely put. I can think of examples where you should think chaotically in order to solve a chaotic problem - but they're very convoluted, unnatural examples.
One thing still niggles me: the fact that rationalists should win. Looking around at successful people, I see more rationalists than the average - but not much more. Our society is noisy, yes, but rationalists should still win much more often than they do. Rationalists seem more skilled at avoiding losing than at actually winning.
comment by NancyLebovitz · 2008-11-11T11:39:54.000Z · LW(p) · GW(p)
I think you're right that the subjects in the experiment simply don't think of the 100% blue strategy, and I wonder if there's any way to find out why it's so unaesthetic that it doesn't cross people's minds.
My tentative theory is that conformity is a good strategy for dealing with people if you don't have a definite reason for doing something else, and that the subjects are modeling the universe (or at least the random sequence) as conscious.
Introspecting, I think that choosing 100% blue also feels like choosing to be wrong some of the time, so some loss aversion kicks in, while doing a 70/30 strategy feels like trying to be right every time.
"Even a human" might just be a fair insult.
comment by a_soulless_automaton · 2008-11-11T12:54:19.000Z · LW(p) · GW(p)
@Stuart Armstrong: First of all, the strongest influence on future success in society is whether or not one is already successful (most easily accomplished by having successful parents). One would also expect some percentage of non-rationalists to succeed anyway, simply through chance. Assuming that non-rationalists substantially outnumber rationalists, it isn't terribly surprising to see more of the former among successful people. Rather than looking at how many successful people are rationalists, it would be more informative to look at rational people and see how many become more successful over their lives compared to average. Or, you could try to estimate the likelihoods of being rational, being successful, and being rational given success, then apply Bayes' law...
Also, if rationalists seem more skilled at avoiding failure than at winning, perhaps that merely suggests that failure is more predictable than success?
comment by billswift · 2008-11-11T16:06:56.000Z · LW(p) · GW(p)
"And if you both do that, you both crash and die. It is not the best response to itself, so can't be seen to be a Nash equilibrium."
Of course it's not. I was mainly objecting to the earlier comment that it was an example of a random game. It is a psychological game - ideally, you want to convince your opponent before the game starts that you'll drive right into him if you need to in order to win.
comment by Will_Pearson · 2008-11-11T18:49:48.000Z · LW(p) · GW(p)
We are talking past each other somewhat. I'm talking about the theoretical one shot/no communication game theory version of chicken. This has a mixed strategy as an equilibrium. You are talking about the testosterone fueled young lad car version. Which doesn't have a nice mathematical analysis, or best strategy as such.
comment by Caledonian2 · 2008-11-11T18:55:08.000Z · LW(p) · GW(p)
Foraging animals make the same 'mistake': given two territories in which to forage, one of which has a much more plentiful resource and is far more likely to reward an investment of effort and time with a payoff, the obvious strategy is to forage only in the richer territory; instead, animals split their time between the two spaces in proportion to the relative probability of a successful return.
In other words, if one territory is twice as likely to produce food through foraging as the other, animals spend twice as much time there: 2/3 of their time in the richer territory, 1/3 in the poorer. Similar patterns hold when there are more than two foraging territories involved.
Although this results in a short-term reduction in food acquisition, it's been shown that this strategy minimizes the chances of exploiting the resource to local extinction, and ensures that the sudden loss of one territory for some reason (blight of the resource, natural disaster, predation threats, etc.) doesn't result in a total inability to find food.
The strategy is highly adaptive in its original context. The problem with humans is that we retain our evolved, adaptive behaviors long after the context changes to make them non- or even mal-adaptive.
comment by michael_e_sullivan · 2008-11-11T21:38:45.000Z · LW(p) · GW(p)
Mike Plotz: I got the point of Eliezer's post, and I don't see why I'm wrong. Could you tell me more specifically than "for the reasons stated" why I'm wrong? And while you're at it, explain to me your optimal strategy in AnneC's variation of the game (you're shot if you get one wrong), assuming you can't effectively cheat.
In some games, your kind of strategy might work, but in this one it doesn't. From the problem statement, we are to assume the cards are replaced and reshuffled between trials, so that every trial has a 70% chance of blue.
In every single case, it is more likely that the next card is blue. Even in the game where you are shot if you get one wrong, you should still pick blue every time. The reason is that of all the possible combinations of cards chosen for the whole game, the combination that consists of all blue cards is the most likely one. It is more likely than any particular combination that includes a red card, because at every step a blue card is more likely than a red one. Picking a red card doesn't give you credit for anywhere a red card might pop up; you have to pick it in the right spot if you want to live. And your chances of doing that in any particular spot are less than the chances of picking the blue card correctly.
There are games where you adopt a strategy with greater variance in order to maximize the possibility of an unlikely win, rather than go for the highest expected value (within the game), because the best expected outcome is a loss. Classic example would be the hail mary pass in football. Expected outcome is worse (in yards) than just running a normal play, or teams would do it all the time. But if there are only 5 seconds on the clock and you need a touchdown, the normal play might win 1 in 1000 games, while the hail mary wins 1 in 50. But there is no difference in variance in choosing red or blue in the game described here, so that kind of strategy doesn't apply.
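The survival numbers bear this out (a sketch assuming independent draws with a 70% chance of blue and a ten-round game; the exact figures are illustrative, but the ordering is not):

```python
# Survival probability when every guess must be right. A blue guess
# is right with probability 0.7, a red guess with probability 0.3,
# so each guess switched to red multiplies your survival odds by 3/7.
p_blue, n = 0.7, 10

def p_survive(red_guesses):
    return p_blue ** (n - red_guesses) * (1 - p_blue) ** red_guesses

for k in range(4):
    print(k, round(p_survive(k), 5))
# 0 0.02825   <- all blue: the single most likely sequence
# 1 0.01211
# 2 0.00519
# 3 0.00222
```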
comment by A_Pickup_Artist · 2008-11-11T23:36:17.000Z · LW(p) · GW(p)
@Mike
I got the point of Eliezer's post, and I don't see why I'm wrong. Could you tell me more specifically than "for the reasons stated" why I'm wrong?
I didn't read your post carefully. I was wrong. Sorry.
comment by Mike_Plotz · 2008-11-12T01:13:57.000Z · LW(p) · GW(p)
@michael e sullivan
You are right, my mistake. I was assuming that running, say, 100 trials meant going all the way through a 100-card deck without shuffling. Going back over the description of the problem, I don't see where it explicitly says that the cards are replaced and reshuffled, but that's probably a more meaningful experiment to run, and I'm sure that's how they did it.
At least I'm not crazy (nor, hopefully, stupid, if only 30%). :)
@A Pickup Artist
No worries, I made a bad assumption.
comment by Abigail · 2008-11-13T14:38:03.000Z · LW(p) · GW(p)
I was wondering whether to make the pedantic point that sometimes people do fight fire with fire, by seeking to stop a forest fire by burning a patch in the fire's path, so that the fire cannot leap over that patch.
I think too much pedantry can paralyse thought, but if our aim is rationality we should avoid untruths.
comment by Paul_Ogryzek · 2008-11-14T05:06:10.000Z · LW(p) · GW(p)
Just to clarify the utility-of-randomness issue: I think what some respondents are talking about is the benefit of unpredictability, which is instrumental when playing a game against a live opponent. This is totally different from randomizing. I also don't think that saying that ants "randomly" search for food is the most accurate way to describe their process. So randomness, in its strict interpretation, is never optimal game strategy. Another thought I had is that there are some circumstances in which it would make sense to change one's prediction to red. If you had a good idea how many total cards were left, and knew that blue cards had significantly over-represented themselves so far (50 total cards, 30 already flipped, all blue), you could conclude that over half of the remaining cards would be red. Such a circumstance could lead to a higher than 70% success rate.
comment by l2718 · 2015-02-19T00:58:03.875Z · LW(p) · GW(p)
Random search can be an effective strategy due to bounded rationality. As pointed out elsewhere in this thread, the expected utility of a mixed strategy is not greater than the maximum utility of the pure strategies in its support. But determining the utility of the pure strategies may not be possible. For example, an ant cannot carry the neural machinery necessary to remember everything it has seen, or to use such information to determine with accuracy the most likely direction to food - but it can carry sufficient neural machinery to perform a random walk.
comment by VioletX · 2009-02-08T06:58:09.000Z · LW(p) · GW(p)
I was assuming that running, say, 100 trials meant going all the way through a 100-card deck without shuffling.
I believe this should be the case. There's no need to reshuffle between trials; it would unnecessarily complicate things. I'd assume they reshuffled a deck of a hundred cards after every 100 trials.
Also, if you put the card back and reshuffle, you cannot guarantee a 70% success rate as described.
comment by JohnDavidBustard · 2010-08-25T16:40:56.996Z · LW(p) · GW(p)
An important point to make, but what of the optimal meta-strategy (strategy in forming strategies)?
I recognise the enormous advantage that a formal (reasoned) analysis of a problem provides; however, is this strategy statistically optimal (i.e., likely to lead to a win) in most environments?
For example, most challenges are time-limited, so extensive analysis is impractical. In addition, the problem (and the solution) may not lend itself to rational analysis, but may instead require internal mental statistical modelling (e.g., how to throw a rock in order to hit a target may be best answered by repeatedly trying).
In the example in the article, the assumption that the deck of cards is random may itself be unreasonable (and, when averaged over many challenges, may be a sub-optimal heuristic). The strategy employed by those playing may appear random but may in fact represent an (information-theoretically) optimal hypothesis of the likely next result given the previous inputs, exploiting a set of modelling heuristics that are themselves optimal selections given past experience and genetic history. This is likely to produce an output that has a matching distribution (because in a situation where a correct model could be produced, it would have this distribution).
The argument that some problems 'are not rational' may actually be an indication that the problem-solving strategy of reasoned analysis has not produced positive results in the speaker's experience, so they are accurately communicating their statistical meta-knowledge. For them to alter their strategy in an optimal way would require a statistically valid reason for doing so, i.e., awareness that such approaches had led to superior results in the past. Of course, they have no means of communicating this, because their experiences have not led them to develop the conscious models that would enable that kind of self-awareness.
comment by cousin_it · 2011-05-30T09:06:25.270Z · LW(p) · GW(p)
Coming back to this post, I don't understand what Eliezer means by "rationality" here. The game described isn't the log-score game, and the input sequence is described as uncomputable ("truly random"), so I guess Solomonoff induction will also fare asymptotically worse than a human who always bets on blue. Does anyone have an idealized model of a rational agent that can "bring itself to believe that the situation is one in which it cannot predict"?
comment by papetoast · 2022-08-24T13:50:18.259Z · LW(p) · GW(p)
No, for either of my interpretations of your question.
If you mean "does a test for randomness exist?", I believe there isn't one, but there are statistical tests that can catch non-random sequences.
If you mean "can a rational agent 100% believe that something is random?", then no, because 100% certainty is impossible for anything.
comment by CriticalSteel · 2011-11-19T03:21:39.207Z · LW(p) · GW(p)
In summary.
This article seems to re-affirm: you develop a theory and test it by making further observations, following the scientific method (which you should all have memorised).
However, one criticism I have is of the statistics reported at the beginning. Surely the challenge is to develop an optimal theory that predicts the right card most often. Surely this objective is the same no matter who is being tested, or how many people are being tested. The question would then become: what theory did you use to get your high score? And most answers would be: card counting.
comment by Chalybs_Levitas · 2011-11-19T07:55:39.428Z · LW(p) · GW(p)
"There are lawful forms of thought that still generate the best response, even when faced with an opponent who breaks those laws"
I've only just come to the Bayesian way of thought, so please direct me to the correct answer if I'm not thinking about this right:
If I and my opponent are of equal training, rationality, ability, and intellect, except that my opponent has a 10% chance of doing something completely at odds with rationality as we both understand it, due to some mental damage: how should I plan to face him?
If I have plan A to deal with his plan A, plan B to deal with his plan B, and so on (as close as I am capable of discerning them), is there a rational way to deal with this unpredictable element, and how do I determine how much of my resources to spend on this plan?
That is: how do I plan in the face of the unpredictable, especially in cases where I do not have the resources to cover every eventuality?
comment by Jakinbandw · 2012-05-23T22:28:09.993Z · LW(p) · GW(p)
[The following is just me being slightly insane about probability and has no bearing on the point of the article]
I have to point out some flaws with the probability being used here. For the most part, betting blue all the time works. However, cards don't work quite like that: each draw reduces the total number of the card that was drawn. For instance, if you have 10 cards, 7 blue and 3 red, and after the first 7 draws there have been 6 blue cards drawn but only one red card, then the probability now favors drawing a red card. In fact, if you now switch to calling red for every card, you can achieve an 80% success rate overall, because only 1 blue card is left but two red cards. Just because you have come up with a strategy for success does not mean that you should stop thinking and reassessing the situation as more information becomes known.
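The arithmetic of that example, spelled out (a sketch assuming the ten cards are drawn without replacement, as the comment describes):

```python
# 10 cards: 7 blue, 3 red. After 7 draws we have seen 6 blue, 1 red.
blue_left = 7 - 6   # 1 blue remaining
red_left = 3 - 1    # 2 red remaining

p_next_blue = blue_left / (blue_left + red_left)
print(p_next_blue)  # 0.333... -> red is now the better bet

# Always-blue scores 7/10 overall; switching to red for the last three
# cards scores 6 (earlier blue hits) + 2 (remaining reds) = 8/10.
```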
comment by AliceKingsley · 2012-08-15T01:10:07.482Z · LW(p) · GW(p)
Thanks for this post. I have always thought this way about bets (I always call 'tails' in a coin flip, for example), and I had a lot of trouble trying to explain to my friends why if I was going to play the lottery, I'd have a set of numbers I'd play every time. I appreciate seeing this spelled out so clearly.
comment by christopherj · 2014-04-08T16:28:10.600Z · LW(p) · GW(p)
If you wanted to play the lottery, the best strategy is to play the "least lucky" and "least 'random'" numbers, i.e., pick the numbers that won't be picked by a bunch of superstitious people. That decreases your odds of having to split the winnings with another winner.
comment by Muhd · 2013-08-01T22:17:08.960Z · LW(p) · GW(p)
I think the behavior we are seeing here may be more a case of loss aversion than anything else.
Assuming that red cards must come at some point (true if we are flipping over a limited set of cards with a blue-red ratio of 7 to 3; I'm not sure if that is the setup), the subjects adopt a strategy that gives them the highest likelihood of avoiding failure completely. Predicting blue every time requires accepting a certain degree of failure right from the outset and is thus unpalatable to the human mind, which is loss-averse.
Even if the experiment is designed so that red cards are not guaranteed to come at some point (if, for example, you shuffle after every flip), the subjects may fall prey to the gambler's fallacy, which, combined with their loss aversion, leads them to adopt the 70-30 strategy.
comment by xSciFix · 2019-04-16T16:55:33.327Z · LW(p) · GW(p)
A lot of comments are saying various forms of "well, for some situations it *is* best to be random." Fine, maybe so; but the decision to 'act randomly' is arrived at after a careful analysis of the situation. It is the most rational thing to do *in that instance.* That doesn't mean that decision theory is thus refuted at all. Reaching the conclusion that you're playing a minimax stochastic game in which the best way forward is to play at random is not at all the same as "might as well just be random all the time in the face of something that seems irrational."
Acting randomly *all the time* because hey the world is random is in fact useless. Yes, sometimes you'll hit the right answer (30% of the cards were red after all) but if you're not engaging in 'random' behavior as part of a larger strategy or goal then you're just throwing everything at the wall and seeing what sticks (granted sometimes brute-forcing an answer is also the best way forward).
Arguing about 'well in *this one instance* it is best to be random' is entirely beside the point. The point is how do you reach that conclusion and by what thought processes?
'If faced with irrationality, throwing your own reason away won't help you' is exactly correct. Conversely, when faced with rationality then acting irrationally won't help you either. Unlike the popular media trope, in real life you're not really going to baffle and thus defeat the computer opponent by just playing at random. You're not really going to beat a chess master in the park by just playing randomly in order to confuse them.
comment by heresieding · 2022-03-30T23:33:16.367Z · LW(p) · GW(p)
I think the experiment's conclusion, that subjects sought to model the cards instead of maximising wins, is only valid if they had the probabilities (or could easily verify them) at the start, and (as many have noted) saw the deck reshuffled after each trial. (Without the probabilities, it sounds like their 'mistake' would be not noticing a majority color, or not optimising when they did. I think I read the experiment as intended, but readers might find doing so easier if given these conditions.)