Rationality is Systematized Winning
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-03T14:41:25.255Z · LW · GW · Legacy · 267 comments
Followup to: Newcomb's Problem and Regret of Rationality
"Rationalists should win," I said, and I may have to stop saying it, for it seems to convey something other than what I meant by it.
Where did the phrase come from originally? From considering such cases as Newcomb's Problem: The superbeing Omega sets forth before you two boxes, a transparent box A containing $1000 (or the equivalent in material wealth), and an opaque box B that contains either $1,000,000 or nothing. Omega tells you that It has already put $1M in box B if and only if It predicts that you will take only box B, leaving box A behind. Omega has played this game many times before, and has been right 99 times out of 100. Do you take both boxes, or only box B?
A common position - in fact, the mainstream/dominant position in modern philosophy and decision theory - is that the only reasonable course is to take both boxes; Omega has already made Its decision and gone, and so your action cannot affect the contents of the box in any way (they argue). Now, it so happens that certain types of unreasonable individuals are rewarded by Omega - who moves even before they make their decisions - but this in no way changes the conclusion that the only reasonable course is to take both boxes, since taking both boxes makes you $1000 richer regardless of the unchanging and unchangeable contents of box B.
And this is the sort of thinking that I intended to reject by saying, "Rationalists should win!"
Said Miyamoto Musashi: "The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy's cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him."
Said I: "If you fail to achieve a correct answer, it is futile to protest that you acted with propriety."
This is the distinction I had hoped to convey by saying, "Rationalists should win!"
There is a meme which says that a certain ritual of cognition is the paragon of reasonableness and so defines what the reasonable people do. But alas, the reasonable people often get their butts handed to them by the unreasonable ones, because the universe isn't always reasonable. Reason is just a way of doing things, not necessarily the most formidable; it is how professors talk to each other in debate halls, which sometimes works, and sometimes doesn't. If a horde of barbarians attacks the debate hall, the truly prudent and flexible agent will abandon reasonableness.
No. If the "irrational" agent is outcompeting you on a systematic and predictable basis, then it is time to reconsider what you think is "rational".
For I do fear that a "rationalist" will clutch to themselves the ritual of cognition they have been taught, as loss after loss piles up, consoling themselves: "I have behaved virtuously, I have been so reasonable, it's just this awful unfair universe that doesn't give me what I deserve. The others are cheating by not doing it the rational way, that's how they got ahead of me."
It is this that I intended to guard against by saying: "Rationalists should win!" Not whine, win. If you keep on losing, perhaps you are doing something wrong. Do not console yourself about how you were so wonderfully rational in the course of losing. That is not how things are supposed to go. It is not the Art that fails, but you who fails to grasp the Art.
Likewise in the realm of epistemic rationality: if you find yourself thinking that the reasonable belief is X (because a majority of modern humans seem to believe X, or something that sounds similarly appealing) and yet the world itself is obviously Y, then it is time to reconsider what you think of as "reasonable".
But people do seem to be taking this in some other sense than I meant it - as though any person who declared themselves a rationalist would in that moment be invested with an invincible spirit that enabled them to obtain all things without effort and without overcoming disadvantages, or something, I don't know.
Maybe there is an alternative phrase to be found again in Musashi, who said: "The Way of the Ichi school is the spirit of winning, whatever the weapon and whatever its size."
"Rationality is the spirit of winning"? "Rationality is the Way of winning"? "Rationality is systematized winning"? If you have a better suggestion, post it in the comments.
267 comments
Comments sorted by top scores.
comment by Vladimir_Nesov · 2009-04-03T16:26:19.469Z · LW(p) · GW(p)
Rationality is about winning.
The about captures the expected systematic winning part, as you are considering the model of winning, not necessarily the accidental winning itself. It limits the scope to the winning only, leaving only secondary roles for parry, hit, spring, strike or touch. Being a study about the real thing, rationality employs a set of tricks that allow one to work on it, in special cases and at coarse levels of detail. Being about the real thing, rationality aims to give the means for actually winning.
comment by timtyler · 2009-04-03T17:18:56.569Z · LW(p) · GW(p)
Wikipedia has this right:
"a rational agent is specifically defined as an agent which always chooses the action which maximises its expected performance, given all of the knowledge it currently possesses."
Expected performance. Not actual performance. Whether its actual performance is good or not depends on other factors - such as how malicious the environment is, whether the agent's priors are good - and so on.
Replies from: Eliezer_Yudkowsky, timtyler↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-03T17:55:44.206Z · LW(p) · GW(p)
Problem with that in human practice is that it leads to people defending their ruined plans, saying, "But my expected performance was great!" Vide the failed trading companies saying it wasn't their fault, the market had just done something that it shouldn't have done once in the lifetime of the universe. Achieving a win is much harder than achieving an expectation of winning (i.e. something that it seems you could defend as a good try).
Replies from: timtyler, HughRistik, anonym, Jonathan_Graehl, Technologos↑ comment by timtyler · 2009-04-03T18:48:19.557Z · LW(p) · GW(p)
Expected performance is what rational agents are actually maximising.
Whether that corresponds to actual performance depends on what their expectations are. What their expectations are typically depends on their history - and the past is not necessarily a good guide to the future.
Highly rational agents can still lose. Rational actions (that follow the laws of induction and deduction applied to their sense data) are not necessarily the actions that win.
Rational agents try to win - and base their efforts on their expectations. Whether they actually win depends on whether their expectations are correct. In my view, attempts to link rationality directly to "winning" miss the distinction between actual and expected utility.
There are reasons for associations between expected performance and actual performance. Indeed, those associations are why agents have the expectations they do. However, the association is statistical in nature.
Dissect the brain of a rational agent, and it is its expected utility that is being maximised. Its actual utility is usually not something that is completely under its control.
It's important not to define the "rational action" as "the action that wins". Whether an action is rational or not should be a function of an agent's sense data up to that point - and should not vary depending on environmental factors which the agent knows nothing about. Otherwise, the rationality of an action is not properly defined from an agent's point of view.
I don't think that the excuses humans use for failures are an issue here.
Behaving rationally is not the only virtue needed for success. For example, you also need to enter situations with appropriate priors.
Only if you want rationality to be the sole virtue, should "but I was behaving rationally" be the ultimate defense against an inquisition.
Rationality is good, but to win, you also need effort, persistence, good priors, etc - and it would be very, very bad form to attempt to bundle all those into the notion of being "rational".
Replies from: Eliezer_Yudkowsky, MichaelBishop↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-04T11:32:32.639Z · LW(p) · GW(p)
Expected performance is what rational agents are actually maximising.
Does that mean that I should mechanically overwrite my beliefs about the chance of a lottery ticket winning, in order to maximize my expectation of the payout? As Nesov says, rationality is about utility; which is why a rational agent in fact maximizes their expectation of utility, while trying to maximize utility (not their expectation of utility!).
It may help to understand this and some of the conversations below if you realize that the word "try" behaves a lot like "quotation marks" and that having an extra "pair" of quotation "marks" can really make "your" sentences seem a bit odd.
Replies from: bentarm, timtyler, timtyler↑ comment by bentarm · 2009-04-04T23:41:17.337Z · LW(p) · GW(p)
I'm not sure I get this at all.
I offer you a bet: I'll toss a coin, and give you £100 if it comes up heads; you give me £50 if it comes up tails. Presumably you take the bet, right? Because your expected return is £25 - surely this is the sense in which rationalists maximise expected utility. We don't mean "the amount of utility they expect to win", but expectation in the technical sense - i.e., the sum, over the various events, of the likelihood of each event multiplied by your utility in the universes in which that event happens (or probably more properly an integral...)
If you expect to lose £50 and you are wrong, that doesn't actually say anything about the expectation of your winnings.
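(A minimal sketch of the expectation bentarm describes, with the stakes from his example; the code and variable names are an editor's illustration, not part of the original comment.)

```python
# Expectation in the technical sense: a probability-weighted sum of outcomes,
# not "the amount you expect to win". Stakes are in pounds, as in the comment.
outcomes = [
    (0.5, 100),   # heads: you are given 100
    (0.5, -50),   # tails: you hand over 50
]

expected_return = sum(p * value for p, value in outcomes)
print(expected_return)  # 0.5 * 100 + 0.5 * (-50) = 25.0
```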
Replies from: Nebu↑ comment by Nebu · 2009-04-14T15:57:56.699Z · LW(p) · GW(p)
If you expect to lose £50 and you are wrong, that doesn't actually say anything about the expectation of your winnings.
It does, however, say something about your expectation of your winnings. Expectation can be very knowledge-dependent. Let's say someone rolls two six-sided dice, and then offers you a bet where you win $100 if the sum of the dice is less than 5, but lose $10 if the sum is greater than 5. You might perform various calculations to determine your expected value of accepting the bet. But if I happen to peek and see that one of the dice has landed on 6, then I will calculate a different expected value than you will.
So we have different expected values for calculating the bet, because we have different information.
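(A short sketch of the knowledge-dependence point above; the handling of a sum of exactly 5, which the comment leaves unspecified, is an assumption made here purely for illustration.)

```python
from itertools import product

def payoff(total):
    # Payoffs from the comment; a sum of exactly 5 is assumed to pay nothing.
    if total < 5:
        return 100
    if total > 5:
        return -10
    return 0

rolls = list(product(range(1, 7), repeat=2))  # all 36 equally likely rolls

# Expectation with no extra information.
ev_blind = sum(payoff(a + b) for a, b in rolls) / len(rolls)

# Expectation after peeking and seeing that the first die shows a 6.
peeked = [(a, b) for a, b in rolls if a == 6]
ev_peeked = sum(payoff(a + b) for a, b in peeked) / len(peeked)

print(ev_blind, ev_peeked)  # about 9.44 versus exactly -10.0
```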
So EY's point is that if a rational agent's only purpose was to maximize (their) expected utility, they could easily do this by selectively ignoring information, so that their calculations turn out a specific way.
But actually rational agents are not interested in maximizing (their) expected utility. They are interested in maximizing real utility. Except it's impossible to do this without perfect information, and so what agents end up doing is maximizing expected utility, although they are trying to maximize real utility.
It's like if I'm taking a history exam in school. I am trying to achieve 100% on the exam, but end up instead achieving only 60% because I have imperfect information. My goal wasn't 60%, it was 100%. But the actual actions I took (the answers I selected) led me to arrive at 60% instead of my true goal.
Rational agents are trying to maximize real utility, but end up maximizing expected utility (by definition), even though that's not their true goal.
↑ comment by timtyler · 2009-04-04T12:23:24.258Z · LW(p) · GW(p)
Re: Does that mean that I should mechanically overwrite my beliefs about the chance of a lottery ticket winning, in order to maximize my expectation of the payout?
No, it doesn't. It means that the process going on in the brains of intelligent agents can be well modelled as calculating expected utilities - and then selecting the action that corresponds to the largest one.
Intelligent agents are better modelled as Expected Utility Maximisers than Utility Maximisers. Whether they actually maximise utility depends on whether they are in an environment where their expectations pan out.
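(A toy sketch of the model described above - an agent that computes expected utilities and picks the largest. The scenario, probabilities and utilities are invented for illustration and are not timtyler's.)

```python
beliefs = {"rain": 0.3, "sun": 0.7}   # the agent's probabilities over states
utility = {                           # utility of each (action, state) pair
    ("take umbrella", "rain"): 5,   ("take umbrella", "sun"): 2,
    ("leave umbrella", "rain"): -10, ("leave umbrella", "sun"): 4,
}

def expected_utility(action):
    return sum(p * utility[(action, state)] for state, p in beliefs.items())

best = max(("take umbrella", "leave umbrella"), key=expected_utility)
print(best, expected_utility(best))
# The agent maximises *expected* utility; whether that maximises actual utility
# depends on which state actually obtains.
```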
Replies from: loqi↑ comment by loqi · 2009-04-05T00:11:25.810Z · LW(p) · GW(p)
Intelligent agents are better modelled as Expected Utility Maximisers than Utility Maximisers.
By definition, intelligent agents want to maximize total utility. In the absence of perfect knowledge, they act on expected utility calculations. Is this not a meaningful distinction?
↑ comment by Mike Bishop (MichaelBishop) · 2009-04-03T19:28:57.931Z · LW(p) · GW(p)
I am inclined to argue along exactly the same lines as Tim, though I worry there is something I am missing.
↑ comment by HughRistik · 2009-04-03T19:46:08.169Z · LW(p) · GW(p)
Problem with that in human practice is that it leads to people defending their ruined plans, saying, "But my expected performance was great!"
It's true that people make this kind of response, but that doesn't make it valid, or mean that we have to throw away the notion of rationality as maximizing expected performance, rather than actual performance.
In the case of failed trading companies, can't we just say that despite their fantasies, their expected performance shouldn't have been so great as they thought? And the fact that their actual results differed from their expected results should cast suspicion on their expectations.
Perhaps we can say that expectations about performance must be epistemically rational, and only then can an agent who maximizes their expected performance be instrumentally rational.
Achieving a win is much harder than achieving an expectation of winning (i.e. something that it seems you could defend as a good try).
Some expectations win. Some expectations lose. Yet not all expectations are created equal. Non-accidental winning starts with something that seems good to try (can accidental winning be rational?). At least, there is some link between expectations and rationality, such that we can call some expectations more rational than others, regardless of whether they actually win or lose.
An example SoullessAutomaton made was that we shouldn't consider lottery winners rational, even though they won, because they should not have expected to. Conversely, all sorts of inductive expectations can be rational, even though sometimes they will fail due to the problem of induction. For instance, it's rational to expect that the sun will rise tomorrow. If Omega decides to blow up the sun, my expectation will still have been rational, even though I turned out to be wrong.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-04-03T21:35:06.425Z · LW(p) · GW(p)
Yet not all expectations are created equal. Non-accidental winning starts with something that seems good to try (can accidental winning be rational?).
In the real world, of course, most things are some mixture of controllable and randomized. Depending on your definition of accidental, it can be rational to make low-cost steps to position yourself to take advantage of possible events you have no control over. I wouldn't call this accidental, however, because the average expected gain should be net positive, even if one expects (id est, with confidence greater than 50%) to lose.
I used the lottery as an example because it's generally a clear-cut case where the expected gain minus the cost of participating is net negative and the controllable factor (how many tickets you buy) has extremely small impact.
Replies from: HughRistik↑ comment by HughRistik · 2009-04-03T21:58:51.251Z · LW(p) · GW(p)
Yes, and I liked your example for exactly this reason: the expected value of buying lottery tickets is negative.
I think that this shows that it is irrational to take an action where it's clear-cut that the expected value is negative, even though due to chance, one iteration of that action might produce a positive result. You are using accidental the same way I am: winning from an action with a negative expected value is what I would call accidental, and winning with a positive expected value is non-accidental.
Things are a bit more complicated when we don't know the expected value of an action. For example, in Eliezer's examples of failed trading companies, we don't know the correct expected value of their trading strategies, or whether they were positive or negative.
In cases where the expected value of an action is unknown, perhaps the instrumental rationality of the action is contingent on the epistemic rationality of our estimation of its expected value.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-04-03T22:24:16.516Z · LW(p) · GW(p)
I like your definition of an accidental win, it matches my intuitive definition and is stated more clearly than I would have been able to.
In cases where the expected value of an action is unknown, perhaps the instrumental rationality of the action is contingent on the epistemic rationality of our estimation of its expected value.
Yes. Actually, I think the "In cases where the expected value of an action is unknown" clause is likely unnecessary, because the accuracy of an expected value calculation is always at least slightly uncertain.
Furthermore, the second-order calculation of the expected value of expending resources to increase epistemological rationality should be possible; and in the case that acting on a proposition is irrational due to low certainty, and the second-order value of increasing certainty is negative, the rational thing to do is shrug and move on.
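(A toy second-order calculation of the kind described above - whether paying to increase certainty is worth it. All numbers are invented for illustration.)

```python
p_good = 0.3            # prior probability the venture pays off
payoff_good, payoff_bad = 100, -60
cost_of_research = 15   # price of resolving the uncertainty before deciding

# Acting on the prior: take the venture only if its expected value is positive.
ev_act_now = max(0, p_good * payoff_good + (1 - p_good) * payoff_bad)

# Researching first: you then take the venture only in the good state.
ev_research = p_good * payoff_good - cost_of_research

print(ev_act_now, ev_research)  # 0 versus 15.0: here the research is worth it.
# If cost_of_research were 40, researching would be worth -10, and the rational
# thing to do would indeed be to shrug and move on.
```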
↑ comment by anonym · 2009-04-04T01:55:43.741Z · LW(p) · GW(p)
It sounds like the objection you're giving here is that "some people will misinterpret expected performance in the technical sense as expected performance in the colloquial sense (i.e., my guess as to how things will turn out)." That doesn't seem like much of a criticism though, and it doesn't sound severe enough to throw out what is a pretty standard definition. People will also misinterpret your alternate definition, as we have seen.
Do you have other objections?
↑ comment by Jonathan_Graehl · 2009-04-03T21:34:26.123Z · LW(p) · GW(p)
What you say is important: the vast majority of whining "rationalists" weren't done dirty by a universe that "nobody could have foreseen" (the sub-prime mortgage crisis/piloting jets into buildings). If you sample a random loser claiming such (my reasoning was flawless, my priors incorporated all feasibly available human knowledge), an impartial judge would in nearly all cases correctly call them to task.
But clearly it's not always the case that my reasoning (and/or priors) is at fault when I lose. My updates shouldn't overshoot based on empirical noise and false humility. I think what you want to say is that most likely even (especially?) the most proud rationalists probably shield themselves from attributing their loss to their own error ("eat less salt").
I'd like some quantifiable demonstration of an externalizing bias, some calibration of my own personal tendency to deny evidence of my own irrationality (or of my wrong priors).
↑ comment by Technologos · 2009-04-03T19:15:34.213Z · LW(p) · GW(p)
I'm not sure how you can implement an admonition to Win and not just to (truly, sincerely) try. What is the empirical difference?
I suppose you could use an expected regret measure (that is, the difference between the ideal result and the result of the decision summed across the distribution of probable futures) instead of an expected utility measure.
Expected regret tends to produce more robust strategies than expected utility. For instance, in Newcomb's problem, we could say that two-boxing comes from expected utility but one-boxing comes from regret-minimizing (since a "failed" two-box gives $1,000,000-$1,000=$999,000 of regret, if you believe Omega would have acted differently if you had been the type of person to one-box, whereas a "failed" one-box gives $1,000-$0=$1,000 of regret).
Using more robust strategies may be a way to more consistently Win, though perhaps the true goal should be to know when to use expected utility and when to use expected regret (and therefore to take advantage both of potential bonanzas and of risk-limiting mechanisms).
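(A sketch of the two criteria being contrasted, using the dollar figures from the comment. The regret calculation follows the comment's assumption that Omega would have acted differently had you been the other kind of agent; a causal decision theorist would reject that modelling step.)

```python
payoff = {
    ("one-box", "box B full"): 1_000_000, ("one-box", "box B empty"): 0,
    ("two-box", "box B full"): 1_001_000, ("two-box", "box B empty"): 1_000,
}

# Expected utility as the two-boxing argument computes it: the contents of box B
# are already fixed, so for any probability p of "full", two-boxing gains $1,000.
def expected_utility(action, p_full):
    return (p_full * payoff[(action, "box B full")]
            + (1 - p_full) * payoff[(action, "box B empty")])

# Regret as the comment computes it: compare each "failed" outcome with what the
# other disposition would have walked away with.
regret = {
    "two-box": payoff[("one-box", "box B full")] - payoff[("two-box", "box B empty")],   # 999,000
    "one-box": payoff[("two-box", "box B empty")] - payoff[("one-box", "box B empty")],  # 1,000
}

for p in (0.01, 0.5, 0.99):
    print(expected_utility("two-box", p) - expected_utility("one-box", p))  # always 1000.0
print(min(regret, key=regret.get))  # "one-box": the regret-minimising choice
```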
Replies from: robzahra, grobstein, timtyler, thomblake↑ comment by robzahra · 2009-04-03T22:55:43.699Z · LW(p) · GW(p)
I'm quite confident there is only a language difference between Eliezer's description and the point a number of you have just made. Winning versus trying to win are clearly two different things, and it's also clear that "genuinely trying to win" is the best one can do, based on the definition those in this thread are using. But Eli's point on OB was that telling oneself "I'm genuinely trying to win" often results in less than genuinely trying. It results in "trying to try"...which means being satisfied by a display of effort rather than utility maximizing. So instead, he argues, why not say to oneself the imperative "Win!", where he bakes the "try" part into the implicit imperative. I agree Eli's language usage here may be slightly non-standard for most of us (me included) and therefore perhaps misleading to the uninitiated, but I'm doubtful we need to stress about it too much if the facts are as I've stated. Does anyone disagree? Perhaps one could argue Eli should have to say, "Rational agents should win_eli" and link to an explanation like this thread, if we are genuinely concerned about people getting confused.
Replies from: timtyler↑ comment by timtyler · 2009-04-04T05:06:29.523Z · LW(p) · GW(p)
Eliezer seems to be talking about actually winning - e.g.: "Achieving a win is much harder than achieving an expectation of winning".
He's been doing this pretty consistently for a while now - including on his administrator's page on the topic:
"Instrumental rationality: achieving your values."
That is why this discussion is still happening.
↑ comment by grobstein · 2009-04-03T20:45:48.644Z · LW(p) · GW(p)
Here's a functional difference: Omega says that Box B is empty if you try to win what's inside it.
Replies from: byrnema↑ comment by byrnema · 2009-04-04T01:58:56.476Z · LW(p) · GW(p)
Yes! This functional difference is very important!
In Logic, you begin with a set of non-contradicting assumptions and then build a consistent theory based on those assumptions. The deductions you make are analogous to being rational. If the assumptions are non-contradicting, then it is impossible to deduce something false in the system. (Analogously, it is impossible for rationality not to win.) However, you can get a paradox by having a self-referential statement. You can prove that every sufficiently complex theory is not closed -- there are things that are true that you can't prove from within the system. Along the same lines, you can build a paradox by forcing the system to try to talk about itself.
What Grobstein has presented is a classic paradox and is the closest you can come to rationality not winning.
Replies from: Technologos↑ comment by Technologos · 2009-04-28T19:46:05.521Z · LW(p) · GW(p)
I understand all that, but I still think it's impossible to operationalize an admonition to Win. If
Omega says that Box B is empty if you try to win what's inside it.
then you simply cannot implement a strategy that will give you the proceeds of Box B (unless you're using some definition of "try" that is inconsistent with "choose a strategy that has a particular expected result").
I think that falls under the "ritual of cognition" exception that Eliezer discussed for a while: when Winning depends directly on the ritual of cognition, then of course we can define a situation in which rationality doesn't Win. But that is perfectly meaningless in every other situation (which is to say, in the world), where the result of the ritual is what matters.
↑ comment by timtyler · 2009-04-03T20:10:50.172Z · LW(p) · GW(p)
Agents do try to win. They don't necessarily actually win - for example, if they face a superior opponent. Kasparov was behaving in a highly rational manner in his battle with Deep Blue. He didn't win. He did try to, though. Thus the distinction between trying to win and actually winning.
↑ comment by thomblake · 2009-04-03T20:00:05.012Z · LW(p) · GW(p)
see http://www.overcomingbias.com/2008/10/trying-to-try.html
It's really easy to convince yourself that you've truly, sincerely tried - trying to try is not nearly as effective as trying to win.
Replies from: timtyler, Technologos, timtyler↑ comment by timtyler · 2009-04-03T20:06:21.285Z · LW(p) · GW(p)
The intended distinction was originally between trying to win and actually winning. You are comparing two kinds of trying.
Replies from: thomblake↑ comment by thomblake · 2009-04-03T20:24:28.515Z · LW(p) · GW(p)
You are comparing two kinds of trying.
I'm not sure how you can implement an admonition to Win and not just to (truly, sincerely) try. What is the empirical difference?
Based on the above, I believe the distinction was between two different kinds of admonitions. I was pointing out that an admonition to win will cause someone to try to win, and an admonition to try will cause someone to try to try.
Replies from: Eliezer_Yudkowsky, timtyler, timtyler↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-04T11:37:28.556Z · LW(p) · GW(p)
Thomblake's interpretation of my post matches my own.
↑ comment by timtyler · 2009-04-03T20:39:40.823Z · LW(p) · GW(p)
Right, but again, the topic is the definition of instrumental rationality, and whether it refers to "trying to win" or "actually winning".
What do "admonitions" have to do with things? Are you arguing that because telling someone to "win" may some positive effect that telling someone to "try to win" lacks that we should define "instrumental rationality" to mean "winning" - and not "trying to win"?
Isn't that an idiosyncracy of human psychology - which surely ought to have nothing to do with the definition of "instrumental rationality".
Consider the example of handicap chess. You start with no knight. You try to win. Actually you lose. Were you behaving rationally? I say: you may well have been. Rationality is more about the trying, than it is about the winning.
Replies from: thomblake↑ comment by thomblake · 2009-04-03T20:45:33.715Z · LW(p) · GW(p)
The question was about admonitions. I commented based on that. I didn't mean anything further about instrumental rationality.
Replies from: timtyler↑ comment by timtyler · 2009-04-03T21:08:16.298Z · LW(p) · GW(p)
OK. I don't think we have a disagreement, then.
I consider it to be a probably-true fact about human psychology that if you tell someone to "try" rather than telling them to "win", then that introduces failure possibilities into their mind. That may have a positive effect, if they are naturally over-confident - or a negative one, if they are naturally wracked with self-doubt.
It's the latter group who buy self-help books: the former group doesn't think it needs them. So the self-help books tell you to "win" - and not to "try" ;-)
↑ comment by Technologos · 2009-04-28T20:03:02.288Z · LW(p) · GW(p)
I agree. I'm just noting that an admonition to Win is strictly an admonition to try, phrased more strongly. Winning is not an action--it is a result. All I can suggest are actions that get you to that result.
I can tell you "don't be satisfied with trying and failing," but that's not quite the same.
↑ comment by timtyler · 2009-04-03T20:23:17.335Z · LW(p) · GW(p)
As for the "Trying-to-try" page - an argument from Yoda and the Force? It reads like something out of a self-help manual!
Sure: if you are trying to inspire confidence in yourself in order to improve your performance, then you might under some circumstances want to think only of winning - and ignore the possibility of trying and failing. But let's not get our subjects in a muddle, here - the topic is the definition of instrumental rationality, not how some new-age self-help manual might be written.
↑ comment by timtyler · 2009-04-03T17:35:30.829Z · LW(p) · GW(p)
Of course, this isn't the first time I have pointed this out - see:
http://lesswrong.com/lw/33/comments_for_rationality/
Nobody seemed to have any coherent criticism the last time around - and yet now we have the same issue all over again.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-03T17:56:51.582Z · LW(p) · GW(p)
It would seem we don't appreciate your genius. Perhaps you should complain about this some more.
Replies from: timtyler↑ comment by timtyler · 2009-04-03T18:59:01.698Z · LW(p) · GW(p)
I'm not complaining, just observing. I see you are using the "royal we" again.
I wonder whether being surrounded by agents that agree with you is helping.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-03T19:08:31.189Z · LW(p) · GW(p)
I agree with you that people shouldn't drink fatal poison, and that 2+2=4. Should you feel worried because of that?
Replies from: thomblake↑ comment by thomblake · 2009-04-03T20:02:15.087Z · LW(p) · GW(p)
If it were also the case that your friends all agreed with you, but the "mainstream/dominant position in modern philosophy and decision theory" disagreed with you, then yes, you should probably feel a bit worried.
Replies from: Vladimir_Nesov, timtyler↑ comment by Vladimir_Nesov · 2009-04-03T21:15:16.014Z · LW(p) · GW(p)
Good point, my reply didn't take it into account. It all depends on the depth of understanding; to answer your remark, consider e.g. the supernatural, or UFOs.
↑ comment by timtyler · 2009-04-03T21:37:02.584Z · LW(p) · GW(p)
Is there really such a disagreement about Newcomb's problem?
The issue seems to be whether agents can convincingly signal to a powerful agent that they will act in some way in the future - i.e. whether it is possible to make credible promises to such a powerful agent.
I think that this is possible - at least in principle. Eliezer also seems to think this is possible. I personally am not sure that such a powerful agent could achieve the proposed success rate on unmodified humans - but in the context of artificial agents, I see few problems - especially if Omega can leave the artificial agent with the boxes in a chosen controlled environment, where Omega can be fairly confident that they will not be interfered with by interested third parties.
Do many in "modern philosophy and decision theory" really disagree with that?
More to the point, do they have a coherent counter-argument?
Replies from: cousin_it↑ comment by cousin_it · 2009-04-04T00:57:22.826Z · LW(p) · GW(p)
Thanks for mentioning artificial agents. If they can run arbitrary computations, Omega itself isn't implementable as a program due to the halting problem. Maybe this is relevant to Newcomb's problem in general, I can't tell.
Replies from: timtyler
comment by orthonormal · 2009-04-03T18:39:02.738Z · LW(p) · GW(p)
I always thought that the majority of exposition in your Newcomb example went towards, not "Rationalists should WIN", but a weaker claim which seems to be a smaller inferential distance from most would-be rationalists:
Rationalists should not systematically lose; whatever systematically loses is not rationality.
(Of course, one needs the logical caveat that we're not dealing with a pure irrationalist-rewarder; but such things don't seem to exist in this universe at the moment.)
Replies from: timtyler↑ comment by timtyler · 2009-04-04T08:10:19.181Z · LW(p) · GW(p)
Re: Rationalists should not systematically lose; whatever systematically loses is not rationality.
Even if you are playing go with a 9-stone handicap against a shodan?
Replies from: Nick_Tarleton, orthonormal↑ comment by Nick_Tarleton · 2009-04-04T08:14:07.186Z · LW(p) · GW(p)
"Lose" = "perform worse than another (usable) strategy, all preferences considered".
Replies from: timtyler↑ comment by orthonormal · 2009-04-05T04:43:18.172Z · LW(p) · GW(p)
Well, I don't think I'd fare better by thinking less rationally; and if I really needed to find a way to win, rationality at least shouldn't hurt me in the process.
I was hoping to be pithy by neglecting a few implicit assumptions. For one, I mean that (in the absence of direct rewards for different cognitive processes) good rationalists shouldn't systematically lose when they can see a strategy that systematically wins. Of course there are Kobayashi Maru scenarios where all the rationality in the world can't win, but that's not what we're talking about.
comment by abigailgem · 2009-04-03T15:32:33.389Z · LW(p) · GW(p)
Suggestion: "Rationalists seek to Win, not to be rational".
Suggestion: "If what you think is rational appears less likely to Win than what you think is irrational, then you need to reassess probabilities and your understanding of what is rational and what is irrational".
Suggestion: "It is not rational to do anything other than the thing which has the best chance of winning".
If I have a choice between what I define as the "Rational" course of action, and a course of action which I describe as "irrational" but which I predict has a better chance of winning, I am either predicting badly or wrongly defining what is Rational.
I am not sure my suggestions are Better, but I am groping towards understanding and hope my gropings help.
EDIT: and the warning is that we may deceive ourselves into thinking that we are being rational, when we are missing something, using the wrong map, arguing fallaciously. So what about:
Suggestion: "If you are not Winning, consider whether you are really being rational".
"If you are not Winning more than people you believe to be irrational, this may be evidence that you are not really being rational".
On a different tack, "Rationalists win wherever rationality is an aid to winning". I am not going to win millions on the Lottery, because I do not play it.
Replies from: Jonathan_Graehl, SoullessAutomaton, TheMatrixDNA↑ comment by Jonathan_Graehl · 2009-04-03T21:05:15.351Z · LW(p) · GW(p)
Choosing what gives the "best chance of winning" is good advice for a two-valued utility function, but I'm also interested in reducing the severity of my loss under uncertainty and misfortune.
I guess "maximizing expected utility" isn't as sexy as "winning".
Replies from: timtyler, Nick_Tarleton↑ comment by timtyler · 2009-04-04T08:05:34.373Z · LW(p) · GW(p)
Indeed. Forget about "winning". It is not sexy if it is wrong.
Replies from: Elit, JDM↑ comment by Elit · 2014-08-18T19:55:33.794Z · LW(p) · GW(p)
I don't think so. I take "winning" to be the actualization of one's values, which encompasses minimizing loss.
Furthermore, I think it actually helps to make the terms "sexy", because I am a heuristic human; my brain is wired for narratives and motivated by drama and "coolness." Framing ideas as something Grand and Awesome makes them matter to me emotionally, makes them a part of my identity, and makes me more likely to apply them.
Similarly, there are certain worthwhile causes for which I fight. They ARE worth fighting for, but I'm deluding myself if I act as if I'm so morally superior that I support them only because the problems are so pressing that I couldn't possibly not do anything, that I have a duty to fulfill. That may be true, but it is also true that I am disposed to be a fighter, and I am looking for a cause for which to fight. Knowing this, dramatizing the causes that actually do matter (as great battles for the fate of the human species) will motivate me to pursue them.
I have to be careful (as with anything), not to allow this sort of framing to distort my perception of the real, but I think as long as I know what I am doing, and I contain my self-manipulation to framing (and not denial of facts), I am served by it.
↑ comment by JDM · 2013-06-07T01:47:10.230Z · LW(p) · GW(p)
I think you're defining "winning" too strictly. Sometimes a minor loss is still a win, if the alternative was a large one.
Replies from: timtyler↑ comment by Nick_Tarleton · 2009-04-04T08:14:54.521Z · LW(p) · GW(p)
"Winning" refers to outcomes, not to actions, so it should just be "maximizing utility".
↑ comment by SoullessAutomaton · 2009-04-03T15:51:47.971Z · LW(p) · GW(p)
Suggestion: "If you are not Winning, consider whether you are really being rational".
The problem with this is that winning as a metric is swamped with random noise and different starting points.
Someone winning the lottery when you don't is not evidence that you are not being rational.
Someone whose parents were both high-paid lawyers making a fortune in business when you don't is not evidence that you are not being rational.
Replies from: abigailgem, thomblake↑ comment by abigailgem · 2009-04-03T16:02:36.492Z · LW(p) · GW(p)
Yes, but...
Of course there is random noise and there are different starting points, but there is also some evidence of whether one is really rational. It is a question of epistemic rationality which Wins should accrue to Rational people, and which wins (e.g., parentage, the lottery) do not.
↑ comment by thomblake · 2009-04-03T20:11:26.691Z · LW(p) · GW(p)
I disagree. Someone winning the lottery when you don't is evidence that you are not being rational, if getting a large sum of money for little effort is a goal you'd shoot for. But on evaluation, it should be seen as evidence that counts for little or nothing. Most of us have already done that evaluation.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-04-03T21:21:38.279Z · LW(p) · GW(p)
I don't follow.
I used the lottery as an example of very randomized wins. It's the "right place at the right time" factor. Some events in life are, for all practical purposes, randomized and out of an agent's direct control. By the central limit theorem, some agents will seem to accumulate large wins due in large part to these kinds of random events, and some will accumulate large losses.
Most agents will be, by definition, near the center of the normal distribution. The existence of agents at the tails of the curve does not constitute evidence of one's own irrationality.
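(A toy simulation of the point above: give many agents the same strategy and the same kind of purely random events, and a few will still end up far out in the tails. The setup is an editor's illustration, not SoullessAutomaton's.)

```python
import random

random.seed(0)
N_AGENTS, N_EVENTS = 10_000, 100

def lifetime_luck():
    # the sum of many small, identically distributed chance events
    return sum(random.choice((-1, 1)) for _ in range(N_EVENTS))

totals = sorted(lifetime_luck() for _ in range(N_AGENTS))
print("median agent:    ", totals[N_AGENTS // 2])
print("luckiest agent:  ", totals[-1])   # a large "win" from chance alone
print("unluckiest agent:", totals[0])    # a large "loss" from chance alone
```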
Replies from: thomblake↑ comment by thomblake · 2009-04-03T22:24:46.405Z · LW(p) · GW(p)
Right, but you could be wrong about it being randomized, or having negative expected value; not winning it can be taken as evidence that you're not being rational.
Suppose that everyone on your street other than you plays the lotto; you laugh at them for not being rational. Every week, someone on your street wins the lotto - by the end of the year, everyone else has become a millionaire. Doesn't it seem like you might have misunderstood something about the lottery?
Of course, it could be that you examine it further and find that the lottery is indeed random and you've just noticed a very improbable event. It was still evidence that was worth investigating.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-04-03T22:52:52.821Z · LW(p) · GW(p)
There's a big difference between "someone else wins the lottery" and "everyone else on your street wins the lottery". One is likely, the other absurdly unlikely.
Given your current knowledge of how the lottery works, the expected value is negative, ergo not playing the lottery is rational. Someone else winning the lottery (a result predicted by your current understanding) is itself not evidence that this decision is irrational.
However, if an extremely improbable event occurs, such as everyone on your street winning the lotto, this is strong evidence that your knowledge of the lottery is mistaken, and given the large potential payoff it then becomes rational to examine the matter further, and alter your current understanding if necessary. Your earlier actions may look irrational in hindsight, but that doesn't change that they were rational based on your knowledge at the time.
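(A rough Bayesian sketch of why the two observations carry such different weight; the prior and likelihoods are invented for illustration, with "rigged" standing in for any alternative model of the lottery.)

```python
prior_rigged = 1e-6  # you start out very confident the lottery works as advertised

def posterior_rigged(p_obs_if_fair, p_obs_if_rigged):
    joint_rigged = prior_rigged * p_obs_if_rigged
    joint_fair = (1 - prior_rigged) * p_obs_if_fair
    return joint_rigged / (joint_rigged + joint_fair)

# "Someone, somewhere won": likely under both models, so the posterior barely moves.
print(posterior_rigged(p_obs_if_fair=0.99, p_obs_if_rigged=0.99))    # ~1e-06
# "Everyone on my street won": astronomically unlikely if fair, so the model flips.
print(posterior_rigged(p_obs_if_fair=1e-40, p_obs_if_rigged=1e-3))   # ~1.0
```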
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-04T11:28:27.277Z · LW(p) · GW(p)
...presuming that your knowledge at the time was itself rationally obtained based on the evidence; and in the long run, we should not expect too many times to find ourselves believing with high confidence that the lottery has a tiny payout and then seeing everyone on the street winning the lottery. If this mistake recurs, it is a sign of epistemic irrationality.
I make this point because a lot of success in life consists in holding yourself to high standards; and a lot of that is hunting down the excuses and killing them.
Replies from: SoullessAutomaton, mormon1↑ comment by SoullessAutomaton · 2009-04-04T11:59:23.575Z · LW(p) · GW(p)
...presuming that your knowledge at the time was itself rationally obtained based on the evidence; and in the long run, we should not expect too many times to find ourselves believing with high confidence that the lottery has a tiny payout and then seeing everyone on the street winning the lottery. If this mistake recurs, it is a sign of epistemic irrationality.
Yes. I was generally assuming in my comment that the rhetorical "you" is an imperfect epistemic rationalist with a reasonably sensible set of priors.
The point I was trying to make is to not handwave away the difference between making the most optimal known choices at a moment in time vs. updating one's model of the world. It's possible (if silly) to be very irrational on one and largely rational on the other.
↑ comment by mormon1 · 2009-04-05T16:31:44.268Z · LW(p) · GW(p)
I make this point because a lot of success in life consists in holding yourself to high standards; and a lot of that is hunting down the excuses and killing them.
Not that you'd know anything about this, since your papers read like my 8th grade papers... Oh wait, you never actually went to school beyond that...
↑ comment by TheMatrixDNA · 2013-04-13T20:02:05.814Z · LW(p) · GW(p)
There is an issue never raised here, about the case where we believe the world is X but it is Y: are you sure that rationality is a pure product of brains? Are you sure that mind is a pure product of brains? What if mind is the product of a hidden superior natural system whose bits of information are invading our immediate world and being aggregated to our synapses? If so, rationality as a pure product of mind will make the most evolved rationalist a loser, for a while... or not... (sorry, I have no question mark on this keyboard)
Here, in the Amazon jungle, lie our real origins. And you see here that this biosphere is a product of chaos. We are products of chaos, not order. It seems to me that we are the flow of order lifting up from chaos. So, for long-term winning, those who best represent this flow will have bad times, because the forces of chaos are the strongest. The winners now are still representatives of chaos, less evolved...
But it seems to me that above the chaotic biosphere I see the Cosmos in an ordered state. So I suspect that this Cosmos is the "natural" super-system sending bits of information and modelling this terrestrial chaos into a future state of order. It is acting on the most recently evolved system here, which I think is the mind, not the brain. So, if one is driven to be a rationalist (in relation to the Cosmos and the ordered state), he or she will be a loser in relation to this biosphere in its chaotic state. The most intelligent thing to do would be to find a middle alternative, fighting this world while doing so with less sacrifice. What do you think...
Replies from: Estarlio, MugaSofer↑ comment by Estarlio · 2013-04-14T01:05:21.958Z · LW(p) · GW(p)
What do you think...
More ordered states could prove to be unsustainable whether or not there's some sort of overarching system such as you describe at play. Your assumptions seem to be quite complicated and thus get a low probability ahead of time, and there's no specifically supporting evidence (indeed it's not even clear what supporting evidence for some super-system sending down information would look like).
Basically the idea falls beneath the noise level for me in terms of credibility. Maybe ordered systems lose because the magical unicorns have a love of chaos in their hearts. I consider the two ideas about as seriously.
Replies from: TheMatrixDNA↑ comment by TheMatrixDNA · 2013-04-14T09:08:55.233Z · LW(p) · GW(p)
Thanks, Estarlio. I really need to fix my world vision and thoughts.
You said: " More ordered states could prove to be unsustainable whether or not there's some sort of overarching system such as you describe at play."
I think yes, more ordered state must be unsustainable, eternally. But, chaos also must be unsustainable. If so, there are these cycles, when chaos produces order and order produces chaos. The final results is evolution, because each cycle is a little bit more complex. There is hierarchy of systems. Overarching systems can be two types: 1) in relation to complexity and, 2) in relation to size, force. A lion is more strong than a human, but human is more complex. We have two systems modelling evolution at Earth. 1) the astronomical system (biggest size and less evolved), which is our ancestor, but we are inside it, he created us. This system is a perfect machine, but not intelligent, not rational like us. Whatever, he is the agent behind natural selection, because he is the whole environment. 2) The second system is untenable, but he must exists, because here there is mind, consciousnesses and our ancestral astronomic has no mind. I don't accept that this Universe creates things that he has no information for, so, the system that made the emergence of mind here must be superior to the Universe. And if he is ex-machine, makes no sense to talk about ordered or chaotic states. He must be more sustainable than the Universe. I am not talking about supernatural gods, I am suggesting a natural superior system from which this thing called consciousness is coming from..
You said: " there's no specifically supporting evidence"
It is probable because we have a real known parameter. An embryo gets " mind" because it comes from a superior hierarchic system that exists beyond his "Universe" (the womb). The superior system is the human species, his parents. So, it is possible that a natural super-system existing beyond our universe have transmitted before the Big Bang the informations for the mind appears here at the right time.
You said: " Maybe ordered systems lose because magical unicorns..."
In the alternation between cycles, there are the alternations between dominant and recessive. If chaos is dominant here and now, the ordered state is weak and a loser, till the chaos being extinct. And rationality is more relative to order than chaos. But rationality is not the wisdom. Must have a third superior state. What do you think ?
Replies from: CCC, Estarlio, MugaSofer↑ comment by CCC · 2013-04-14T09:35:39.080Z · LW(p) · GW(p)
I don't accept that this Universe creates things that he has no information for
It is possible to create something without having the information for it. The classic example; if enough monkeys type at random on enough typewriters for long enough, then sooner or later (probably much, much later) one of them will randomly type out the complete works of Shakespeare. Even if none of the monkeys have ever heard of Shakespeare.
Replies from: TheMatrixDNA↑ comment by TheMatrixDNA · 2013-04-14T20:31:40.813Z · LW(p) · GW(p)
I can't grasp yours example. Typewriters has the informations. Letters are graphic symbols of sounds that are signals of real things. My world vision started with comparative anatomy between all natural systems and the universal patterns founded here were projected for calculations about universes and first causes. As final result we got the same theory of Hideki Yukawa calculating the nuclear gluon, how protons and neutrons interacts. As result, this universe started with all informations for everything here, like any new origins of any human being started with prior information for creating the embryo and its womb (his entire universe). But these informations for universes are natural. Two groups of vortexes one spin right, other spin left. The interactions between then creates the intermediary movements. Each vortexes has at least seven properties which were the physical brutes forces(tendency to inertia, tendency to movement; tendency to grow, tendency to shorter; etc.). The different intensities of these forces and their interactions produces an infinity of individual types or vortex. Each vortex is one information, like genes. Th ere are genes that begins working later, so, there are universal informations in the air not applied yet. Like those building consciousness here. But, my results from these method is still theoretical. It makes sense and one day will be falsifiable
Replies from: CCC, MugaSofer↑ comment by CCC · 2013-04-15T08:48:34.305Z · LW(p) · GW(p)
I'm confused; I can't understand what you are saying. I think that part of this is the language barrier (what is your first language, by the way?) and part of it is probably an inferential distance issue (that is, what you're saying is far enough away from anything that I expect that I'm having trouble making the mental leap).
Typewriters has the informations.
So... would this mean that a typewriter contains the information for anything that can be typed on a typewriter? Including... say... the secret of immortality, plans for a time machine, and a way to detect the Higgs Boson? That seems a rather broad definition of 'information'.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-15T10:16:55.931Z · LW(p) · GW(p)
So... would this mean that a typewriter contains the information for anything that can be typed on a typewriter? Including... say... the secret of immortality, plans for a time machine, and a way to detect the Higgs Boson? That seems a rather broad definition of 'information'.
Well, in information-theoretic terms, the information for those comes from whoever looks over the monkeys' work and selects Shakespeare (or whatever).
Replies from: CCC↑ comment by MugaSofer · 2013-04-15T10:25:57.557Z · LW(p) · GW(p)
As final result we got the same theory of Hideki Yukawa calculating the nuclear gluon, how protons and neutrons interacts. As result, this universe started with all informations for everything here,
Heavily compressed, mind, but it's technically true that a superintelligence could deduce us. I'm pretty sure that doesn't imply it was deliberately designed, though; we could just be an emergent property of the universe, not its object.
↑ comment by Estarlio · 2013-04-14T20:06:12.994Z · LW(p) · GW(p)
I think yes, more ordered state must be unsustainable, eternally. But, chaos also must be unsustainable. If so [...]
You're putting the cart before the horse here. You've said that they must be - why must they be? If they are then what predictions does their being so let you make and how have you tested them?
What, for that matter, are your formal definitions of order and chaos? The way I'd define them, chaos exists mostly on a quantum level and when you start to generalise out correlates start showing up on a macroscopic level really quickly, and then it's not chaos anymore because it's - at least in principle - predictable.
I mean it's not silly to suppose that selection and mutation - with the former being the order-enforcing part of evolution and the latter being the 'chaotic' part - operate in cycles. I believe if you model evolution of finite populations using Fokker-Planck equations you tend to have an increasing spread of phenotypes between periods of heavy selection - but it's not really an area I've much interest in so I couldn't say for sure.
We have two systems modelling evolution at Earth. 1) the astronomical system (biggest size and less evolved), which is our ancestor, but we are inside it, he created us. This system is a perfect machine, but not intelligent, not rational like us. Whatever, he is the agent behind natural selection, because he is the whole environment. 2) The second system is untenable, but he must exists, because here there is mind, consciousnesses and our ancestral astronomic has no mind.
I don't know what this means. You're assigning an overarching system agency. But agency tends to mean that something is alive and thinking in English. Like a human would be said to have agency, whereas a computer - at least in the common "I've got one under my desk" sense - wouldn't. Systems don't tend to be considered to have gender in English either. In French lots of words are gendered but in English very few are. The only English things I can think of that are gendered other than living creatures are ships; traditionally thought of as female.
The second system just seems to be undefined.
I don't accept that this Universe creates things that he has no information for, so, the system that made the emergence of mind here must be superior to the Universe.
If you want to find a human how easy is that for you to do? Turn out of your front door and go to town and you'll probably find a fair number of them. If you want to find a specific human how much information do you need? I believe if you start off knowing nothing about them other than that they're somewhere on Earth you only really need something like 32 bits of information but in any case it's a lot more.
If you want to create a table you just make a table. It's not hard. If you want to create a specific table design you need to know what it looks like at the very least.
If you want to create a child you need a partner. If you want to create a brown haired, blue eyed girl and no other kids besides ... you're probably going to be off picking particular partners to up your chances or running off to play with genetic engineering.
Generally the rule is that the more picky you want to be the more info you need.
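(A quick check of the "something like 32 bits" figure above - an editor's illustration: the information needed to single out one person among N equally likely candidates is log2(N) bits.)

```python
import math

world_population = 6.8e9            # roughly the figure around 2009
print(math.log2(world_population))  # about 32.7 bits
```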
If you just wanted to create a person, and nothing else, you would require a lot of information. If you wanted to create an entire universe you would need very little information. The universe is very large, and seems to consist mostly of repetitions of fairly simple things, which suggests to me an informationally sparse genesis.
And if he is ex-machine, makes no sense to talk about ordered or chaotic states. He must be more sustainable than the Universe. I am not talking about supernatural gods, I am suggesting a natural superior system from which this thing called consciousness is coming from..
Do you need to suppose a system at all, if what you're talking about can be defined entirely in terms of a conflict between order and chaos - which really just seems to be evolution in progress? What explanatory power does this system have?
So, it is possible that a natural super-system existing beyond our universe have transmitted before the Big Bang the informations for the mind appears here at the right time.
Sure, anything's possible. But how probable is it and what grounds do you have for believing that it's that probable?
In the alternation between cycles, there are the alternations between dominant and recessive. If chaos is dominant here and now, the ordered state is weak and a loser, till the chaos being extinct. And rationality is more relative to order than chaos. But rationality is not the wisdom. Must have a third superior state. What do you think ?
Broadly you seem to be saying something to the effect of: in the absence of strong selection pressures, the trend is towards disorder and decay. Which I agree with. And I can see how that would relate to rationality - there are systems, like schooling, that lose their purpose and essentially go insane in the absence of strong demands. Why are schools so crappy? A large part of it seems to be that adults don't have an economic need for children at that age, and that it's politically expedient to conduct education in a certain way that seems to produce work - without actually testing whether that work is useful, because by that point the government will be out of power.
I suspect rationality carries connotations in your language that it doesn't necessarily have in English. If a chaotic/random/brute-force method of traversing the search space turns out to be better suited to certain situations, I'd assign a really high prior to people who define themselves as rationalists making their decisions in that regard by throwing dice, or some equivalent that introduces chaos into their actions. Like my passwords - what are my passwords? I don't know. Most of them are 128 characters of gibberish.
If you think of rationality as systematised winning, it seems more like "whatever works" than anything particularly tied to a specific selection/mutation ratio.
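A toy illustration of the password example above - a sketch assuming Python's standard secrets module and a 94-character printable alphabet:

```python
import math
import secrets
import string

alphabet = string.ascii_letters + string.digits + string.punctuation  # 94 characters
password = "".join(secrets.choice(alphabet) for _ in range(128))

# Entropy of a uniformly random 128-character string over this alphabet.
entropy_bits = 128 * math.log2(len(alphabet))
print(f"~{entropy_bits:.0f} bits of entropy")  # roughly 840 bits
```

Deliberately injecting randomness here is the "rational" move precisely because it is what works against a guessing adversary.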
↑ comment by MugaSofer · 2013-04-14T15:11:35.830Z · LW(p) · GW(p)
An embryo gets "mind" because it comes from a superior hierarchical system that exists beyond his "universe" (the womb). The superior system is the human species, his parents.
Well, an embryo develops a mind because it's got the genetic code for it - which, yes, comes from the larger external system that evolved that code. Is that what you meant?
So, it is possible that a natural super-system existing beyond our universe transmitted, before the Big Bang, the information for mind to appear here at the right time.
I must admit, I don't see how that follows. Are you suggesting our universe was designed specifically as a "womb" to create us? That's the only analogy I can see, and evolutionary advantage seems a simpler reason for sentience to evolve - although I guess those aren't mutually exclusive, if this "natural super-system existing beyond our universe" anticipated that would result in us. But why postulate this? It could as easily have designed the universe as a "womb" to produce muffins! We could as easily be part of this muffin-womb. (Man, there's a sentence I never expected to type.)
If chaos is dominant here and now, the ordered state is weak and a loser, until chaos goes extinct. And rationality is more related to order than to chaos.
But science again and again has discovered that what we thought was "chaos" is merely the complex result of simple rules - order, in other words, that we can exploit with rationality.
And rationality is more related to order than to chaos. But rationality is not wisdom. There must be a third, superior state. What do you think?
If rationality works in ordered states, what's the analog that works in "chaotic" states?
Replies from: TheMatrixDNA↑ comment by TheMatrixDNA · 2013-04-14T21:00:08.688Z · LW(p) · GW(p)
You said: "Well, an embryo develops a mind because it's got the genetic code for it - which, yes, comes from the larger external system that evolved that code. Is that what you meant?"
Our conflict is due to two different interpretations of the genetic code. You think that biological systems (aka life) evolved a genetic code, so you think that there was no genetic code before life. That is not what the results of my different method of investigation suggest. There is no "code" in the sense of something composed of symbols. Each horizontal base pair of nucleotides is a derivation, with some small differences, of an ancestor system, the original first galaxies (you need to see the model of this galaxy and how it fits as a nucleotide on my website). So, DNA is merely a pile of diversified copies of a unique ancestral astronomical system, which produces diversified and functional biological systems. But galaxies got their system configuration from atoms as systems, and those got it from particles as systems, so the prior causes of this "genetic makeup" seem to lie beyond the Big Bang. The information for building the mind of an embryo came from a system outside his womb; maybe the information for building minds in the whole universe came from a natural system outside the universe. Why not? (Sorry, I need to stop now but I will come back. Cheers...)
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-15T10:58:10.609Z · LW(p) · GW(p)
Wait, you think human genetic code has existed, unchanged, since the beginning of time? Yeah, I can see how that would lead to human exceptionalism and such. Pretty sure it's physically impossible, though. Or do you just mean it's the result of a causal chain leading back to the beginning of time?
↑ comment by MugaSofer · 2013-04-13T21:02:28.587Z · LW(p) · GW(p)
What if mind is a product of a hidden superior natural system whose bits of information are invading our immediate world and being aggregated to our synapses?
Well, our personalities, memories and so on can be affected by interfering with the brain, and it certainly looks like it's doing some sort of information processing (as far as we can tell), so ... seems unlikely, to be honest. Also, our minds do kind of look evolved to fit our biological niche.
If so, rationality as a pure product of mind will make the most evolved rationalist a loser
I'm having real trouble parsing this. Are you saying evolution will make us irrational? Or that rationality is incompatible with lovecraftian puppetry? Or something completely different?
Here, in the Amazon jungle, lie our real origins.
You ... realize humans didn't evolve in the Amazon, right?
And you see here that this biosphere is a product of chaos. We are a product of chaos, not order. It seems to me that we are the flow of order lifting up from chaos. So, for long-term winning, those who best represent this flow will have bad times, because the forces of chaos are the strongest. So the winners now are still representatives of chaos, less evolved...
I'm not sure I'd characterize the natural world as "chaotic" as such. Complex, sometimes, sure, but it follows some pretty simple rules, and when we deduce these rules we can manipulate them.
But it seems to me that above the chaotic biosphere I see the Cosmos in an ordered state. So I suspect that this Cosmos is the "natural" super-system sending bits of information and modelling this terrestrial chaos into a future state of order
The universe is definitely ordered, but don't forget evolution can produce some pretty "designed" looking structures.
What do you think...
I think you sound kind of like a crank, to be honest with you. You seem to be treating "order" and "chaos" more like elemental forces or something, and generally sound like you've got problems with magical thinking. That said, I had some trouble understanding bits of what you wrote, so it's possible I'm inadvertently addressing a strawman version of your claims. Tell me, are you a native English speaker?
Replies from: TheMatrixDNA↑ comment by TheMatrixDNA · 2013-04-14T10:38:06.679Z · LW(p) · GW(p)
Thanks, MugaSofer, for your constructive reply. No, I am not a native English speaker, and my brain was hard-wired in the savage jungle here, so I think this is a good opportunity for me to debate our different experiences and world views. I hope it is interesting for you too.
You said: " Well, our personalities, memories and so on can be affected by interfering with the brain, and it certainly looks like it's doing some sort of information processing (as far as we can tell), so ... seems unlikely, to be honest."
Yes, these things (personality, memories, etc.) compose our "state of being", and they are merely a product of brains/nature. But we have a real phenomenon where we watch the emergence of consciousness that is not a product of brains: the embryo. There is no natural architecture able to be conscious of its own existence, and neither are brains alone. So, where does the conscious state of embryos come from? From a superior hierarchical system that exists beyond his universe (the womb), and this system is called the "human species". So the probability is not zero that the human mind is a product of a hidden superior natural system whose bits of information are invading our immediate world and being aggregated to our synapses, besides the possibility that it was encrypted into our genes (if my models about Matrix/DNA are right).
You said: "Are you saying evolution will make us irrational? Or that rationality is incompatible with lovecraftian puppetry? Or something completely different?"
No, evolution will make us more suited to the real natural world. But, due to the alternation between chaos and order, and due to our origins coming from chaos, the flow of order (which is the basis for rationality) is the infant and weaker force just now. Chaos is dying, order is growing, but for now chaos is still the strongest, so irrationality and randomness are the winners, for a while.
You said: " You ... realize human's didn't evolve in the Amazon, right?"
I don't understand your question. Being still virgin and untouched, the elements of the Amazon's hidden niches are witnesses of life's origins. And we see chaos here. So our origins came from the terrestrial chaotic state of Nature, which came from the ordered state of the Cosmos... Cyclic alternations.
You said: " I'm not sure I'd characterize the natural world as "chaotic" as such. Complex, sometimes, sure, but it follows some pretty simple rules, and when we deduce these rules we can manipulate them."
The natural world is the Universe, not this terrestrial biosphere alone. This biosphere is a kind of disturbance, a noise, in relation to the ordered state of the Cosmos. The biosphere is the product of an entropic process, like the radiation of the sun. So the disturbance is corrected by the ordered Cosmos, from which comes the emergence of those rules you are talking about. The curious thing is that humans are the carriers of those rules; we are bringing order to our savage environment.
You said: " The universe is definitely ordered, but don't forget evolution can produce some pretty "designed" looking structures."
The Universe, as a conglomerate of galaxies, seems to be mass with no shape, not a system. We don't know if there is a nucleus, relations among parts, etc. We can't know if it is ordered or chaotic. Evolution is the result of a flow of energy moving inside this Universe, just as any fetus is under evolution due to a genetic flow producing more designed-looking structures. The source of this "evolution" is a natural system (the human species) living beyond the fetus's universe (the womb). This is the only real natural parameter we have for theories about the universe.
You said: "You seem to be treating "order" and "chaos" more like elemental forces or something, and generally sound like you've got problems with magical thinking."
It is not magical thinking; it is the normal natural chain of causes and effects. Every system that reaches an ordered state is attacked by entropy, which produces chaos, from which order lifts up again, but each cycle is more complex than its ancestor cycles. In chaotic states, like our biosphere, generations of empty minds are more likely to be winners, while generations of reasonable minds must be losers in the short term and the final winners in the long term. But maybe the jungle is teaching me everything wrong. What do you think?
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-14T15:44:12.873Z · LW(p) · GW(p)
Thanks, MugaSofer, for your constructive reply. No, I am not a native English speaker, and my brain was hard-wired in the savage jungle here, so I think this is a good opportunity for me to debate our different experiences and world views. I hope it is interesting for you too.
It certainly is that.
Yes, these things (personality, memories, etc.) compose our "state of being", and they are merely a product of brains/nature.
So ... what's left? Doesn't that explain everything we mean by "mind"?
But we have a real phenomenon where we watch the emergence of consciousness that is not a product of brains: the embryo. There is no natural architecture able to be conscious of its own existence, and neither are brains alone. So, where does the conscious state of embryos come from? From a superior hierarchical system that exists beyond his universe (the womb), and this system is called the "human species".
So the probability is not zero that the human mind is a product of a hidden superior natural system whose bits of information are invading our immediate world and being aggregated to our synapses, besides the possibility that it was encrypted into our genes (if my models about Matrix/DNA are right).
I've replied to this assertion elsewhere; hope I got the interpretation right.
No, evolution will make us more suited to the real natural world. But, due to the alternation between chaos and order, and due to our origins coming from chaos, the flow of order (which is the basis for rationality) is the infant and weaker force just now. Chaos is dying, order is growing, but for now chaos is still the strongest, so irrationality and randomness are the winners, for a while.
You know, I'm not sure what you mean by "chaos". If it's just randomness, rationality can tell you how to choose optimally using probabilities; perhaps that's not what you mean? Is it complexity?
I don't understand your question. Being still virgin and untouched, the elements of the Amazon's hidden niches are witnesses of life's origins. And we see chaos here. So our origins came from the terrestrial chaotic state of Nature, which came from the ordered state of the Cosmos... Cyclic alternations.
Oh, I think I get it; the Amazon is emblematic of Earth before civilization, right? The ancestral environment. Which is, naturally, where we evolved.
The natural world is the Universe, not this terrestrial biosphere alone. This biosphere is a kind of disturbance, a noise, in relation to the ordered state of the Cosmos. The biosphere is the product of an entropic process, like the radiation of the sun. So the disturbance is corrected by the ordered Cosmos, from which comes the emergence of those rules you are talking about. The curious thing is that humans are the carriers of those rules; we are bringing order to our savage environment.
But even the biosphere follows laws, even if sometimes the results are so complex we have trouble discerning them.
The Universe, as a conglomerate of galaxies, seems to be mass with no shape, not a system. We don't know if there is a nucleus, relations among parts, etc. We can't know if it is ordered or chaotic. Evolution is the result of a flow of energy moving inside this Universe, just as any fetus is under evolution due to a genetic flow producing more designed-looking structures. The source of this "evolution" is a natural system (the human species) living beyond the fetus's universe (the womb). This is the only real natural parameter we have for theories about the universe.
Sorry; by "evolution" I meant natural selection. You know, Darwinism?
It is not magical thinking; it is the normal natural chain of causes and effects. Every system that reaches an ordered state is attacked by entropy, which produces chaos, from which order lifts up again, but each cycle is more complex than its ancestor cycles. In chaotic states, like our biosphere, generations of empty minds are more likely to be winners, while generations of reasonable minds must be losers in the short term and the final winners in the long term. But maybe the jungle is teaching me everything wrong. What do you think?
Well, I understand physically entropy is always increasing, and replicators tend to overrun available resources and improve via selection, but I'm not clear on these "cycles".
comment by timtyler · 2009-04-04T12:44:07.685Z · LW(p) · GW(p)
Re: "First, foremost, fundamentally, above all else: Rational agents should WIN."
In an attempt to summarise the objections, there seem to be two fairly-fundamental problems:
Rational agents try. They cannot necessarily win: winning is an outcome, not an action;
"Winning" is a poor synonym for "increasing utility": sometimes agents should minimise their losses.
"Rationalists maximise expected utility" would be a less controversial formulation.
Replies from: ciphergoth, SoullessAutomaton, MarkusRamikin↑ comment by Paul Crowley (ciphergoth) · 2009-04-04T13:13:32.543Z · LW(p) · GW(p)
I agree with your two problems, but the problem with your alternative and so many others presented here is that it doesn't so strongly speak to the distinction which EY means to draw, between wanting to be seen to have followed the forms for maximising expected utility and actually seeking to maximise expected utility.
Also, of course, one who at each moment makes the decision that maximises expected future utility defects against Clippy in both Prisoner's Dilemma and Parfit's Hitchhiker scenarios, and arguably two-boxes against Omega, and by EY's definition that counts as "not winning" because of the negative consequences of Clippy/Omega knowing that that's what we do.
Replies from: timtyler, timtyler, Normal_Anomaly↑ comment by timtyler · 2009-04-04T13:42:00.757Z · LW(p) · GW(p)
Re: "it doesn't so strongly speak to the distinction which EY means to draw"
I wasn't trying to do that. It seems like a non-trivial concept. Is it important to try and capture that distinction in a slogan?
Re: "one who at each moment makes the decision that maximises expected future utility defects"
Expected utility maximising agents don't have commitment mechanisms, and can't be trusted to make promises? I am sceptical. In my view, you can express practically any agent as an expected utility maximiser. It seems easy enough to imagine commitment mechanisms. I don't see where the problem is.
Replies from: Technologos↑ comment by Technologos · 2009-12-21T11:08:46.984Z · LW(p) · GW(p)
In the Least Convenient Possible World, I imagine nobody has a commitment mechanism in the Prisoner's Dilemma.
Replies from: timtyler↑ comment by timtyler · 2009-12-21T17:17:19.965Z · LW(p) · GW(p)
You can't claim commitment mechanisms are not possible when in fact they evidently are. "Always cooperate" is an example of a strategy which is committed to cooperate in the prisoner's dilemma.
Replies from: Technologos↑ comment by Technologos · 2009-12-21T19:14:30.256Z · LW(p) · GW(p)
"Commitment mechanism" typically means some way to impose a cost on a party for breaking the commitment, otherwise it is, in the game theorist's parlance, "cheap talk" instead. In the one-shot PD, there is by definition no commitment mechanism, and it was in this LCPW that Eliezer's decision theories are frequently tested.
You're talking about the repeated PD with "always cooperate," rather than the one-shot version, which was the scenario in which we found ourselves with Clippy. Please understand--I'm not saying EU-maxing agents do not have commitment mechanisms in general, just that the PD was formulated expressly to show the breakdown of cooperation under certain circumstances.
Regardless, always cooperate definitely does not maximize expected utility in the vast majority of environments. Indeed, it is not part of literally any stable equilibrium in a finite-time RPD. But more to the point, AC is only "committed" in the sense that, if given no opportunities afterward to make decisions, it will appear to produce committed behavior. It is unstable precisely because it requires no further decision points, where the RPD (in which it is played) has them every round.
Replies from: timtyler↑ comment by timtyler · 2009-12-21T19:54:53.114Z · LW(p) · GW(p)
You and I are using different definitions of "commitment mechanism", then.
The idea I am talking about is demonstrating to the other party that you are a nice, cooperative agent. For example by showing the other agent your source code. That concept has nothing to do with crime and punishment.
The type of commitment mechanism I am talking about is one that convincingly demonstrates that you are committed to a particular course of action under some specified circumstances. That includes commitment via threat of retribution - but also includes some other things.
AC's stability is tangential to my point. If you want to complain that AC is unstable, perhaps consider TFT instead. That is exactly the same as AC on the first round.
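To make the strategies being discussed concrete, a toy repeated-PD simulation with the standard payoff values assumed (T=5, R=3, P=1, S=0):

```python
# Standard Prisoner's Dilemma payoffs (assumed): (row, column) scores per round.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(opponent_moves):
    return "C"

def always_defect(opponent_moves):
    return "D"

def tit_for_tat(opponent_moves):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opponent_moves else opponent_moves[-1]

def play(strategy_a, strategy_b, rounds=10):
    seen_by_a, seen_by_b = [], []  # each list records the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(seen_by_a), strategy_b(seen_by_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(always_cooperate, always_defect))  # (0, 50): AC is exploited every round
print(play(tit_for_tat, always_defect))       # (9, 14): TFT loses only the first round
print(play(tit_for_tat, tit_for_tat))         # (30, 30): mutual cooperation
```

This only illustrates the repeated game; in the one-shot PD, as noted above, a strategy's "commitment" has to be demonstrated some other way, such as by exposing one's source code.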
↑ comment by timtyler · 2009-04-04T13:40:57.501Z · LW(p) · GW(p)
Re: "it doesn't so strongly speak to the distinction which EY means to draw"
I wasn't trying to do that. It seems like a non-trivial concept. Is it important to try to capture that idea in a slogan?
Re: "one who at each moment makes the decision that maximises expected future utility defects"
Expected utility maximising agents don't have commitment mechanisms, and can't be trusted to make promises? I am sceptical. In my view, you can express practically any agent as an expected utility maximiser. It seems easy enough to imagine commitment mechanisms. I don't see where the problem is.
↑ comment by Normal_Anomaly · 2013-06-08T20:59:49.662Z · LW(p) · GW(p)
Also, of course, one who at each moment makes the decision that maximises expected future utility defects against Clippy in both Prisoner's Dilemma and Parfit's Hitchhiker scenarios, and arguably two-boxes against Omega, and by EY's definition that counts as "not winning" because of the negative consequences of Clippy/Omega knowing that that's what we do.
I think I'm misunderstanding you here because this looks like a contradiction. Why does making the decision that maximizes expected utility necessarily have negative consequences? It sounds like you're working under a decision theory that involves preference reversals.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2013-06-09T09:44:25.856Z · LW(p) · GW(p)
I'm talking about the difference between CDT, which stiffs the lift-giver in Parfit's Hitchhiker and so never gets a lift, and other decision theories.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2013-06-10T22:40:07.514Z · LW(p) · GW(p)
Oh, I see. I thought you were saying an optimal decision theory stiffed the lift-giver.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2013-06-11T09:58:45.207Z · LW(p) · GW(p)
I hope I've become clearer in the four years since I wrote that!
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2013-07-03T20:02:23.913Z · LW(p) · GW(p)
. . . did not notice the date-stamp. Good thing thread necros are allowed here.
↑ comment by SoullessAutomaton · 2009-04-04T13:06:06.874Z · LW(p) · GW(p)
"Rationalists maximise expected utility" would be a less controversial formulation.
But, alas, less catchy.
Replies from: timtyler↑ comment by timtyler · 2009-04-04T13:22:38.863Z · LW(p) · GW(p)
As contagious memes go, "rationalists should win" seems to be rather pathogenic to me. A proposed rationalist slogan shouldn't need so many footnotes. For the sake of minds everywhere, I think it would be best to try to kill it off in its early stages.
↑ comment by MarkusRamikin · 2012-05-05T10:19:29.086Z · LW(p) · GW(p)
I much prefer "rationalists should win" because it's simple, accessible language. Makes this article more powerful than it would otherwise be. Everyone gets winning; how many people find terms like expected utility maximisation meaningful on a gut level?
Replies from: timtyler↑ comment by timtyler · 2012-05-05T11:12:43.218Z · LW(p) · GW(p)
Charlie Sheen rationality.
comment by Z_M_Davis · 2009-04-04T01:46:35.298Z · LW(p) · GW(p)
Rationality seems like a good name for the obvious ideal that you should believe things that are true and use this true knowledge to achieve your goals. Because social organisms are weird in ways whose details are beyond the scope of this comment, striving to be more rational might not pay off for a human seeking to move up in a human world---but aside from this minor detail relating to an extremely pathological case, it's still probably a good idea.
Replies from: knb↑ comment by knb · 2009-04-04T05:33:55.741Z · LW(p) · GW(p)
Hmmm. Unless you are suggesting a different definition for rationality, I think I disagree. If an atheist has the goal of gaining business contacts (or something) and he can further this goal by joining a church, and impersonating the irrational behaviors he sees, he isn't being irrational. While behaviors that tend to have their origins in irrational thought are sometimes rewarded by human society, the irrationality itself never is. I think becoming more rational will help a person move up in a human status hierarchy, if that is the rationalist's goal. I think we have this stereotyped idea of rationalists as Asperger's-afflicted know-it-alls who are unable to deal with irrational humans. It simply doesn't have to be that way.
Replies from: Z_M_Davis↑ comment by Z_M_Davis · 2009-04-05T02:14:36.467Z · LW(p) · GW(p)
I denotatively agree with your conclusion, but I think that many if not most aspiring rationalists are incapable of that level of Machiavellianism. Suppose that your typical human cares about both social status and being forthright, and that there are social penalties for making certain true but unpopular statements. Striving for rationality in this situation could very well mean having to choose between popularity and honesty, whereas the irrationalist can have her cake and eat it, too. So yes, some may choose popularity---but you see, it is a choice.
comment by Alicorn · 2009-04-03T14:58:48.294Z · LW(p) · GW(p)
Rationalists are the ones who win when things are fair, or when things are unfair randomly over an extended period. Rationality is an advantage, but it is not the only advantage, not the supreme advantage, not an advantage at all in some conceivable situations, and cannot reasonably be expected to produce consistent winning when things are unfair non-randomly. However, it is a cultivable advantage, which is among the things that makes it interesting to talk about.
A rationalist might be unfortunate enough that (s)he does not do well, but ceteris paribus, (s)he will do better. Maybe that could be the slogan - "rationalists do better"? With the implied parenthetical "(than they would do if they were not rationalists, with the caveat that you can concoct unlikely situations in which rationality is an impediment to some values of "doing well")".
Replies from: JulianMorrison, SoullessAutomaton↑ comment by JulianMorrison · 2009-04-03T15:50:43.179Z · LW(p) · GW(p)
"You can't reliably do better than rationality in a non-pathological universe" is probably closer to the math.
Replies from: grobstein↑ comment by grobstein · 2009-04-03T17:34:31.234Z · LW(p) · GW(p)
It's impossible to add substance to "non-pathological universe." I suspect circularity: a non-pathological universe is one that rewards rationality; rationality is the disposition that lets you win in a nonpathological universe.
You need to attempt to define terms to avoid these traps.
Replies from: JulianMorrison, grobstein↑ comment by JulianMorrison · 2009-04-03T19:32:08.629Z · LW(p) · GW(p)
Pathological universes are ones like: where there is no order and the right answer is randomly placed. Or where the facts are maliciously arranged to entrap in a recursive red herring where the simplest well-supported answer is always wrong, even after trying to out-think the malice. Or where the whole universe is one flawless red herring ("God put the fossils there to test your faith").
"No free lunch" demands they be mathematically conceivable. But to assert that the real universe behaves like this is to go mad.
Replies from: randallsquared↑ comment by randallsquared · 2009-04-03T20:04:30.195Z · LW(p) · GW(p)
Since we learn reason from the universe we're in, if we were in a universe you're referring to as "pathological", we (well, sentients, if any) would have learned a method of arriving at conclusions which matched that. Likewise, since the universe produced math, I don't think it has any meaning to talk of whether universes with different fundamental rules are "mathematically conceivable".
Replies from: JulianMorrison↑ comment by JulianMorrison · 2009-04-03T21:17:06.416Z · LW(p) · GW(p)
http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization
No search algorithm beats random picking in the totally general case. This implies the totally general case must include an equal balance of pathology and sanity. Intuitively, a problem could be structured so every good decision gives a bad result.
Edit: this post gives a perfect example of a pathological problem: there is only one decision to be made, a Bayesian loses, a random picker gets it right half the time and an anti-Bayesian wins.
However we seem to be living in a sane universe.
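A toy version of the pathological case described above, assuming an adversary that always places the right answer opposite to a simple Bayesian-style best guess (all names illustrative):

```python
import random

def bayesian_guess(heads_seen, tails_seen):
    # Guess the outcome that has been more frequent so far (maximum-likelihood).
    return "H" if heads_seen >= tails_seen else "T"

def run(rounds=1000):
    scores = {"bayesian": 0, "random": 0, "anti-bayesian": 0}
    heads_seen = tails_seen = 0
    for _ in range(rounds):
        guess = bayesian_guess(heads_seen, tails_seen)
        truth = "T" if guess == "H" else "H"  # the adversary inverts the guess
        scores["bayesian"] += (guess == truth)              # always wrong
        scores["anti-bayesian"] += (guess != truth)         # always right
        scores["random"] += (random.choice("HT") == truth)  # right about half the time
        heads_seen += (truth == "H")
        tails_seen += (truth == "T")
    return scores

print(run())  # e.g. {'bayesian': 0, 'random': ~500, 'anti-bayesian': 1000}
```

The environment here is defined in terms of the learner's own guesses, which is exactly the kind of maliciously arranged universe the comment above calls pathological.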
↑ comment by SoullessAutomaton · 2009-04-03T15:29:47.748Z · LW(p) · GW(p)
Maybe that could be the slogan - "rationalists do better"? With the implied parenthetical "(than they would do if they were not rationalists, with the caveat that you can concoct unlikely situations in which rationality is an impediment to some values of "doing well")".
By parallel construction with the epistemic rationality of the site's name, perhaps "rationalists make fewer mistakes"?
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-03T16:16:12.654Z · LW(p) · GW(p)
I guess when I look over the comments, the problem with the phraseology is that people seem to inevitably begin debating over whether rationalists win and asking how much they win - the properties of a fixed sort of creature, the "rationalist" - rather than saying, "What wins systematically? Let us define rationality accordingly."
Not sure what sort of catchphrase would solve this.
Replies from: AlexU, astray↑ comment by AlexU · 2009-04-03T16:21:28.748Z · LW(p) · GW(p)
Yes. Rationalism shouldn't be seen as a bag of discrete tricks, but rather, as the means for achieving any given end -- what it takes to do something you want to do. The particulars will vary, of course, depending on the end in question, but the rational individual should do better at figuring them out.
On a side note, I'm not sure coming up with better slogans, catchphrases, and neologisms is the right thing to be aiming for.
Replies from: Eliezer_Yudkowsky, gwern↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-03T16:34:52.130Z · LW(p) · GW(p)
Do not underestimate the power of poetry.
Replies from: Annoyance↑ comment by astray · 2009-04-03T16:29:15.915Z · LW(p) · GW(p)
It runs into problems elsewhere, but what about "Rationalism should win" ?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-03T16:34:29.749Z · LW(p) · GW(p)
Well, that's wrong, but thinking about why it's wrong leads me to realize that maybe "Rationality should win" would have been a better move.
But I did also want to convey the idea that aspiring to be a rationalist means aspiring to be stronger, something more formidable than a debating style... well, I guess "rationality should win" conveys a bit of that too.
comment by JamesAndrix · 2009-04-03T18:50:22.792Z · LW(p) · GW(p)
It seems to me that the disagreement isn't so much about winning as the expectation.
In fact I don't really agree with this division into winning vs. belief modes of rationality.
Both approaches are trying to maximize their expected payout. Eliezer's approach has a wider horizon of what it considers when figuring out what the universe is like.
The standard approach says that, since the contents of the boxes are already determined at the time of the choice, taking both will always put you $1000 ahead.
Eliezer looks (I think) out to the most likely final outcomes. (or looks back at how the chain of causality of one's decision is commingled with the chain of causality of Omega's decision. )
I think the flaw in the standard approach is not 'not winning' but a false belief about the relationship between the boxes and your choices (the belief that there isn't any). Once you have the right answer, making the choice that wins is obvious.
The way we would know that the standard approach is the wrong one is by looking at results. That a certain set of choices consistently wins isn't evidence that it is rational, it is evidence that it wins. Believing that it wins is rational.
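For concreteness, the expected-value arithmetic under Omega's stated 99% accuracy, treating dollars as utility - this is the straightforward evidential calculation, which the standard approach rejects on causal grounds:

```python
ACCURACY = 0.99  # Omega's stated track record

# One-box: with probability ACCURACY, Omega predicted one-boxing and filled box B.
ev_one_box = ACCURACY * 1_000_000

# Two-box: you always get box A's $1000; box B is full only if Omega mispredicted.
ev_two_box = 1_000 + (1 - ACCURACY) * 1_000_000

print(ev_one_box, ev_two_box)  # 990000.0 vs 11000.0
```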
So maybe: "Rationality is learning how to win"
Replies from: CronoDAS↑ comment by CronoDAS · 2009-04-03T20:00:42.894Z · LW(p) · GW(p)
"Rationality is learning how to win"
I like that.
Replies from: Cameron_Taylor↑ comment by Cameron_Taylor · 2009-04-16T21:29:00.917Z · LW(p) · GW(p)
I would like it too if it wasn't for the fact that it, well, isn't.
comment by grobstein · 2009-04-03T14:46:50.769Z · LW(p) · GW(p)
I don't think I buy this for Newcomb-like problems. Consider Omega who says, "There will be $1M in Box B IFF you are irrational."
Rationality as winning is probably subject to a whole family of Russell's-Paradox-type problems like that. I suppose I'm not sure there's a better notion of rationality.
Replies from: MBlume, gjm↑ comment by MBlume · 2009-04-03T14:51:50.034Z · LW(p) · GW(p)
What you give is far harder than a Newcomb-like problem. In Newcomb-like problems, Omega rewards your decisions, he isn't looking at how you reach them. This leaves you free to optimize those decisions.
Replies from: grobstein↑ comment by grobstein · 2009-04-03T15:57:10.621Z · LW(p) · GW(p)
What you give is far harder than a Newcomb-like problem. In Newcomb-like problems, Omega rewards your decisions, he isn't looking at how you reach them.
You misunderstand. In my variant, Omega is also not looking at how you reach your decision. Rather, he is looking at you beforehand -- "scanning your brain", if you will -- and evaluating the kind of person you are (i.e., how you "would" behave). This, along with the choice you make, determines your later reward.
In the classical problem, (unless you just assume backwards causation,) what Omega is doing is assessing the kind of person you are before you've physically indicated your choice. You're rewarded IFF you're the kind of person who would choose only box B.
My variant is exactly symmetrical: he assesses whether you are the kind of person who is rational, and responds as I outlined.
Replies from: Technologos↑ comment by Technologos · 2009-04-03T16:08:37.572Z · LW(p) · GW(p)
We have such an Omega: we just refer to it differently.
After all, we are used to treating our genes and our environments as definite influences on our ability to Win. Taller people tend to make more money; Omega says "there will be $1mil in box B if you have alleles for height."
If Omega makes decisions based on properties of the agent, and not on the decisions either made or predicted to be made by the agent, then Omega is no different from, well, a lot of the world.
Rationality, then, might be better redefined under these observations as "making the decisions that Win whenever such decisions actually affect one's probability of Winning," though I prefer Eliezer's more general rules plus the tacit understanding that we are only including situations where decisions make a difference.
Replies from: grobstein, grobstein↑ comment by grobstein · 2009-04-03T16:22:57.053Z · LW(p) · GW(p)
Quoting myself:
(though I don't see how you identify any distinction between "properties of the agent" and "decisions . . . predicted to be made by the agent" or why you care about it).
I'll go further and say this distinction doesn't matter unless you assume that Newcomb's problem is a time paradox or some other kind of backwards causation.
This is all tangential, though, I think.
↑ comment by grobstein · 2009-04-03T16:21:11.815Z · LW(p) · GW(p)
Yes, all well and good (though I don't see how you identify any distinction between "properties of the agent" and "decisions . . . predicted to be made by the agent" or why you care about it). My point is that a concept of rationality-as-winning can't have a definite extension say across the domain of agents, because of the existence of Russell's-Paradox problems like the one I identified.
This is perfectly robust to the point that weird and seemingly arbitrary properties are rewarded by the game known as the universe. Your proposed redefinition may actually disagree with EY's theory of Newcomb's problem. After all, your decision can't empty box B, since the contents of box B are determinate by the time you make your decision.
Replies from: major↑ comment by major · 2009-04-03T18:06:14.310Z · LW(p) · GW(p)
After all, your decision can't empty box B, since the contents of box B are determinate by the time you make your decision.
Hello. My name is Omega. Until recently I went around claiming to be all-knowing/psychic/whatever, but now I understand lying is Wrong, so I'm turning over a new leaf. I'd like to offer you a game.
Here are two boxes. Box A contains $1,000, box B contains $1,000,000. Both boxes are covered by a touch-sensitive layer. If you choose box B only (please signal that by touching box B), it will send out a radio signal to box A, which will promptly disintegrate. If you choose both boxes (please signal that by touching box A first), a radio signal will be sent out to box B, which will disintegrate its contents, so opening it will reveal an empty box.
(I got the disintegrating technology from the wreck of a UFO that crashed into my barn, but that's not relevant here.)
I'm afraid that if I or my gadgets detect any attempt to tamper with the operation of my boxes, I will be forced to disqualify you.
In case there is doubt, this is the same game I used to offer back in my deceitful days. The difference is, now the player knows the rules are enforced by cold hard electronics, so there's no temptation to try and outsmart anybody.
So, what will it be?
Replies from: grobstein, Vladimir_Nesov↑ comment by grobstein · 2009-04-03T18:40:18.070Z · LW(p) · GW(p)
Yes, you are changing the hypo. Your Omega dummy says that it is the same game as Newcomb's problem, but it's not. As VN notes, it may be equivalent to the version of Newcomb's problem that assumes time travel, but this is not the classical (or an interesting) statement of the problem.
↑ comment by Vladimir_Nesov · 2009-04-03T18:29:20.213Z · LW(p) · GW(p)
What is your point? You seem to be giving a metaphor for solving the problem by imagining that your action has a direct consequence of changing the past (and as a result, contents of the box in the present). More about that in this comment.
Replies from: major↑ comment by major · 2009-04-03T22:42:25.950Z · LW(p) · GW(p)
Naive argument coming up.
How Omega decides what to predict, or what makes its stated condition for B (aka the result of the "prediction") come true, is not relevant. Ignoring the data that says it's always/almost always correct, however, seems ... not right. Any decision must be made with the understanding that Omega is most likely to predict it. You can't outsmart it by failing to update its expected state of mind at the last second. The moment you decide to two-box is the moment Omega predicted, when it chose to empty box B.
Consider this:
Andy: "Sure, one box seems like the good choice, because Omega would take the million away otherwise. OK. ... Now that the boxes are in front of me, I'm thinking I should take both. Because, you know, two is better than one. And it's already decided, so my choice won't change anything. Both boxes."
Barry: "Sure, one box seems like the good choice, because Omega would take the million away otherwise. OK. ... Now that the boxes are in front of me, I'm thinking I should take both. Because, you know, two is better than one. Of course the outcome still depends on what Omega predicted. Say I choose both boxes. So if Omega's prediction is correct this time, I will find an empty B. But maybe Omega was wrong THIS time. Sure, and maybe THIS time I will also win the lottery. How it would have known is not relevant. The fact that O already acted on it's prediction doesn't make it more likely to be wrong. Really, what is the dilemma here? One box."
Ok, I don't expect that I'm the first person to say all this. But then, I wouldn't have expected anybody to two-box, either.
Replies from: HughRistik↑ comment by HughRistik · 2009-04-03T23:01:43.780Z · LW(p) · GW(p)
major said:
Ignoring the data that says it's always/almost always correct, however, seems ... not right.
You're not the only person to wonder this. Either I'm missing something, or two-boxers just fail at induction.
I have to wonder how two-boxers would do on the "Hot Stove Problem."
In case you guys haven't heard of such a major problem in philosophy, I will briefly explain the Hot Stove Problem:
You have touched a hot stove 100 times. 99 times you have been burned. Nothing has changed about the stove that you know about. Do you touch it again?
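For what it's worth, the induction can be made numerical; a minimal sketch using Laplace's rule of succession (the uniform prior is an added assumption, not part of the problem statement):

```python
# 100 touches, 99 burns: posterior expected probability of being burned next time,
# using Laplace's rule of succession with a uniform prior over the burn rate.
burns, trials = 99, 100
p_burn_next = (burns + 1) / (trials + 2)
print(p_burn_next)  # ~0.98 -- the same kind of track-record reasoning that applies to Omega's 99/100
```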
Replies from: thomblake↑ comment by gjm · 2009-04-03T19:01:10.996Z · LW(p) · GW(p)
If one defines rationality in some way that isn't about winning, your example shows that rationalists-in-such-a-sense might not win.
If one defines rationality as actually winning, your example shows that there are things that even Omega cannot do because they involve logical contradiction.
If one defines rationality as something like "expected winning given one's model of the universe" (for quibbles, see below), your example shows that you can't coherently carry around a model of the universe that includes a superbeing who deliberately acts so as to invalidate that model.
I find all three of these things rather unsurprising.
The traditional form of Newcomb's problem doesn't involve a superbeing deliberately acting so as to invalidate your model of the universe. That seems like a big enough difference from your version to invalidate inferences of the form "there's no such thing as acting rationally in grobstein's version of Newcomb's problem; therefore it doesn't make sense to use any version of Newcomb's problem in forming one's ideas about what constitutes acting rationally".
I think the third definition is pretty much what Eliezer is getting at when he declares that rationalists/rationality should win. Tightening it up a bit, I think we get something like this: rationality is a strategy S such that at each moment, acting as S tells you to act -- given (1) your beliefs about the universe at that point and (2) your intention of following S at all times -- maximizes your net utility (calculated in whatever way you prefer; that is mostly not a question of rationality). This isn't quite a definition, because there might turn out to be multiple such strategies, especially for people whose initial beliefs about the universe are sufficiently crazy. But if you add some condition to the effect that S and your initial beliefs shouldn't be too unlike what's generally considered (respectively) rational and right now, there might well be a unique solution to the equations. And it seems to me that what your example shows about this definition is simply that you can't consistently expect Omega to act in a way that falsifies your beliefs and/or invalidates your strategies for acting. Which is (to me) not surprising, and not a reason for defining rationality in this sort of way.
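A rough formalization of that tightened definition, in notation of my own (treat it as a sketch, not gjm's exact proposal):

```latex
S^{\ast} \;=\; \operatorname*{arg\,max}_{S}\;
  \mathbb{E}\!\left[\, U(\text{outcome}) \;\middle|\; \text{beliefs } B,\ \text{the agent follows } S \text{ at all times} \,\right]
```

The self-reference (the agent's commitment to S appears inside the expectation) is what makes it incoherent to also carry a model of an Omega who deliberately acts to invalidate whichever S you adopt.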
Replies from: grobstein↑ comment by grobstein · 2009-04-03T19:55:42.898Z · LW(p) · GW(p)
What is it, pray tell, that Omega cannot do?
Can he not scan your brain and determine what strategy you are following? That would be odd, because this is no stronger than the original Newcomb problem and does not seem to contain any logical impossibilities.
Can he not compute the strategy, S, with the property "that at each moment, acting as S tells you to act -- given (1) your beliefs about the universe at that point and (2) your intention of following S at all times -- maximizes your net utility [over all time]?" That would be very odd, since you seem to believe a regular person can compute S. If you can do it, why not Omega? (NB, no, it doesn't help to define an approximation of S and use that. If it's rational, Omega will punish you for it. If it's not, why are you doing it?)
Can he not compare your strategy to S, given that he knows the value of each? That seems odd, because a pushdown automaton could make the comparison. Do you require Omega to be weaker than a pushdown automaton?
No?
Then is it possible, maybe, that the problem is in the definition of S?
Replies from: gjm↑ comment by gjm · 2009-04-03T23:45:01.960Z · LW(p) · GW(p)
What is it, pray tell, that Omega cannot do?
Well, for instance, he cannot make 1+1=3. And, if one defines rationality as actually winning then he cannot act in such a way that rational people lose. This is perfectly obvious; and, in case you have misunderstood what I wrote (as it looks like you have), that is the only thing I said that Omega cannot do.
In the discussion of strategy S, my claim was not about what Omega can do but about what you (a person attempting to implement such a strategy) can consistently include in your model of the universe. If you are an S-rational agent, then Omega may decide to screw you over, in which case you lose; that's OK (as far as the notion of rationality goes; it's too bad for you) because S doesn't purport to guarantee that you don't lose.
What S does purport to do is to arrange that, in so far as the universe obeys your (incomplete, probabilistic, ...) model of it, you win on average. Omega's malfeasance is only a problem for this if it's included in your model. Which it can't be. Hence:
what your example shows [...] is that you can't consistently expect Omega to act in a way that falsifies your beliefs and/or invalidates your strategies for acting.
(Actually, I think that's not quite right. You could probably consistently expect that, provided your expectations about how he's going to to it were vague enough.)
I did not claim, nor do I believe, that a regular person can compute a perfectly rational strategy in the sense I described. Nor do I believe that a regular person can play chess without making any mistakes. None the less, there is such a thing as playing chess well; and there is such a thing as being (imperfectly, but better than one might be) rational. Even with a definition of the sort Eliezer likes.
comment by infotropism · 2009-04-03T20:37:58.825Z · LW(p) · GW(p)
The rationality that doesn't secure your wish isn't the true rationality.
Winning has no fixed form. You'll do whatever is needed to succeed, however original or far fetched it would sound. How it sounds is irrelevant, how it works is the crux.
And if at first what you tried didn't work, then you'll learn, adapt, and try again, making no pause for excuses. If you merely want to succeed, you'll be firm as a rock, relentless in your attempts to find the path to success.
And if your winning didn't go as smoothly or as well as you wanted or thought it should in general, then learn, adapt, and try again. Think outside the box; recurse on winning itself. Eventually, you should refine and sharpen your methods into a tree, from general to specialized.
That tree will have a trunk of general cases and methods used to solve those, and any case that lies ahead, upwards on the tree; and the higher you go, the more specialized the method, the rarer the case it solves. The tree isn't fixed either, it can and will grow and change.
Replies from: timtyler, Marshall↑ comment by timtyler · 2009-04-04T08:02:21.558Z · LW(p) · GW(p)
Re: The rationality that doesn't secure your wish isn't the true rationality.
Again with the example of handicap chess. You start with no knight. You wish to win. Actually you lose. Does that mean you were behaving irrationally? No, of course not! It is not whether you win or lose, but how you play the game.
↑ comment by Marshall · 2009-04-04T07:22:15.980Z · LW(p) · GW(p)
Yes!
Rationality is Messy, Uncertain and Fumbling.
The explanation afterwards looks Neat, Certain and Cut 'n Dried.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-04T12:02:42.388Z · LW(p) · GW(p)
Say "Rationalists are" instead of "Rationality is" and I'll agree with that.
Replies from: Marshall↑ comment by Marshall · 2009-04-04T17:25:37.894Z · LW(p) · GW(p)
I am wondering, if this difference makes a difference.
"Rationality" is of course a nominalisation - you can't put it in a wheelbarrow - so it is an abstraction that can mean many things. "Rationalists" are more concrete.
However, the activity of rationality depends on a carrier (the rationalist). No carrier, no rationality. The activity of rationality is messy, uncertain and fumbling. Would non-human carriers of rationality be less messy? Maybe they would be quicker and the quickness would disguise the messiness. Maybe they would go down fewer blind alleys, but surely they are as blind as us.
Thus I do not see a difference that makes a difference.
comment by marc · 2009-04-03T17:08:29.513Z · LW(p) · GW(p)
What about cases where any rational course of action still leaves you on the losing side?
Although this may seem to be impossible according to your definition of rationality, I believe it's possible to construct such a scenario because of the fundamental limitations of a human brain's ability to simulate.
In previous posts you've said that, at worst, the rationalist can simply simulate the 'irrational' behaviour that is currently the winning strategy. I would contend that humans can't simulate effectively enough for this to be an option. After all, we know that several biases stem from our inability to effectively simulate our own future emotions, so to effectively simulate another being's entire response to a complex situation would seem to be a task beyond the current human brain.
As a concrete example I might suggest the ability to lie. I believe it's fairly well established that humans are not hugely effective liars and therefore the most effective way to lie is to truly believe the lie. Does this not strongly suggest that limitations of simulation mean that a rational course of action can still be beaten by an irrational one?
I'm not sure that, even if this is true, it should affect a universal definition of rationality - but it would place bounds on the effectiveness of rationality in beings of limited simulation capacity.
comment by James_Miller · 2009-04-03T16:29:14.339Z · LW(p) · GW(p)
If humans are imperfect actors, then in situations (such as a game of chicken) in which it is better to (1) be irrational and seen as irrational than it is to (2) be rational and seen as rational, the rational actor will lose.
Of course holding constant everyone else's beliefs about you, you always gain by being more rational.
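A sketch of the standard chicken payoff structure that makes this concrete; the payoff numbers are illustrative assumptions:

```python
# Chicken payoffs as (row player, column player); higher is better.
# "Swerve" = back down, "Straight" = refuse to back down.
PAYOFF = {
    ("Swerve", "Swerve"):     (0, 0),
    ("Swerve", "Straight"):   (-1, 1),    # you back down, they win
    ("Straight", "Swerve"):   (1, -1),    # they back down, you win
    ("Straight", "Straight"): (-10, -10), # crash
}

# If the other driver is (or is convincingly seen as) committed to "Straight",
# your best reply is to swerve -- so the visibly "irrational" player wins.
best_reply = max(["Swerve", "Straight"], key=lambda me: PAYOFF[(me, "Straight")][0])
print(best_reply)  # Swerve
```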
Replies from: Eliezer_Yudkowsky, grobstein, abigailgem↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-03T18:00:23.282Z · LW(p) · GW(p)
Given that I one-box on Newcomb's Problem and keep my word as Parfit's Hitchhiker, it would seem that the rational course of action is to not steer your car even if it crashes (if for some reason winning that game of chicken is the most important thing in the universe).
Replies from: James_Miller, Jonathan_Graehl, rwallace, grobstein↑ comment by James_Miller · 2009-04-03T19:12:53.276Z · LW(p) · GW(p)
You are playing chicken with your irrational twin. Both of you would rather survive than win. Your twin, however, doesn't understand that it's possible to die when playing chicken. In the game your twin both survives and wins whereas you survive but lose.
Replies from: Aurini, rwallace↑ comment by Aurini · 2009-04-03T20:06:39.890Z · LW(p) · GW(p)
Then you murder the twin prior to the game of chicken, and fake his suicide. Or you intimidate the twin, using your advanced rational skills to determine how exactly to best fill them with fear and doubt.
But before murdering or risking an uncertain intimidation feint, there's another question you need to ask yourself: how certain are you that the twin is irrational? The Cold War was (probably) a perceptual error; neither side realized that they were in a prisoner's dilemma - they both assumed that the other side preferred "unbalanced armament" over "mutual armament" over "mutual disarmament," when in reality the last two should have been switched.
Worst case scenario? You die playing chicken, because the stakes were worth it. The Rational path isn't always nice.
(There are some ethical premises implicit in this argument, premises which I plan to argue are natural derivatives from Game Theory... but I'm still working on that article.)
↑ comment by rwallace · 2009-04-03T19:21:48.866Z · LW(p) · GW(p)
My answer to that one is that I don't play chicken in the first place unless the stake is something I'm prepared to die for.
Replies from: James_Miller↑ comment by James_Miller · 2009-04-03T19:27:07.304Z · LW(p) · GW(p)
There are lots of chicken like games that don't involve death. For example, your boss wants some task done and either you or a co-worker can do it. The worst outcome for both you and the co-worker is for the task to not get done. The best is for the other person to do the task.
Replies from: rwallace↑ comment by rwallace · 2009-04-03T19:30:37.300Z · LW(p) · GW(p)
My answer still applies - I'm not going to make a song and dance about who does it, unless the other guy has been systematically not pulling his weight and it's got to the point where that matters more to me than this task getting done.
↑ comment by Jonathan_Graehl · 2009-04-03T21:24:31.561Z · LW(p) · GW(p)
For Newcomb's Problem, is it fair to say that if you believe the given information, the crux is whether you believe it's possible (for Omega) to have a 99%+ correct prediction of your decision based on the givens? Refusal to accept that seems to me the only justification for two-boxing. Perhaps that's a sign that I'm less tied to a fixed set of "rationalist" procedures than a perfect rationalist would be, but I would feel like I were pretending to say otherwise.
I also wonder if the many public affirmations I've heard of "I would one-box Newcomb's Problem" are attempts at convincing Omega to believe us in the unlikely event of actually encountering the Problem. It does give a similar sort of thrill to "God will rapture me to heaven."
↑ comment by rwallace · 2009-04-03T18:39:43.104Z · LW(p) · GW(p)
+1 for "Rationalists win". What is Parfit's Hitchhiker? I couldn't find an answer on Google.
Replies from: grobstein↑ comment by grobstein · 2009-04-03T19:05:48.571Z · LW(p) · GW(p)
It's a test case for rationality as pure self-interest (really it's like an altruistic version of the game of Chicken).
Suppose I'm purely selfish and stranded on a road at night. A motorist pulls over and offers to take me home for $100, which is a good deal for me. I only have money at home. I will be able to get home then IFF I can promise to pay $100 when I get home.
But when I get home, the marginal benefit to paying $100 is zero (under assumption of pure selfishness). Therefore if I behave rationally at the margin when I get home, I cannot keep my promise.
I am better off overall if I can commit in advance to keeping my promise. In other words, I am better off overall if I have a disposition which sometimes causes me to behave irrationally at the margin. Under the self-interest notion of rationality, then, it is rational, at the margin of choosing your disposition, to choose a disposition which is not rational under the self-interest notion of rationality. (This is what Parfit describes as an "indirectly self-defeating" result; note that being indirectly self-defeating is not a knockdown argument against a position.)
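A minimal sketch of the two margins involved, with illustrative utilities (getting home worth the equivalent of $1000, and the driver predicting accurately):

```python
# Illustrative utilities for the purely selfish hitchhiker (assumed numbers).
VALUE_OF_GETTING_HOME = 1000
COST_OF_PAYING = 100

def marginal_choice_at_home():
    # Once home, paying is pure cost with no further benefit.
    return max(["pay", "refuse"], key=lambda c: -COST_OF_PAYING if c == "pay" else 0)

def outcome(disposition):
    # The driver gives a lift only if she predicts she will actually be paid.
    gets_lift = (disposition == "pay")
    return VALUE_OF_GETTING_HOME - COST_OF_PAYING if gets_lift else 0

print(marginal_choice_at_home())          # 'refuse' -- rational at the margin, once home
print(outcome("pay"), outcome("refuse"))  # 900 vs 0 -- the committed payer does better overall
```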
Replies from: rwallace, ciphergoth↑ comment by rwallace · 2009-04-03T19:19:33.350Z · LW(p) · GW(p)
Ah, thanks. I'm of the school of thought that says it is rational both to promise to pay the $100, and to have a policy of keeping promises.
Replies from: GuySrinivasan, grobstein↑ comment by SarahNibs (GuySrinivasan) · 2009-04-03T20:22:58.546Z · LW(p) · GW(p)
I think it is both right and expected-utility-maximizing to promise to pay the $100, right to pay the $100, and not expected-utility-maximizing to pay the $100, under standard assumptions of you'll never see the driver again or whatnot.
Replies from: thomblake↑ comment by thomblake · 2009-04-03T20:31:48.264Z · LW(p) · GW(p)
You're assuming it does no damage to oneself to break one's own promises. Virtue theorists would disagree.
Breaking one's promises damages one's integrity - whether you consider that a trait of character or merely a valuable fact about yourself, you will lose something by breaking your promise even if you never see the fellow again.
Replies from: grobstein↑ comment by grobstein · 2009-04-03T20:39:51.324Z · LW(p) · GW(p)
Your argument is equivalent to, "But what if your utility function rates keeping promises higher than a million orgasms, what then?"
The hypo is meant to be a very simple model, because simple models are useful. It includes two goods: getting home, and having $100. Any other speculative values that a real person might or might not have are distractions.
Replies from: rwallace, thomblake↑ comment by thomblake · 2009-04-03T20:43:00.446Z · LW(p) · GW(p)
Except that you mention both persons and promises in the hypothetical example, so both things factor into the correct decision. If you said that it's not a person making the decision, or that there's no promising involved, then you could discount integrity.
↑ comment by grobstein · 2009-04-03T19:29:55.966Z · LW(p) · GW(p)
Yes, this seems unimpeachable. The missing piece is, rational at what margin? Once you are home, it is not rational at the margin to pay the $100 you promised.
Replies from: randallsquared↑ comment by randallsquared · 2009-04-03T20:08:43.821Z · LW(p) · GW(p)
This assumes no one can ever find out you didn't pay, as well. In general, though, it seems better to assume everything will eventually be found out by everyone. This seems like enough, by itself, to keep promises and avoid most lies.
Replies from: grobstein↑ comment by Paul Crowley (ciphergoth) · 2009-04-03T20:02:46.401Z · LW(p) · GW(p)
Thank you, I too was curious.
We need names for these positions; I'd use hyper-rationalist but I think that's slightly different. Perhaps a consequentialist does whatever has the maximum expected utility at any given moment, and a meta-consequentialist is a machine built by a consequentialist which is expected to achieve the maximum overall utility at least in part through being trustworthy to keep commitments a pure consequentialist would not be able to keep.
I guess I'm not sure why people are so interested in this class of problems. If you substitute Clippy for my lift, and up the stakes to a billion lives lost later in return for two billion saved now, there you have a problem, but when it's human beings on a human scale there are good ordinary consequentialist reasons to honour such bargains, and those reasons are enough for the driver to trust my commitment. Does anyone really anticipate a version of this situation arising in which only a meta-consequentialist wins, and if so can you describe it?
Replies from: grobstein, grobstein↑ comment by grobstein · 2009-04-03T20:05:42.599Z · LW(p) · GW(p)
I very much recommend Reasons and Persons, by the way. A friend stole my copy and I miss it all the time.
Replies from: ciphergoth, gjm↑ comment by Paul Crowley (ciphergoth) · 2009-04-04T08:38:38.266Z · LW(p) · GW(p)
OK, thanks!
Your friend stole a book on moral philosophy? That's pretty special!
Replies from: MichaelHoward↑ comment by MichaelHoward · 2009-04-05T14:18:47.422Z · LW(p) · GW(p)
↑ comment by gjm · 2009-04-03T23:35:44.724Z · LW(p) · GW(p)
It's still in print and readily available. If you really miss it all the time, why haven't you bought another copy?
Replies from: grobstein↑ comment by grobstein · 2009-04-03T23:37:27.750Z · LW(p) · GW(p)
It's $45 from Amazon. At that price, I'm going to scheme to steal it back first.
OR MAYBE IT'S BECAUSE I'M CRAAAZY AND DON'T ACT FOR REASONS!
Replies from: gjm↑ comment by gjm · 2009-04-04T00:35:49.585Z · LW(p) · GW(p)
Gosh. It's only £17 in the UK.
(I wasn't meaning to suggest that you're crazy, but I did wonder about ... hmm, not sure whether there's a standard name for it. Being less prepared to spend X to get Y on account of having done so before and then lost Y. A sort of converse to the endowment effect.)
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-04-04T06:51:48.335Z · LW(p) · GW(p)
Mental accounting has that effect in the short run, but seems unlikely to apply here.
↑ comment by grobstein · 2009-04-03T16:43:11.867Z · LW(p) · GW(p)
This is a classic point and clearer than the related argument I'm making above. In addition to being part of the accumulated game theory learning, it's one of the types of arguments that shows up frequently in Derek Parfit's discussion of what-is-rationality, in Ch. 1 of Reasons and Persons.
I feel like there are difficulties here that EY is not attempting to tackle.
↑ comment by abigailgem · 2009-04-03T17:33:32.358Z · LW(p) · GW(p)
James, when you say, "be rational", I think this shows a misunderstanding.
It may be really important to impress people with a certain kind of reckless courage. Then it is Rational to play chicken as bravely as you can. This Wins in the sense of being better than the alternative open to you.
Normally, I do not want to take the risk of being knocked down by a car. Only then is it not rational to play chicken, because not playing achieves what I want.
I do not see why a rationalist should be less courageous, less able to estimate distances and speeds, and so less likely to win at Chicken.
Replies from: grobstein↑ comment by grobstein · 2009-04-03T18:33:21.102Z · LW(p) · GW(p)
No. The point is that you actually want to survive more than you want to win, so if you are rational about Chicken you will sometimes lose (consult your model for details). Given your preferences, there will always be some distance ε before the cliff where it is rational for you to give up.
Therefore, under these assumptions, the strategy "win or die trying" seemingly requires you to be irrational. However, if you can credibly commit to this strategy -- be the kind of person who will win or die trying -- you will beat a rational player every time.
This is a case where it is rational to have an irrational disposition, a disposition other than doing what is rational at every margin.
Replies from: Annoyance↑ comment by Annoyance · 2009-04-03T18:40:09.838Z · LW(p) · GW(p)
But a person who truly cares more about winning than surviving can be utterly rational in choosing that strategy.
Replies from: James_Miller, Technologos↑ comment by James_Miller · 2009-04-03T19:08:43.337Z · LW(p) · GW(p)
In chicken-like games in which one player is rational and the other irrational:
The rational person cares more about surviving than winning and so survives and loses.
The irrational person who doesn't think through the consequences of losing both survives and wins.
↑ comment by Technologos · 2009-04-03T19:04:16.002Z · LW(p) · GW(p)
Agreed. In fact, the classic game-theoretic model of chicken requires that the players vastly prefer losing their pride to losing their lives. If the gap between winning and losing were greater than the gap between losing and dying, then in a situation with imperfect information we would assign a positive probability to playing aggressively.
And technically speaking, it is most rational, in the game-theoretic sense, to disable your steering ostentatiously before the other player does so as well. In that case, you've won the game before it begins, and there is no actual risk.
Replies from: James_Miller↑ comment by James_Miller · 2009-04-03T19:17:07.554Z · LW(p) · GW(p)
No, if you are rational the best action is to convince your opponent that you have disabled your steering when in fact you have not done so.
Replies from: Technologos↑ comment by Technologos · 2009-04-04T18:33:30.608Z · LW(p) · GW(p)
Either a) your opponent truly does believe that you've disabled your steering, in which case the outcomes are identical and the actions are equally rational, or b) we account for the (small?) chance that your opponent can determine that you actually have not disabled your steering, in which case he ostentatiously disables his and wins. Only by setting up what is in effect a doomsday device can you ensure that he will not be tempted to information-gathering brinksmanship.
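A small sketch of the commitment point, with hypothetical Chicken payoffs (the numbers are illustrative, not taken from anyone's comment):

    # Chicken with illustrative payoffs; strategies are 'swerve' or 'straight'.
    PAYOFFS = {  # (my move, their move) -> my payoff
        ("swerve",   "swerve"):     0,    # mutual swerve
        ("swerve",   "straight"):  -1,    # I lose face
        ("straight", "swerve"):    +1,    # I win
        ("straight", "straight"): -100,   # crash
    }

    def best_response(their_move: str) -> str:
        return max(("swerve", "straight"),
                   key=lambda mine: PAYOFFS[(mine, their_move)])

    # Against a player visibly committed to 'straight' (steering disabled),
    # the best response is to swerve -- the committed player wins risk-free.
    print(best_response("straight"))           # 'swerve'
    # But if both players make that commitment, both get the worst cell.
    print(PAYOFFS[("straight", "straight")])   # -100

This is the sense in which ostentatious commitment wins the game before it begins, and why the race to commit first (or to fake commitment convincingly) is the whole game.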
comment by JGWeissman · 2009-04-05T03:36:56.114Z · LW(p) · GW(p)
Alleged rationalists should not find themselves envying the mere decisions of alleged nonrationalists, because your decision can be whatever you like.
Eliezer said this in the Newcomb's Problem post which introduced "Rationalists should win".
Perhaps for a slogan, shorten it to: "Rationalists should not envy the mere decisions of nonrationalists." This emphasizes that rationality contributes to winning through good decisions.
A potential problem is that, in some circumstances, an alleged rationalist could find a factor that seems unrelated to their decisions to blame for losing, and therefore argue that their being rational is consistent with the slogan. For example, someone who blames losing on luck might need to reconsider the probability theory informing their decisions. Though this should not be a fully general counterargument, someone who wins more often than others in the same situation is likely doing something right, even if they do not win with probability 1.
comment by Thomas · 2009-04-04T08:22:55.588Z · LW(p) · GW(p)
Both boxes might be transparent. In this case, you would see the money in both boxes only if you are rational enough to understand that you have to pick just B.
Wouldn't that be an irrational move? Not at all! You have to understand that to be rational.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-04T18:54:32.712Z · LW(p) · GW(p)
That's brilliant! (I'm not sure what you mean by understand though.)
In other words, Omega does one of two things: it either offers you $1000 + $1, or only $10. It offers you $1000 + $1 only if it predicts that you won't take the $1; otherwise it only gives you $10.
This is a variant of counterfactual mugging, except that there is no chance involved. Your past self prefers to precommit to not taking the $1, while your present self, faced with that situation, prefers to take the $1.
Replies from: Thomas, Vladimir_Nesov↑ comment by Thomas · 2009-04-28T08:09:02.708Z · LW(p) · GW(p)
You have to understand this twist to be able to call yourself rational, in my book.
You understood the twist, as I can see.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-28T11:27:50.736Z · LW(p) · GW(p)
This reply is too mysterious to reveal whether you got the criterion right.
↑ comment by Vladimir_Nesov · 2009-04-04T19:53:39.492Z · LW(p) · GW(p)
Hmmm... It looks like the decision to take the $1 determines the situation where you make that decision out of reality. Effects of precommitment being restricted to the counterfactual branches are a usual thing, but in this problem they stare you right in the face, which is rather daring.
Replies from: Vladimir_Nesov, Eliezer_Yudkowsky↑ comment by Vladimir_Nesov · 2009-04-04T20:21:07.403Z · LW(p) · GW(p)
Another variation, playing only on real/counterfactual, without motivating the real decision. Omega comes to you and offers $1, if and only if it predicts that you won't take it. What do you do? It looks neutral, since expected gain in both cases is zero. But the decision to take the $1 sounds rather bizarre: if you take the $1, then you don't exist!
Agents self-consistent under reflection are counterfactual zombies, indifferent to whether they are real or not.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-07-02T19:53:01.659Z · LW(p) · GW(p)
Seems roughly as disturbing as Wikipedia's article on Gaussian adaptation:
Gaussian adaptation as an evolutionary model of the brain obeying the Hebbian theory of associative learning offers an alternative view of free will due to the ability of the process to maximize the mean fitness of signal patterns in the brain by climbing a mental landscape in analogy with phenotypic evolution.
Such a random process gives us lots of freedom of choice, but hardly any will. An illusion of will may, however, emanate from the ability of the process to maximize mean fitness, making the process goal seeking. I. e., it prefers higher peaks in the landscape prior to lower, or better alternatives prior to worse. In this way an illusive will may appear. A similar view has been given by Zohar 1990. See also Kjellström 1999.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-04T20:05:32.901Z · LW(p) · GW(p)
If you want your source code to be self-consistent under reflection, you know what you have to do.
comment by SoullessAutomaton · 2009-04-03T15:28:15.314Z · LW(p) · GW(p)
It seems to me that some of the kibitzing is due to human cognitive architecture making it difficult to be both epistemologically and instrumentally rational in many contexts, e.g., expected overconfidence in social interactions, motivation issues related to optimism/pessimism, &c.
An ideal rational agent would not have this problem, but human cognition is... suboptimal.
comment by CharlieSheen · 2011-04-18T23:51:44.633Z · LW(p) · GW(p)
"Rationalists should win." is what sold me on this site. Its a good phrase.
comment by tel · 2009-12-21T05:59:59.896Z · LW(p) · GW(p)
Perhaps it's not about the ad hominem.
"Rationality is whatever wins."
If it's not a winning strategy, you're not doing it right. If it is a winning strategy, over as long a term as you can plan, then it's rationality. It doesn't matter what the person thinks: whether they'd call themselves a rationalist or not.
comment by PeteG · 2009-04-03T21:20:15.471Z · LW(p) · GW(p)
Rationality is winning that doesn't generate a surprise; randomly winning the lottery generates a surprise. A good measure of rationality is the amount of complexity involved in order to win, and the surprise generated by that win. If winning at a certain task requires that your method have many complex steps, and you win, unsurprisingly, then the method used was a very rational one.
comment by AlexU · 2009-04-03T16:08:42.537Z · LW(p) · GW(p)
All else being equal, shouldn't rationalists, almost by definition, win? The only way this wouldn't happen would be in a contest of pure chance, in which rationality could confer no advantage. It seems like we're just talking semantics here.
Replies from: anonym↑ comment by anonym · 2009-04-04T22:58:45.114Z · LW(p) · GW(p)
If human beings had perfect control over their minds and bodies -- e.g., could tweak System 1 without limit and perform any physically possible act/behavior -- your point would be stronger.
However, as others have mentioned elsewhere, there may be cases where we are just not capable of implementing a strategy that rationality suggests is optimal (e.g., convincingly pretending to be more confident than you are to the point that all relevant System 1 impulses/reactions are those of a person who is naturally overconfident).
It may be the case that an ubermensch rationalist can eventually learn to do anything that can be done via non-rational means, but that's not clear a priori, especially if we consider finite lifespans and opportunity costs.
Replies from: AlexU↑ comment by AlexU · 2009-04-04T23:04:55.924Z · LW(p) · GW(p)
Agreed. Particularly in hypothetical cases where one rationally concludes that it would be in their best interest to behave irrationally, e.g., over-confidence in oneself or belief in God. Even if one arrived at those conclusions, it's not clear to me how anyone could decide to become irrational in those ways. Pascal's notion of "bootstrapping" oneself into religious belief never struck me as very plausible. Interestingly though, "faking" confidence in oneself often does tend to lead to real confidence via some sort of feedback mechanism, e.g., interactions with women.
comment by cousin_it · 2009-04-03T15:05:28.865Z · LW(p) · GW(p)
As an answer to my and others' constant nagging, your post feels strangely unfulfilling. Just what problems does the Art solve, and how do you check if the solutions are correct? Of course the problems can be theoretical, not real-world - this isn't the issue at all.
How about "Rationality isn't about winning"? Nod to Robin Hanson.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-03T15:25:56.419Z · LW(p) · GW(p)
I know what problem my Art is intended to solve. You may feel that some progress has been exhibited, or not; it will certainly pale by comparison to my future hopes, but it might not seem so pale by comparison to the average. The Art seems to be giving me what I ask of it; I have hopes that this will hold true of others, and that I will be able to understand what they have invented.
Meanwhile, there are plenty of people shooting off their own feet in straightforward ways; and it is a good deal easier to do something about that, than to produce superstars.
comment by psvkushal · 2024-09-26T14:20:22.626Z · LW(p) · GW(p)
It is this that I intended to guard against by saying: "Rationalists should win!" Not whine, win. If you keep on losing, perhaps you are doing something wrong. Do not console yourself about how you were so wonderfully rational in the course of losing. That is not how things are supposed to go. It is not the Art that fails, but you who fails to grasp the Art.
It is similar to the "scrub" mentality in Playing to Win by Sirlin: if you are not winning consistently, then there is something wrong in your behaviours, which needs to be recognised.
comment by matthew rummler (matthew-rummler) · 2023-11-17T18:23:53.206Z · LW(p) · GW(p)
I fully support "Rationality is systematized winning"; however, there may be a lack of wider market appeal in the word "systematized".
Perhaps consultation with a marketing expert* could get you the most effective results?
*(Someone expert in understanding what words and phrases are most likely to have impactful meaning to one or more groups of humans)
comment by a bottle cap · 2019-07-22T16:36:00.996Z · LW(p) · GW(p)
If one values winning above everything else, then everything that leads to winning is rational. The reductio of this is that if torturing a googolplex of beings for maximum duration and at increasing intensity leads to winning, then that is what must be done.
Yet... perhaps winning, then, is not what we should most value? Perhaps we should value destroying the thing which values torturing a googolplex of beings. What if we need to torture half a googolplex of beings to outcompete something willing to torture a googolplex of beings? What if outcompeting such a thing is impossible? What is the threshold for the number of beings tortured, in total? Such a question must by definition seem irrational to someone winning at all costs; this is the tradeoff one makes for valuing winning at all costs and calling it rationality. At which point does one say, "The most rational move is stopping all forward momentum immediately"? ("You are missing the point! Rationality is just your *independent* strategy!" That is missing the point.)
This does not appear to be a universe where a system which intends to maximize truth and ethics can win. I suspect that once we can transcend temporal bias and egocentric bias via convincing virtual experience, in the specific sense of living lives like Junko Furuta's and Elisabeth Fritzl's, we will not appreciate winning at all costs. The paradox here is that the thing which tends to reach convincing virtual simulations is not the thing which values simulating such things. That little voice in your head that says, "Error. Irrational appeal to emotion." is the same voice which tortures the entire multiverse to win (if this is the winning strategy).
The conclusion here is that ethics and truth don't win. The thing which is least hindered by a commitment to values other than winning, wins. If anything could be said to be bad, that is, if one is not a moral nihilist, then that would be bad news. Again, it is worth noticing the little voice that rejects this word "bad", which, upon having one's hands planted into hot coals for no reason, would appreciate things differently and realize an objective property of consciousness that is as grounded as the most basic mathematical expression.
Replies from: dxu↑ comment by dxu · 2019-07-22T02:12:33.686Z · LW(p) · GW(p)
You're confusing ends with means, terminal goals with instrumental goals, morality with decision theory, and about a dozen other ways of expressing the same thing. It doesn't matter what you consider "good", because for any fixed definition of "good", there are going to be optimal and suboptimal methods of achieving goodness. Winning is simply the task of identifying and carrying out an optimal, rather than suboptimal, method.
Replies from: TAG↑ comment by TAG · 2019-07-22T11:57:43.212Z · LW(p) · GW(p)
If there are objectively correct and incorrect values, then it matters to the epistemic rationalist which subjective values they have, because they might be wrong. (It also matters to the ER whether values are subjective.)
Epistemic and instrumental rationality have never been the same thing. "Rationality is winning" cannot define them both, and, as it happens, it only defines IR.
comment by Troshen · 2013-04-13T18:57:37.266Z · LW(p) · GW(p)
I'm not sure if it's better, but here's one that works well. Similar to the phrase, "Physician, heal thyself!" another way to say rationalists should win is to say, "Rationalist, improve thyself!"
If you aren't actually improving yourself and the world around you, then you aren't using the tools of rationality correctly. And it follows that to improve the world around you, you first have to be in a position to do so by doing the same to yourself.
comment by katydee · 2013-03-29T11:12:08.048Z · LW(p) · GW(p)
Typo report: "hoard of barbarians" should be replaced by "horde of barbarians."
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-03-29T13:23:13.236Z · LW(p) · GW(p)
Hey, you never know when you might need a barbarian... you don't want to run out!
comment by nazgulnarsil · 2009-04-16T20:19:05.319Z · LW(p) · GW(p)
I one-box Newcomb's problem because the payoffs are too disproportionate to make it interesting. How about this: if Omega predicted you would two-box, both boxes are empty; if Omega predicted you would one-box, both boxes have $1000.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2013-06-08T20:55:55.162Z · LW(p) · GW(p)
That payoff matrix doesn't preserve the form of the problem. One of the features of the problem is that whatever is in box B, you're better off two-boxing than one-boxing if you ignore the influence of Omega's prediction. A better formulation would be that box A has $1000, and box B has $2000 iff Omega believes you will one-box. Box B has to potentially have more than box A, or there's no point in one-boxing whatever decision theory you have.
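For concreteness, here is one way to encode the payoff tables being compared (the standard problem, the variant above, and the $1000/$2000 reformulation) and check the dominance feature just mentioned. The encodings are one reading of the comments above, not a canonical formalization:

    # Each table maps (Omega's prediction, your actual choice) -> your payout.
    standard = {   # classic Newcomb: A = $1000, B = $1M iff predicted one-box
        ("one", "one"): 1_000_000, ("one", "two"): 1_001_000,
        ("two", "one"): 0,         ("two", "two"): 1_000,
    }
    variant = {    # both boxes hold $1000 iff predicted one-box, else both empty
        ("one", "one"): 1_000, ("one", "two"): 2_000,
        ("two", "one"): 0,     ("two", "two"): 0,
    }
    reformulation = {  # A = $1000 always, B = $2000 iff predicted one-box
        ("one", "one"): 2_000, ("one", "two"): 3_000,
        ("two", "one"): 0,     ("two", "two"): 1_000,
    }

    def two_boxing_strictly_dominates(table) -> bool:
        """Whatever Omega predicted, does two-boxing pay strictly more?"""
        return all(table[(p, "two")] > table[(p, "one")] for p in ("one", "two"))

    for name, table in [("standard", standard),
                        ("variant", variant),
                        ("reformulation", reformulation)]:
        print(name, two_boxing_strictly_dominates(table))
    # standard True, variant False (the predicted-two-box row is a tie),
    # reformulation True

Under this reading, the variant only weakly preserves the two-boxing-dominates structure, while the $1000/$2000 version keeps strict dominance and still rewards predicted one-boxers.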
comment by haig · 2009-04-04T08:25:54.635Z · LW(p) · GW(p)
Let's use one of Polya's 'how to solve it' strategies and see if the inverse helps: Irrationalists should lose. Irrationality is systematized losing.
On another note, rationality can refer to either beliefs or behaviors. Does being a rationalist mean your beliefs are rational, your behaviors are rational, or both? I think behaving rationally, even with high probability priors, is still very hard for us humans in a lot of circumstances. Until we have full control of our subconscious minds and can reprogram our cognitive systems, it is a struggle to will ourselves to act in completely rational ways.
To spread rationality, amongst humans at least, we might want to consider a divide and conquer approach focusing on people maintaining rational beliefs first, and maximizing rational behavior second.
comment by Emile · 2009-04-03T18:11:33.388Z · LW(p) · GW(p)
Like William, I think "winning" is the problem, though for different reasons : "winning" has extra connotations, and tends to call up the image of the guy who climbs the corporate ladder through dishonesty and betrayal rather than trying to lead a happy and fulfilling life. Or someone who tries to win all debates by humiliating his opponent till nobody wants to speak to him any more.
Winning often doesn't mean getting what you want, but winning at something defined externally, or competing with others - which may indeed not always be the rational thing to do.
The thread on the Rationality Questionnaire seemed to have this problem - some questions seemed more focused on "winning as understood by society" rather than on getting what you want.
comment by billswift · 2009-04-03T15:33:55.370Z · LW(p) · GW(p)
"abandon reasonableness" is never necessary; though I think we may be using reasonable somewhat differently. I think "reasonable" includes the idea of "appropriate to the situation"
quoting myself : "There is a supposed "old Chinese saying": The wise man defends himself by never being attacked. Which is excellent, if incomplete, advice. I completed it myself with "But only an idiot counts on not being attacked." Don't use violence unless you really need to, but if you need to don't hold back." http://williambswift.blogspot.com/2009/03/violence.html
As to your overall point, I agree that rationalists should win. General randomness, unknowns, and opposition from other agents prevent consistent victories in the real world. But if you are not winning more than losing you definitely are not being rational.
Replies from: SoullessAutomaton, Eliezer_Yudkowsky↑ comment by SoullessAutomaton · 2009-04-03T15:43:23.322Z · LW(p) · GW(p)
Don't use violence unless you really need to, but if you need to don't hold back.
By corollary:
"Rule #6: If violence wasn't your last resort, you failed to resort to enough of it." -- The Seven Habits of Highly Effective Pirates
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-04T12:06:51.222Z · LW(p) · GW(p)
Standard proverb: "If you would have peace, prepare for war."
comment by pjeby · 2009-04-03T15:09:53.684Z · LW(p) · GW(p)
There are two possible interpretations of "Rationalists should win", and it's likely the confusion is coming about from the second.
One use of "should" is to indicate a general social obligation: "people should be nice to each other", and the other is to indicate a personal entitlement: "you should be nice to me." i.e., "should" = "I deserve it"
It appears that some people may be using the latter interpretation, i.e., "I'm rational so I should win" -- placing the obligation on the universe rather than on themselves.
Perhaps "Rationalists choose to win", or "Winning is better than being right"?
Replies from: dclayh↑ comment by dclayh · 2009-04-03T15:39:57.358Z · LW(p) · GW(p)
"Winning is better than being right"
I think Eliezer's point is closer to "Winning is the same as being right"; i.e., the evidence that you're right is that you won.
Replies from: timtyler
comment by iocompletion · 2013-11-14T19:58:00.106Z · LW(p) · GW(p)
Rationality leads directly to effectiveness. Or: Rationality faces the truth and therefore produces effectiveness. Or: Rationality is measured by how much effectiveness it produces.
comment by AdShea · 2010-12-02T02:51:48.745Z · LW(p) · GW(p)
It seems that most of the discussion here is caught up on the idea that Omega being able to "predict" your decision would require reverse-time causality, which some models of reality cannot allow to exist.
Assuming that Omega is a "sufficiently advanced" powerful being, the boxes could act in exactly the way the "reverse time" model stipulates without any such bending of causality: either through technology that can destroy the contents of a box faster than human perception time, or through the classical many-worlds-interpretation method of ending the universes where things don't work out the way you want (the universe doesn't even need to end; something like a quantum vacuum collapse would have the same effect of stopping any information leakage from non-conforming universes).
This means the not-quite-a-rationalist argument of "the boxes are already what they are, so my decision doesn't matter, I'll take both" no longer holds.
Replies from: David_Gerard↑ comment by David_Gerard · 2010-12-02T08:55:30.529Z · LW(p) · GW(p)
Your assumptions mean that the more likely answer is "Omega is sufficiently powerful to mess with me any way it likes; why am I playing this game?"
That is, problems containing Omega are more contrived and less relevant to anything resembling real life the more one looks at them.
Note that thinking too much about Omega can lead to losing in real life, as one forgets that Omega is hypothetical and cannot possibly exist, and actually goes so far as to attribute the qualities of Omega to what is in fact a manipulative human. One example that I found quite jawdropping. This is a case I think could quite fairly be described as reasoning oneself more ineffective. People who act like that are a reason to get out of the situation, not to invoke TDT.
comment by Nebu · 2009-04-14T17:25:11.499Z · LW(p) · GW(p)
One problem is that "Rationalists should win" has two obvious interpretations for me:
- Rationalists should win, therefore if rationalists aren't winning, there's something wrong with the world.
- Rationalists should win, therefore if you aren't winning, you're not a rationalist.
Compare with:
- People should donate money to Africa, therefore if people aren't donating to Africa, there's something wrong with the world.
- People should donate money to Africa, therefore if you don't donate to Africa, you're not a person.
and
- Protons should have an electric charge of 1, therefore if protons don't have an electric charge of 1, there's something wrong with the world.
- Protons should have an electric charge of 1, therefore if you don't have an electric charge of 1, you are not a proton.
↑ comment by Vladimir_Nesov · 2009-04-14T17:48:01.349Z · LW(p) · GW(p)
Shouldness in "Rationalists should win" is a much more detailed notion than correspondence to winning situations. It refers to a property of achieving goals, as seen under uncertainty, in our case implemented by cognitive algorithms that search the solution-space for the right plans. Rationalists should-win, have a good measure of win-should-ness.
Looking over it all again, I should add that "rationality is about winning" is also an immensely simpler sentiment, that still seems to retain the gist of the message for which the "rationalists should win" motto was devised.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-14T17:32:46.813Z · LW(p) · GW(p)
What "Rationalists should WIN" needs is a stake through its heart; it is misinterpreted so much more often than it is correctly used that we may need to do without it altogether.
comment by byrnema · 2009-04-04T01:49:34.188Z · LW(p) · GW(p)
Has it been settled then, that in this Newcomb's Problem, rationality and winning are at odds? I think it is quite relevant to this discussion whether or not they ever can be at odds.
My last comment got voted down -- presumably because whether or not rationality and winning are ever in conflict has been discussed in the previous post. (I'm a quick study and would like feedback as to why I get voted down.) However, was there some kind of consensus in the previous post? Do we just assume here that it is possible that rationality is not always the winning strategy? I cannot!
Looking through the comments, it sounds like many people think it is most rational to pick both boxes because of some assumption about how physical reality can't be altered. In a hypothetical reality where that assumption doesn't hold, it would be irrational to insist on applying it.
Replies from: Marshall↑ comment by Marshall · 2009-04-04T07:05:42.240Z · LW(p) · GW(p)
Not everyone wants to win! Or let's put it another way: Everyone wants to win, but in different ways. The rational part is winning as one defines it. At least some of the two boxers are winning - as they define it.
I sense you mean that rationality is unputdownable - always right, always winning. But life is not a two-dimensional iterated PD. Fate plays with dice. Other players can be MAD. Sometimes a flower grows through the asphalt. We don't live long enough to say on average that rationalists always win.
Replies from: Marshall
comment by byrnema · 2009-04-03T23:42:21.767Z · LW(p) · GW(p)
Am I missing something? I think this answer is very simple: rationality and winning are never at odds.
(The only exception is when a rational being has incomplete information. If his information tells him that the blue box has $100 and the red box has $0, but it is actually the other way around, it is rational for him to pick the blue box even though he doesn't win.)
Replies from: HughRistik↑ comment by HughRistik · 2009-04-04T00:03:12.529Z · LW(p) · GW(p)
The only exception is when a rational being has incomplete information
Even rational beings usually don't have complete information.
Replies from: byrnema↑ comment by byrnema · 2009-04-04T02:18:34.102Z · LW(p) · GW(p)
Yes, I agree. I think being rational is always being aware that everything you "know" is a house of cards based on your assumptions. A change in assumptions will require rebuilding the house, and a false room means you need to challenge an assumption.
I'm just arguing that a false room never means that rationality (deduction) itself was wrong (i.e., not winning).
All a rational being can do is base decisions on the information they have. A question: is a rational position based upon incomplete information that leads to not winning really an example of "rationality" not winning? I think that in this discussion we are talking about the relationship between rationality and winning in the context of "enough" information.
comment by Furcas · 2009-04-03T21:41:12.534Z · LW(p) · GW(p)
I already understood what you meant by "rationalists should win", Eliezer, but I don't find Newcomb's problem very convincing as an example. The way I see it, if you one-box you've lost. You could have gotten an extra $1000 but you chose not to.
Replies from: GuySrinivasan↑ comment by SarahNibs (GuySrinivasan) · 2009-04-03T21:58:21.881Z · LW(p) · GW(p)
And yet those who one-box get $999000 more than those who don't. What gives? If there is a systematic, predictable thing that offers one-boxers $1000000 and offers two-boxers $1000, and there is not a systematic, predictable thing that provides some sort of countering offer to two-boxers, by one-boxing you still get more money.
I can't think of something of equal power level (examining your decisions, not your method of arriving at those decisions) which would be able to provide the countering offer to two-boxers.
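A quick expected-value version of the point, treating Omega's prediction as evidence about your own choice and assuming the 99% accuracy figure from the original post (the exact probability is an assumption of this sketch):

    # Expected payout of each choice, given a predictor with 99% accuracy.
    ACCURACY = 0.99

    def expected_value(one_box: bool) -> float:
        if one_box:
            # Predicted correctly -> box B holds $1M; predicted wrongly -> empty.
            return ACCURACY * 1_000_000 + (1 - ACCURACY) * 0
        else:
            # Predicted correctly -> only box A's $1000; wrongly -> $1,001,000.
            return ACCURACY * 1_000 + (1 - ACCURACY) * 1_001_000

    print(expected_value(True))   # 990000.0
    print(expected_value(False))  # 11000.0

On those assumptions the one-boxer's expectation is $990,000 against the two-boxer's $11,000, which is the systematic, predictable gap being pointed at.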
Replies from: thomblake, Furcas↑ comment by thomblake · 2009-04-03T22:16:12.876Z · LW(p) · GW(p)
Simple: most situations in real life aren't like this. If you believe Omega and one-box, you'll lose when he's lying. If your decision theory works better in hypothetical situations and worse in real life, then it doesn't make you win.
Replies from: orthonormal↑ comment by orthonormal · 2009-04-04T01:33:52.990Z · LW(p) · GW(p)
Apply the Least Convenient Possible World principle.
Also, I don't think Eliezer keeps harping on Newcomb's problem because he anticipates experiencing precisely that scenario. I see several important points that I don't think have been clearly made (not that I'm the one to do so):
- We can choose whether and when to implement certain decision algorithms, including classical causal decision theory (CCDT). This choice may in fact be trivial, or it may be subtle, but it is a worthy question for a rationalist.
- Although, for any fixed set of options, implementing CCDT maximizes your return, there are in fact cases where the options you have depend on the outcome of a model of your decision algorithm. I'm not talking about Omega, I'm talking about human social life. We base a large portion of our interactions with others on our anticipations of how they might respond. (This isn't often done rationally by anyone's standards, but it can be.)
- It gets confusing (in particular, Hofstadterian) here, but a plausibly better outcome might be reached in the Prisoner's Dilemma by selfish non-strangers mutually modeling the other's likely decision process, and recognizing that only C-C and D-D are stable outcomes under mutual modeling.
Of course, I still feel a bit uncomfortable with this line of reasoning.
↑ comment by Furcas · 2009-04-03T22:11:26.268Z · LW(p) · GW(p)
Of course the one-boxers get more money: They were put in a situation in which they could either get $1 000 000 or $1 001 000, whereas the two-boxers were put in a situation in which they could get $0 or $1000.
It makes no sense to compare the two decisions the way you and Eliezer do. It's like organizing a swimming competition between an Olympic athlete who has to swim ten kilometers to win and an untrained fatass who only has to swim a hundred meters to win, and concluding that because the fatass wins more often than the athlete, therefore fatasses clearly make better swimmers than athletes.
Replies from: JGWeissman, SoullessAutomaton, grobstein↑ comment by JGWeissman · 2009-04-04T04:03:41.903Z · LW(p) · GW(p)
Of course the one-boxers get more money: They were put in a situation in which they could either get $1 000 000 or $1 001 000, whereas the two-boxers were put in a situation in which they could get $0 or $1000.
When faced with this decision, you are either in the real world, in which case you can get an extra $1000 by two-boxing, or you are in a simulation, in which case you can arrange for your self in the real world to get an extra $1,000,000 by one-boxing. Given that you can't tell which of these is the case, and that you are deterministic, you will make the same decision in both situations. So your choice is to either one-box and gain $1,000,000 or two-box and gain $1000. If you like having more money, it seems clear which of those choices is more rational.
↑ comment by SoullessAutomaton · 2009-04-03T22:33:33.639Z · LW(p) · GW(p)
But if you were put into said hypothetical competition, and could somehow decide just before the contest began whether to be an Olympic athlete or an untrained fatass, which would you choose?
I think you're getting overly distracted by the details of the problem construction and missing the point.
Replies from: Furcas, thomblake↑ comment by Furcas · 2009-04-03T22:44:11.128Z · LW(p) · GW(p)
If my only goal were to win that particular competition (and not to be a good swimmer), of course I'd choose to turn into a fatass and lose all my training. Likewise, if I could choose to precommit to one-boxing in Newcomb-like problems, I would, because pre-commitment has an effect on what will be in box B (whereas the actual decision does not).
The details are what makes Newcomb's problem what it is, so I don't see how it's possible to get "overly distracted" by them. Correct me if I'm wrong, but pre-commitment isn't an option in Newcomb's problem, so the best, the most rational, the winning decision is two-boxing.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-04-03T23:11:02.260Z · LW(p) · GW(p)
Correct me if I'm wrong, but pre-commitment isn't an option in Newcomb's problem, so the best, the most rational, the winning decision is two-boxing.
By construction, Omega's predictions are known to be essentially infallible. Given that, whatever you choose, you can safely assume Omega will have correctly predicted that choice. To what extent, then, is pre-commitment distinguishable from deciding on the spot?
In a sense there is an implicit pre-commitment in the structure of the problem; while you have not pre-committed to a choice on this specific problem, you are essentially pre-committed to a decision-making algorithm.
Eliezer's argument, if I understand it, is that any decision-making algorithm that results in two-boxing is by definition irrational due to giving a predictably bad outcome.
Replies from: orthonormal, Furcas, grobstein↑ comment by orthonormal · 2009-04-04T00:59:58.103Z · LW(p) · GW(p)
In a sense there is an implicit pre-commitment in the structure of the problem; while you have not pre-committed to a choice on this specific problem, you are essentially pre-committed to a decision-making algorithm.
That's an interesting, and possibly fruitful, way of looking at the problem.
↑ comment by Furcas · 2009-04-03T23:30:41.495Z · LW(p) · GW(p)
Pre-commitment is different from deciding on the spot because once you're on the spot, there is nothing, absolutely nothing you can do to change what's in box B. It's over. It's a done deal. It's beyond your control.
My understanding of Eliezer's argument is the same as yours. My objection is that two-boxing doesn't actually give a bad outcome. It gives the best outcome you can get given the situation you're in. That you don't know what situation you're in until after you've opened box B doesn't change that fact. As Eliezer is so fond of saying, the map isn't the territory.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-04-04T00:59:58.035Z · LW(p) · GW(p)
Pre-commitment is different from deciding on the spot because once you're on the spot, there is nothing, absolutely nothing you can do to change what's in box B.
If your decision on the spot is 100 percent predictable ahead of time, as is explicitly assumed in the problem construction, you are effectively pre-committed as far as Omega is concerned. You, apparently, have essentially pre-committed to opening two boxes.
My objection is that two-boxing doesn't actually give a bad outcome. It gives the best outcome you can get given the situation you're in.
And yet, everyone who opens one box does better than the people who open two boxes.
You seem to have a very peculiar definition of "best outcome".
Replies from: Furcas↑ comment by Furcas · 2009-04-04T02:03:06.509Z · LW(p) · GW(p)
If your decision on the spot is 100 percent predictable ahead of time, as is explicitly assumed in the problem construction, you are effectively pre-committed as far as Omega is concerned. You, apparently, have essentially pre-committed to opening two boxes.
What I meant by 'pre-commitment' is a decision that we can make if and only if we know about Newcomb-like problems before being faced with one. In other words, it's a decision that can affect what Omega will put in box B. That Omega can deduce what my decision will be doesn't mean that the decision is already taken.
And yet, everyone who opens one box does better than the people who open two boxes.
And every fatass who competes against an Olympic athlete in the scenario I described above does 'better' than the athlete. So what? Unless the athlete knows about the competition's rules ahead of time and eats non-stop to turn himself into a fatass, there's not a damn thing he can do about it, except try his best once the competition starts.
You seem to have a very peculiar definition of "best outcome".
It seems too obvious to say, but I guess I have to say it. "The best outcome" in this context is "the best outcome that it is possible to achieve by making a decision". If box B contains nothing, then the best outcome that it is possible to achieve by making a decision is to win a thousand dollars. If box B contains a million dollars, then the best outcome that it is possible to achieve by making a decision is to win one million and one thousand dollars.
Well, I don't see how I can explain myself more clearly than this, so this will be my last comment on this subject. In this thread. This week. ;)
Replies from: Kenny↑ comment by Kenny · 2009-04-12T16:59:59.419Z · LW(p) · GW(p)
This exchange has finally imparted a better understanding of this problem for me.
If you pre-commit now to always one-box – and you believe that about yourself – then deciding to one-box when Omega asks you is the best decision.
If you are uncertain of your commitment then you probably haven't really pre-committed! I haven't tried to math it, but I think your actual decision when Omega arrives would depend on the strength of your belief about your own pre-commitment. [Though a more-inconvenient possible world is the one in which you've never heard of this, or similar, puzzles!]
Now I grok why rationality should be self-consistent under reflection.
Replies from: Furcas↑ comment by Furcas · 2009-04-13T02:46:52.305Z · LW(p) · GW(p)
Small nitpick: If you've really pre-committed to one-boxing, there is no decision to be made once Omega has set up the boxes. In fact, the thought of making a decision won't even cross your mind. If it does cross your mind, you should two-box. But if you two-box, you now know that you haven't really pre-committed to one-boxing. Actually, even if you decide to (mistakenly) one-box, you'll still know you haven't really pre-committed, or you wouldn't have had to decide anything on the spot.
In other words, Newcomb's problem can only ever involve a single true decision. If you're capable of pre-commitment (that is, if you know about Newcomb-like problems in advance and if you have the means to really pre-commit), it's the decision to pre-commit, which precludes any ulterior decision. If you aren't capable of pre-commitment (that is, if at least one of the above conditions is false), it's the on-the-spot decision.
↑ comment by grobstein · 2009-04-03T23:26:07.716Z · LW(p) · GW(p)
Eliezer's argument, if I understand it, is that any decision-making algorithm that results in two-boxing is by definition irrational due to giving a predictably bad outcome.
So he's assuming the conclusion that you get a bad outcome? Golly.
Replies from: HughRistik, William↑ comment by HughRistik · 2009-04-03T23:35:45.288Z · LW(p) · GW(p)
True, we don't know the outcome. But we should still predict that it will be bad, due to Omega's 99% accuracy rate.
Don't mess with Omega.
↑ comment by William · 2009-04-03T23:31:02.696Z · LW(p) · GW(p)
The result of two-boxing is a thousand dollars. The result of one-boxing is a million dollars. By definition, a mind that always one-boxes receives a better payout than one that always two-boxes, and therefore one-boxing is more rational, by definition.
Replies from: orthonormal, Furcas↑ comment by orthonormal · 2009-04-04T01:36:58.905Z · LW(p) · GW(p)
See Arguing "By Definition". It's particularly problematic when the definition of "rational" is precisely what's in dispute.
↑ comment by Furcas · 2009-04-03T23:41:32.828Z · LW(p) · GW(p)
The result of two-boxing is a thousand dollars more than you would have gotten otherwise. The result of one-boxing is a thousand dollars less than you would have gotten otherwise. Therefore two-boxing is more rational, by definition.
What determines whether you'll be in a 1M/1M+1K situation or in a 0/1K situation is the kind of mind you have, but in Newcomb's problem you're not given the opportunity to affect what kind of mind you have (by pre-commiting to one-boxing, for example), you can only decide whether to get X or X+1K, regardless of X's value.
Replies from: GuySrinivasan↑ comment by SarahNibs (GuySrinivasan) · 2009-04-04T00:48:42.429Z · LW(p) · GW(p)
Suppose for a moment that one-boxing is the Foo thing to do. Two-boxing is the expected-utility-maximizing thing to do. Omega decided to try to reward those minds which it predicts will choose to do the Foo thing with a decision between doing the Foo thing and gaining $1000000, and doing the unFoo thing and gaining $1001000, while giving those minds which will choose to do the unFoo thing a decision between doing the Foo thing and gaining $0 and doing the unFoo thing and gaining $1000.
The relevant question is whether there is a generalization of the computation Foo which we can implement that doesn't screw us over on all sorts of non-Newcomb problems. Drescher for instance claims that acting ethically implies, among other things, doing the Foo thing, even when it is obviously not the expected-utility-maximizing thing.
↑ comment by thomblake · 2009-04-03T22:36:55.251Z · LW(p) · GW(p)
You're assuming that you can just choose how you go about making decisions every time you make a decision. If you're not granted that assumption, Furcas's analysis is spot on. Two-boxers succeed in other places and also on Newcomb; one-boxers fail in many situations that are similar to Newcomb but not as nice. So you need to decide what sort of decisions you'll make in general, and that will (arguably) determine how much money is in the boxes in this particular experiment.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-04-04T07:50:09.212Z · LW(p) · GW(p)
one-boxers fail in many situations that are similar to Newcomb but not as nice.
Such as?
(Is this meant to refer to failures of evidential decision theory? There are other options.)
comment by Marshall · 2009-04-03T16:18:51.559Z · LW(p) · GW(p)
Winning is all about choosing the right target. There will be disagreement about which target is right. After hitting the target it will sometimes be revealed that it was the wrong target. Not hitting the right target will sometimes be winning. Rationality lies in the evaluation before, after, and whilst aiming.
Somewhat like the game of darts.
comment by taw · 2009-04-03T21:09:13.481Z · LW(p) · GW(p)
Please ... Newcomb is a toy non-mathematizable problem and not a valid argument for anything at all. There must be a better example, or the entire problem is invalid.
Replies from: gwern, grobstein↑ comment by gwern · 2009-04-03T22:09:32.549Z · LW(p) · GW(p)
There must be a better example, or the entire problem is invalid.
I've long thought that voting in general is largely isomorphic to Newcomb's. If you cop out and don't vote, then everyone like you will reason the same way and not vote, and your favored candidates/policies will fail; but if you vote then the reverse might happen; and if you then carry it one more step... If you could just decide to one-box/vote then maybe everyone else like you will.
Replies from: cousin_it↑ comment by cousin_it · 2009-04-04T01:11:14.529Z · LW(p) · GW(p)
Sorry, in voting you don't play the singular boss role that you play in Newcomb's problem. But it's amusing how far democracy proponents will go to convince themselves that their vote matters. :-)
Replies from: gwern↑ comment by gwern · 2009-04-04T01:34:09.627Z · LW(p) · GW(p)
I haven't worked it out rigorously (else you would've seen a post on it by now!), but it seems to me in close elections (Florida 2000, say) that thought process could be valid. Considering how small the margins sometimes are, and how much of the electorate doesn't vote, it doesn't strike me as implausible that there are enough people thinking like me to make a difference.
And of course we could just specify as a condition that you and yours are a bloc powerful enough to affect the election. (Maybe you're numerous, maybe there're only a few electors, whatever.)
But it's amusing how far democracy proponents will go to convince themselves that their vote matters.
The problem with irrelevant ad hominems is that they're very often based on flimsy evidence and so often wrong. I didn't even vote last year because I figured my vote didn't matter. I was not surprised.
Replies from: cousin_it↑ comment by cousin_it · 2009-04-04T10:56:07.541Z · LW(p) · GW(p)
In Newcomb's problem you're the boss, e.g. you can assign yourself a suitable utility function beforehand to keep the million and screw the thousand. Not so in voting - no matter what you think, other people won't change. They don't have anything conditioned on the outcome of your thought process, as in Newcomb's. No, not even if "people thinking like you" are a bloc. You still can't influence them. It's a coordination game, not Newcomb's.
Your reasoning resembles the "twins fallacy" in the Prisoner's Dilemma: the idea that just by choosing to cooperate you can magically force your identical partner to do the same. Come to think of it, PD sounds like a better model for voting to me.
Update: Eliezer seems to think PD and Newcomb's are related. Not sure why.
↑ comment by grobstein · 2009-04-03T21:10:59.771Z · LW(p) · GW(p)
Please ... Newcomb is a toy non-mathematizable problem and not a valid argument for anything at all.
Why?
Replies from: taw↑ comment by taw · 2009-04-03T22:32:42.760Z · LW(p) · GW(p)
As far as I can tell, the Newcomb problem exists only in English, and only because a completely aphysical causality loop is introduced. Every mathematization I've ever seen collapses it to either a trivial one-boxing problem or a trivial two-boxing problem.
If anybody wants this problem to be treated seriously, maths first to show the problem is real! Otherwise, we're really not much better than if we were discussing quotes from the Bible.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2009-04-05T23:27:44.300Z · LW(p) · GW(p)
If you've seen formalizations, then it is formalizable. What are the formalizations?
Since I think the answer is obviously one-box, it doesn't surprise me that there is a formalization in which that answer is obvious. I have never seen a formalization in which the answer is two-box. I have seen the argument that "causal decision theory" (?) chooses to two-box. People jump from that to the conclusion that the answer is two-box, but that is an idiotic conclusion. Given the premise, the correct conclusion is that this decision theory is inadequate. Anyhow, I don't believe the argument. I interpret it simply as the decision theory failing to believe the statement of the problem. There is a disconnect between the words and the formalization of that decision theory.
The issue is not about formalizing Newcomb's problem; the problem is creating a formal decision theory that can understand a class of scenarios including Newcomb's problem. (It should be possible to tweak the usual decision theory to make it capable of believing Newcomb's problem, but I don't think that would be adequate for a larger class of problems.)
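For what it's worth, one toy way to write the scenario down without any backwards causation is to let Omega fill the boxes by running the agent's own deterministic policy. This is only a sketch of the setup, not of a decision theory that handles the wider class of problems:

    # Newcomb's scenario with a perfect predictor and no reverse causality:
    # Omega fills the boxes by simulating the agent's (deterministic) policy.

    def omega_fill_boxes(policy):
        """Omega predicts by running the policy, then fills box B accordingly."""
        predicted_choice = policy()          # 'one' or 'two'
        box_a = 1_000
        box_b = 1_000_000 if predicted_choice == "one" else 0
        return box_a, box_b

    def play(policy):
        box_a, box_b = omega_fill_boxes(policy)
        choice = policy()                    # the agent's actual choice
        return box_b if choice == "one" else box_a + box_b

    one_boxer = lambda: "one"
    two_boxer = lambda: "two"

    print(play(one_boxer))   # 1000000
    print(play(two_boxer))   # 1000

The hard part, as noted above, is a general decision theory that recommends the higher-scoring policy here without mishandling ordinary problems.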
comment by Annoyance · 2009-04-03T17:01:57.166Z · LW(p) · GW(p)
"If you fail to achieve a correct answer, it is futile to protest that you acted with propriety."
But "achieving a correct answer" isn't the same thing as winning. Thus, the phrase "rationalists should win" is not a proper equivalence for the idea you wished to communicate. Sometimes acting with propriety involves losing - at least in a limited, specific context. Arguably, if you act with propriety, you always win.
It's not about winning or losing, it's how you play the game. Except that there may not be a game, and we're not sure what the rules are, or even that they are.
Replies from: thomblake↑ comment by thomblake · 2009-04-03T20:03:09.114Z · LW(p) · GW(p)
Sometimes acting with propriety involves losing - at least in a limited, specific context. Arguably, if you act with propriety, you always win.
These two sentences seem inconsistent. Care to unpack?
EDIT: replaced 'contradictory' with 'inconsistent'. Logical quibble.
Replies from: Annoyance↑ comment by Annoyance · 2009-04-04T14:31:13.451Z · LW(p) · GW(p)
Destroying an Empire to win a war is no victory. And ending a battle to save an Empire is no defeat. - attributed to Kahless the Unforgettable
There is such a thing as a Pyrrhic victory. Likewise, some kinds of failure can be more valuable than ostensible success.
There is always a greater perspective. From that greater perspective, what a lesser perspective judges to be a win may be a loss, and vice versa.