Rationalists lose when others choose

post by PhilGoetz · 2009-06-16T17:50:07.749Z · LW · GW · Legacy · 58 comments

At various times, we've argued over whether rationalists always win.  I posed Augustine's paradox of optimal repentance to argue that, in some situations, rationalists lose.  One criticism of that paradox is that its strongest forms posit a God who penalizes people for being rational.  My response was, So what?  Who ever said that nature, or people, don't penalize rationality?

There are instances where nature penalizes the rational.  For instance, revenge is irrational, but being thought of as someone who would take revenge gives advantages.1

EDIT:  Many, many people immediately jumped on this, because revenge is rational in repeated interactions.  Sure.  Note the "There are instances" at the start of the sentence.  If you admit that someone, somewhere, once faced a one-shot revenge problem, then cede the point and move on.  It's just an example anyway.

Here's another instance that more closely resembles the God who punishes rationalism, in which people deliberately punish rational behavior:

If rationality means optimizing expected utility, then both social pressures and evolutionary pressures tend, on average, to bias us away from that optimum and towards altruism.  (I'm going to assume you know this literature rather than explain it here.)  An employer and a lover would both rather have someone who is irrationally altruistic.  This means that, on this particular (and important) dimension of preference, rationality correlates with undesirability.2

<ADDED>: I originally wrote "optimizing expected selfish utility", merely to emphasize that an agent, rational or not, tries to maximize its own utility function.  I do not mean that a rational agent appears selfish by social standards.  A utility-maximizing agent is selfish by definition, because its utility function is its own.  Any altruistic behavior that results happens only out of self-interest.  You may argue that pragmatics argue against this use of the word "selfish" because it thus adds no meaning.  Fine.  I have removed the word "selfish".

However, it really doesn't matter.  Sure, it is possible to make a rational agent that acts in ways that seem unselfish.  Irrelevant.  Why would the big boss settle for "unselfish" when he can get "self-sacrificing"?  It is often possible to find an irrational agent that acts more in your interests than any rational agent will.  The rational agent aims for equitable utility deals.  The irrational agent can be inequitable in your favor.

This whole barrage of attacks on the use of the word 'selfish' is yet again missing the point.  If you read the entire post, you'll see that it doesn't matter whether you think that rational agents are selfish, or that they can reciprocate.  You just have to admit that most persons A would rather deal with an agent B having an altruistic bias, or a bias towards A's utilities, than with an agent having no such bias.  The level of selfishness/altruism of the posited rational agent is irrelevant, because adding a bias towards person A's utility is always better for person A.  Comparing "rational unbiased person" to "altruistic idiot" is not the relevant comparison here.  Compare instead "person using decision function F with no bias" vs. "person using decision function F with excess altruism".3
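
To make that comparison concrete, here is a minimal sketch (my illustration, with made-up outcomes and numbers, nothing from the post): the same decision function F, run once with no bias and once with an added weight on person A's utility. In this toy setup, whichever outcome F picks, the biased version picks one at least as good for A.

```python
# Toy comparison (made-up outcomes and numbers): the same decision function F,
# with and without an added bias towards person A's utility, judged from A's side.

outcomes = {
    "deal tilted towards agent": {"agent": 8.0, "A": 2.0},
    "equitable deal":            {"agent": 5.0, "A": 5.0},
    "deal tilted towards A":     {"agent": 3.0, "A": 8.0},
}

def choose(bias_towards_A: float) -> str:
    """Decision function F: pick the outcome maximizing own utility plus the bias term."""
    return max(outcomes, key=lambda o: outcomes[o]["agent"] + bias_towards_A * outcomes[o]["A"])

unbiased = choose(bias_towards_A=0.0)  # -> "deal tilted towards agent"
biased   = choose(bias_towards_A=1.0)  # -> "deal tilted towards A"

# From A's perspective, the agent with the extra bias is (weakly) preferable:
assert outcomes[biased]["A"] >= outcomes[unbiased]["A"]
```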

(Also note that, in the fMRI example, people don't get to see your utility function.  They can't tell that you have a wonderful Yudkowskian utility function that will make you reliable.  They can only see that you don't have the bias that most people have, the bias that would make them better employees.)

The real tricky point of this argument is whether you can define "irrational altruism" in a way that doesn't simply mean "utility function that values altruism".  You could rephrase "Choice by others encourages bias toward altruism" as "Choice by others selects for utility functions that value altruism highly".

Does an ant have an irrationally high bias towards altruism?  It may make more sense to say that an ant is less of an individual, and more of a subroutine, than a human is.  So it is perfectly all right with me if you prefer to say that these forces select for valuing altruism, rather than saying that they select for bias.  The outcome is the same either way:  When one agent gets to choose which other agents succeed, and that agent can observe their biases and/or decision functions, those other agents are under selection pressure to become less like individuals and more like subroutines of the choosing agent.  You can call this "altruistic bias" or you can call it "less individuality".

</ADDED>

There are a lot of other situations where one person chooses another person, and they would rather choose someone who is biased, in ways encouraged by society or by genetics, than someone more rational.  When giving a security clearance, for example, you would rather give it to someone who loved his country emotionally, than to someone who loved his country rationally; the former is more reliable, while the rational person may suddenly reach an opposite conclusion on learning one new fact.

It's hard to tell how altruistic someone is.  But the May 29, 2009 issue of Science has an article called "The Computation of Social Behavior".  It's extremely skimpy on details, especially for a 5-page article; but the gist of it is that they can use functional magnetic resonance imaging to monitor someone making decisions, and extract some of that person's basic decision-making parameters.  For example (they mention this, although it isn't clear whether they can extract this particular parameter), their degree of altruism (the value they place on someone else's utility vs. their own utility).  Unlike a written exam, the fMRI exam can't be faked; your brain will reveal your true parameters even if you try to lie and game the exam.
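
For a sense of what "extracting a degree-of-altruism parameter" could mean, here is a toy sketch (mine, not the model used in the Science article): assume the chooser maximizes own_payoff + alpha * other_payoff, and recover the alpha that best explains a handful of made-up observed choices. The point of the fMRI version is that the estimate would come from brain activity rather than from choices you control, which is what would make it hard to game.

```python
# Toy parameter extraction (not the Science paper's method): assume a chooser
# maximizes own_payoff + alpha * other_payoff, and recover the altruism weight
# alpha that best explains which option they picked in each (made-up) trial.

trials = [
    # each trial: a list of (own_payoff, other_payoff) options, and the index chosen
    ([(10, 0), (7, 6)], 1),
    ([(9, 1), (5, 5)], 1),
    ([(8, 2), (2, 3)], 0),
]

def choices_explained(alpha: float) -> int:
    """Count how many observed choices are consistent with this altruism weight."""
    explained = 0
    for options, chosen in trials:
        predicted = max(range(len(options)),
                        key=lambda i: options[i][0] + alpha * options[i][1])
        explained += (predicted == chosen)
    return explained

# crude grid search over candidate weights 0.0 .. 3.0
best_alpha = max((i / 10 for i in range(31)), key=choices_explained)
print(best_alpha)  # a weight around 1.1 fits all three toy choices
```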

So, in the future, being rational may make you unemployable and unlovable, because you'll be unable to hide your rationality.

Or maybe it already does?

ADDED:

Here is the big picture:  The trend in the future is likely to be one of greater and greater transparency of every agent's internal operations, whether this is via fMRI or via exchanging source code.  Rationality means acting to achieve your goals.  There will almost always be other people who are more powerful than you and who have resources that you need, and they don't want you to achieve your goals.  They want you to achieve their goals.  They will have the power and the motive to select against rationality (or to avoid building it in in the first place).

All our experience is with economic and behavioral models that assume independent, self-interested agents.  In a world where powerful people can examine the utility functions of less-powerful people, and reward them for rewriting their utility functions (or just select ones with utility functions that are favorable to the powerful people, and hence irrational), having rational, self-interested agents is not the equilibrium outcome.

In a world in which agents like you or me are manufactured to meet the needs of more powerful agents, even more so.

You may claim that an agent can be 'rational' while trying to attain the goals of another agent.  I would instead say that it isn't an agent anymore; it's just a subroutine.

The forces I am discussing in this post try to turn agents into subroutines.  And they are getting stronger.

 

1 Newcomb's paradox is, strangely, more familiar to LW readers than revenge is.  I suggest replacing discussions of one-boxing with discussions of taking revenge; I think the paradoxes are very similar, but the former is more confusing and further removed from reality.  Its main advantage is that it prevents people from being distracted by discussing ways of fooling people about your intentions - which is not the solution evolution chose to that problem.

2 I'm making basically the same argument that Christians make when they say that atheists can't be trusted.  Empirical rejection of that argument does not apply to mine, for two reasons:

  1. Religions operate on pure rewards-based incentives, and hence destroy the altruistic instinct; therefore, I intuit that religious people have a disadvantage rather than an advantage compared to altruists WRT altruism.
  2. Religious people can sometimes be trusted more than atheists; the problem is that some of the things they can be trusted to do are crazy.

3 This is something LW readers do all the time:  Start reading a post, then stop in the middle and write a critical response addressing one perceived error whose truth or falsity is actually irrelevant to the logic of the post.

58 comments

comment by orthonormal · 2009-06-16T20:02:20.062Z · LW(p) · GW(p)

It's truly amazing just how much of the posts and discussions on LW you repeatedly ignore, Phil. There is a plurality opinion here that it can be rational to execute a strategy which includes actions that don't maximize utility when considered as one-shot actions, but such that the overall strategy does better.

I can genuinely understand disagreement on this proposal, but could you at least acknowledge that the rest of us exist and say things like "first-order rationality finds revenge irrational" or "altruistic sacrifices that violate causal decision theory" instead?

Replies from: Eliezer_Yudkowsky, Vladimir_Nesov, PhilGoetz
comment by Vladimir_Nesov · 2009-06-17T15:55:51.873Z · LW(p) · GW(p)

"first-order rationality finds revenge irrational"

I'm not sure what you mean by "first order rationality". But whatever the definition, it seems that it's not first order rationality itself that finds revenge irrational, but your own judgment of value, that depends on preferences. An agent may well like hurting people who previously hurt it (people who have a property of having previously hurt it).

Replies from: orthonormal
comment by orthonormal · 2009-06-17T16:26:41.435Z · LW(p) · GW(p)

Huh— a Google search returns muddled results. I had understood first-order (instrumental) rationality to mean something like causal decision theory: that given a utility function, you extrapolate out the probable consequences of your immediate options and maximize the expected utility. The problem with this is that it doesn't take into account the problems with being modeled by others, and thus leaves you open to being exploited (Newcomblike problems, Chicken) or losing out in other ways (known-duration Prisoner's Dilemma).

I was also taking for granted what I assumed to be the setup with the revenge scenario: that the act of revenge would be a significant net loss to you (by your utility function) as well as to your target. (E.g. you're the President, and the Russians just nuked New York but promised to stop there if you don't retaliate; do you launch your nukes at Russia?)

Phil's right that a known irrational disposition towards revenge (which evolved in us for this reason) could have deterred the Russians from nuking NYC in the first place, whereas they knew they could get away with it if they knew you're a causal decision theorist. But the form of decision process I'm considering (optimizing over strategies, not actions, while taking into account others' likely decision algorithms given a known strategy for me) also knowably avenges New York, and thus deters the Russians.
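
A toy version of that deterrence logic (my own payoff numbers, purely illustrative): the adversary simulates your strategy and attacks only if it predicts no retaliation, so the predictably vengeful strategy never has to pay revenge's local cost.

```python
# Toy deterrence calculation (made-up payoffs): the adversary simulates your
# strategy and attacks only if it predicts you will not retaliate.

PAYOFF = {  # your utility for each outcome
    "no attack": 0,
    "attacked, no retaliation": -100,
    "attacked, retaliation": -150,  # revenge's local cost (unreached in this simple model)
}

def outcome(you_would_retaliate: bool) -> str:
    adversary_attacks = not you_would_retaliate  # it can read your strategy
    if not adversary_attacks:
        return "no attack"
    return "attacked, retaliation" if you_would_retaliate else "attacked, no retaliation"

print(PAYOFF[outcome(you_would_retaliate=False)])  # -100: the exploitable strategy gets attacked
print(PAYOFF[outcome(you_would_retaliate=True)])   #    0: the attack is deterred
```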

EDIT: First paragraph was a reply to Vladimir's un-edited comment, in which he also asked what definition of first-order rationality I meant.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-17T16:45:00.498Z · LW(p) · GW(p)

Sorry for the confusion with re-editing. I took out the question after deciding that by first-order rational decisions you most likely meant those that don't require you to act as if you believe something you don't (that is, believe to be false), which is often practically impossible. On reflection, this doesn't fit either.

comment by PhilGoetz · 2009-06-17T15:49:28.264Z · LW(p) · GW(p)

Okay. First-order rationality finds revenge irrational. I'm not ignoring it. It is simply irrelevant to the point I was making. A person who does your will because it makes them happy to do so, or because they are irrationally biased to do so, is more reliable than one who does your will as long as his calculus tells him to.

Replies from: orthonormal
comment by orthonormal · 2009-06-17T16:37:54.622Z · LW(p) · GW(p)

A person who does your will because it makes them happy to do so, or because they are irrationally biased to do so, is more reliable than one who does your will as long as his calculus tells him to.

Not if the latter explicitly exhibits the form of that calculus; then you can extrapolate their future decisions yourself, more easily than you can extrapolate the decisions of the former. Higher-order rationality includes finding a decision algorithm which can't be exploited if known in this manner.

Of course, actually calculating and reliably acting accordingly is a high standard for unmodified humans, and it's a meaningful question whether incremental progress toward that ideal will lead to a more reliable or less reliable agent. But that's an empirical question, not a logical one.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-06-17T16:58:33.444Z · LW(p) · GW(p)

Not if the latter explicitly exhibits the form of that calculus; then you can extrapolate their future decisions yourself, more easily than you can extrapolate the decisions of the former.

More easily? It's easier to predict decisions based on a calculus than decisions based on stimulus-response? That's simply false.

Note that in the fMRI example, it is impossible to examine the calculus. You can only examine the level of bias. There is no way for somebody to say, "Oh, he's unbiased, but he has an elaborate Yudkowskian utility function that will lead him to act in ways favorable to me."

comment by Nick_Tarleton · 2009-06-16T19:00:57.804Z · LW(p) · GW(p)

If rationality means optimizing expected selfish utility

...but it doesn't (except in the trivial sense that says any action I take to achieve my values is thus "selfish").

Replies from: Vladimir_Nesov, Psychohistorian, timtyler, PhilGoetz
comment by Vladimir_Nesov · 2009-06-16T19:36:00.028Z · LW(p) · GW(p)

And it may be perfectly rational (of high instrumental value) to be significantly altruistic (in your behavior), even if you place no terminal value whatsoever on helping other people, if it's what it takes to live comfortably in society, and you value your own comfort...

Replies from: PhilGoetz
comment by PhilGoetz · 2009-06-17T16:03:24.396Z · LW(p) · GW(p)

Yes, thank you. I think Eliezer, Nick, and the others complaining about this are confusing "acting selfishly" with "acting in a way that society judges as selfish".

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-17T16:16:09.439Z · LW(p) · GW(p)

You are not helping by being imprecise.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-06-17T16:38:08.723Z · LW(p) · GW(p)

If I had said "acting selfishly", that would be imprecise. I said "optimizing expected selfish utility", which is precise.

ADDED: Well, obviously not precise enough.

comment by Psychohistorian · 2009-06-16T19:27:53.480Z · LW(p) · GW(p)

If rationality means optimizing expected selfish utility

This is a convenient word swap. Simplifying slightly, and playing a little taboo, we get:

"If you have a strictly selfish utility function, and you have a system of thinking that is especially good at satisfying this function, people will never trust you where your interests may coincide."

Well, yes. Duh.

But if people actually liked your utility function, they'd want you to be more, not less, rational. That is, if both my lover and I value each others' utility about as much as our own, we both want each other to be rational, because we'd be maximizing a very similar utility function. If, as your example requires, my coefficient for my lover's utility is zero, they'd want me to be irrational precisely because they want my behaviour to maximize a term that has no weight in my utility function (unless of course their utility function also has a zero coefficient for their utility, which would be unusual).

Rationality, as generally used on this site, refers to a method of understanding the world rather than a specific utility function. Because it has been redefined here, this seems neither insightful nor a serious problem for rationality.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-06-17T16:07:04.276Z · LW(p) · GW(p)

Rationality, as generally used on this site, refers to a method of understanding the world rather than a specific utility function.

Most people here, when they say an agent is rational, mean that agent maximizes the expected value of its utility function. That is the definition I was using. That means it is a selfish utility maximizer - selfish because it maximizes its own utility.

I used the term because I want to contrast agents that maximize their own utility functions, with agents that rewrite their utility functions to incorporate someone else's, or that have an extra bias towards altruism (and thus are not maximizing their utility function).

Replies from: Psychohistorian, Psychohistorian
comment by Psychohistorian · 2009-06-17T19:57:34.822Z · LW(p) · GW(p)

The contents of utility functions are arational. There is nothing contradictory about a rational paperclip maximizer. If it acts in ways that prevents it from maximizing paperclips, it would be an irrational paperclip maximizer. Rationality is about how you pursue your utility function (among other things), not what that utility function seeks to maximize.

If you have a strictly selfish utility function, then, yes, acting to maximize it would be rational. Not everyone has a strictly selfish utility function. In fact, I would go so far as to say that the vast majority of people do not have strictly selfish utility functions. I have seen nothing on this site that would suggest a strictly selfish utility function is any more rational than any other utility function.

Thus, this conclusion really is trivial. You've used "rational" to imply a highly specific (and, I'm pretty sure, uncommon) utility function, when the use of the term on LW generally has no implication about the contents of a utility function. If you do not force "selfish utility function" into rationality, your conclusion does not follow from your premises.

I can, using the same method, prove that all rationalists can breath underwater, so long as "rationalist" means "fish." That's what I mean by trivial.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-06-18T16:30:49.610Z · LW(p) · GW(p)

By "selfish utility function" I mean exactly the same as "private utility function". I mean that it is that agent's utility function.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-18T16:33:47.989Z · LW(p) · GW(p)

The problem and confusion with this term is that you call the utility function "selfish" even when the agent cares about nothing except helping others. I think this is about the only reason people complain about this terminology or misinterpret you, thinking that whatever concept you mean by this term should somehow exclude helping others from terminal values.

comment by Psychohistorian · 2009-06-17T19:24:23.149Z · LW(p) · GW(p)

The most obvious interpretation of "selfish utility maximizer" is someone who has a selfish utility function, and if you meant something else, you should have clarified. The context suggests that "selfish utility function" is exactly what you meant. Moreover, your conclusions require that "selfish utility function" is what you meant. Under this reading, being a selfish utility maximizer has no relationship to being rational; the contents of utility functions are arational. Because rationality does not imply anything about your utility function, your conclusions simply don't follow. You argument seems to center on this:

Rationality means acting to achieve your goals. There will almost always be other people who are more powerful than you and who have resources that you need, and they don't want you to achieve your goals. They want you to achieve their goals.

"They don't want you to achieve your goals" is probably, in almost all cases where you apply it, false. My lover probably does want me to achieve my goals. My employer is, at the worst, indifferent as to whether I achieve my goals or not. Except of course where my goals coincide(oppose) their goals, then they want me to succeed(fail). But "your" and "their" in this context are not inherently oppositional, and your entire argument revolves around assuming that they are. As it is, there is simply no reason for them to prefer an irrational actor to a rational one. They prefer someone who achieves their goals. Being rational is not strictly better or worse than being irrational; it's a combination of their utility function and how efficiently they pursue their utility function. Rationality is only half of that and, in many ways, the less important half.

comment by timtyler · 2009-06-17T09:54:30.975Z · LW(p) · GW(p)

That was pretty close to what "instrumental rationality" means. Utility functions are not /necessarily/ selfish - but the ones biology usually makes are.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-06-17T20:57:19.471Z · LW(p) · GW(p)

Yes, but also: If they're not selfish, then you're not looking at an independent rational agent.

Replies from: timtyler
comment by timtyler · 2009-06-17T22:05:27.413Z · LW(p) · GW(p)

Definitions of "instrumental rationality" make no mention of selfishness. The term seems like a distraction.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-06-18T22:31:17.038Z · LW(p) · GW(p)

Yes. It's a distraction. I greatly regret using it.

comment by PhilGoetz · 2009-06-17T16:04:12.266Z · LW(p) · GW(p)

Yes, it's trivial. That doesn't make it untrue. "Selfish" = trying to achieve your values, rather than a blend of your values and other people's values.

Replies from: thomblake
comment by thomblake · 2009-06-17T16:45:20.615Z · LW(p) · GW(p)

'selfish', as it's used in ethics and ordinary speech, is a vice involving too much concern for oneself with respect to others. If virtue theory is correct, acting selfishly is bad for oneself.

comment by Jasen · 2009-06-16T19:32:25.667Z · LW(p) · GW(p)

There are instances where nature penalizes the rational. For instance, revenge is irrational, but being thought of as someone who would take revenge gives advantages.

I would generally avoid calling a behavior irrational without providing specific context. Revenge is no more irrational than a peacock's tail. They are both costly signals that can result in a significant boost to your reputation in the right social context...if you are good enough to pull them off.

Replies from: randallsquared
comment by randallsquared · 2009-06-16T20:23:12.749Z · LW(p) · GW(p)

Well, "always revenge" is more rational than "always forgive", anyway. I would expect most people here to know about Axelrod's tit-for-tat, so maybe Phil means something else by revenge than the obvious.
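
For anyone who hasn't seen it, a compact sketch of the result being gestured at (illustrative code; standard payoffs; the "prober" opponent is a deliberate simplification of mine): "always forgive" gets exploited indefinitely, while tit-for-tat's single round of revenge restores cooperation.

```python
# Illustrative iterated prisoner's dilemma with standard payoffs and a simple
# "prober" opponent (my own simplification): it defects once as a test and keeps
# defecting forever unless that test defection gets punished.

R, T, S, P = 3, 5, 0, 1  # reward, temptation, sucker's payoff, punishment

def payoff(mine: str, theirs: str) -> int:
    return {"CC": R, "CD": S, "DC": T, "DD": P}[mine + theirs]

def opponent_move(round_idx: int, my_past_moves: list) -> str:
    if round_idx == 0:
        return "D"                                    # test defection
    return "C" if "D" in my_past_moves else "D"       # cooperates only once punished

def always_forgive(round_idx: int, their_past_moves: list) -> str:
    return "C"

def tit_for_tat(round_idx: int, their_past_moves: list) -> str:
    return "C" if round_idx == 0 else their_past_moves[-1]

def total_score(strategy, rounds: int = 20) -> int:
    my_moves, their_moves, total = [], [], 0
    for r in range(rounds):
        theirs = opponent_move(r, my_moves)
        mine = strategy(r, their_moves)
        total += payoff(mine, theirs)
        my_moves.append(mine)
        their_moves.append(theirs)
    return total

print(total_score(always_forgive))  # 0: exploited every round
print(total_score(tit_for_tat))     # 57: one act of revenge, then mutual cooperation
```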

Replies from: steven0461
comment by steven0461 · 2009-06-16T20:27:08.915Z · LW(p) · GW(p)

Not all revenge takes place in prisoner's dilemmas. I think somebody, preferably somebody more informed than me, should write LW posts on the dynamics of repeated Chicken (there was some literature out there on this last time I looked).

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-06-16T22:43:46.259Z · LW(p) · GW(p)

There are instances where nature penalizes the rational. For instance, revenge is irrational, but being thought of as someone who would take revenge gives advantages.

My decision theory which this margin is too small to contain, would, in fact, take revenge, as well as one-boxing on Newcomb's Problem, keeping its promise to Parfit's Hitchhiker, etcetera, so long as it believed the other could correctly simulate it, or attached high probability to being correctly simulated. (Nor would it be particularly difficult to simulate! The decision is straightforward enough.)

If rationality means optimizing expected selfish utility

And having gotten that far, I gave up on the article.

Replies from: SoullessAutomaton, PhilGoetz
comment by SoullessAutomaton · 2009-06-16T22:47:21.051Z · LW(p) · GW(p)

My decision theory which this margin is too small to contain,

I am probably not the only individual who remains curious as to when you might stumble upon a sufficiently spacious margin for this purpose.

comment by PhilGoetz · 2009-06-17T15:47:28.941Z · LW(p) · GW(p)

This is a pretty important point. I assume the word "selfish" is the one that gives you trouble? Why?

If you are imagining agents that don't optimize expected selfish utility, you're placing agenthood at the wrong level. (If a set of n agents all try to maximize the same joint utility, they are just n parts of a single agent.) Or you're imagining irrational agents.

If you meant that rational agents act in ways that don't appear to be selfish, then you're not actually disagreeing.

Replies from: pengvado
comment by pengvado · 2009-06-18T00:20:56.452Z · LW(p) · GW(p)

Imagine an agent that maximizes an altruistic utility function. It still only maximizes its own utility, no one else's. And its utility function wouldn't even directly depend on other agents' utility (you might or might not care about someone else's utility function, but a positive dependence on its value could cause a runaway feedback loop). But it does value other agents' health, happiness, freedom, etc. (i.e. most or all of the same inputs that would go into a selfish agent's utility function, except aggregated).

Two such agents don't have to have exactly the same utility function. As long as A values A's happiness, and B values A's happiness, then A and B can agree to take some action that makes A happier, even using ordinary causal decision theory with no precommitment mechanism.
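
A tiny sketch of the feedback-loop worry mentioned above (my own arbitrary numbers): if A's utility includes B's utility value with weight c, and vice versa, the mutual definition only settles down when c < 1, whereas valuing B's happiness (a fixed welfare term) avoids the loop entirely.

```python
# Toy sketch of the feedback-loop point (arbitrary numbers): A's utility includes
# B's utility value with weight c, and vice versa. Iterating the mutual definition
# settles only when c < 1; at c >= 1 it grows without bound.

def mutual_utility(c: float, happiness_a: float = 1.0, happiness_b: float = 1.0,
                   steps: int = 50) -> tuple:
    u_a = u_b = 0.0
    for _ in range(steps):
        u_a, u_b = happiness_a + c * u_b, happiness_b + c * u_a
    return u_a, u_b

print(mutual_utility(0.5))  # converges near (2.0, 2.0), i.e. (h_a + c*h_b) / (1 - c*c)
print(mutual_utility(1.0))  # keeps growing as steps increase: the runaway loop
```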

Replies from: PhilGoetz
comment by PhilGoetz · 2009-06-18T16:41:16.558Z · LW(p) · GW(p)

There is no such thing as an altruistic utility function. By "selfish" I mean exactly that it maximizes its own utility. It doesn't matter if it values the purring of kittens and the happy smiles of children. It is still selfish. An unselfish agent is one that lets you rewrite its utility function.

You are making exactly the same misinterpretation that almost every commenter here is making, and it is based on reading using pattern-matching instead of parsing. Just forget the word selfish. I have removed it from the original statement. I am sorry that it confused people, and I understand why it could.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-06-18T16:51:12.756Z · LW(p) · GW(p)

By your interpretation, using the word "selfish" will never add any extra information and "selfish utility maximizer" is a tautology.

If this is true, please stop using it. You're just confusing people since they're naturally expecting you to use the normal, non-tautological, more interesting definition of "selfish".

Replies from: PhilGoetz
comment by PhilGoetz · 2009-06-18T22:21:28.316Z · LW(p) · GW(p)

By your interpretation, using the word "selfish" will never add any extra information and "selfish utility maximizer" is a tautology.

Yes, you are correct. I'm sorry that I used the word selfish. If you had read my post before replying, you would have seen this sentence:

You may argue that pragmatics argue against this use of the word "selfish" because it thus adds no meaning. Fine. I have removed the word "selfish".

But, jeez, folks - can't any of you get past the use of the word 'selfish' and read the post? You are all off chasing red herrings. This is not an argument about whether rational agents are selfish or not. It does not make a difference to the argument I am presenting whether you believe rational agents are selfish or cooperative.

comment by Furcas · 2009-06-16T19:30:41.281Z · LW(p) · GW(p)

If you want to be thought of as someone who would take revenge, then it's rational to do what you can to obtain this status, which may or may not include actually taking revenge (you could boast about taking revenge on someone that no one you lied to is likely to meet, for example).

As for being subjected to a fMRI exam, I don't see how it's relevant. If nothing you can possibly do can have any effect on the result of the exam, then rationality (or irrationality) doesn't enter into it. Rationality is about decision-making and the beliefs that inform it; if the desired future is impossible to reach at the moment you make your decision, you haven't 'lost', because you were never in the game to begin with.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-06-17T16:01:10.640Z · LW(p) · GW(p)

If you want to be thought of as someone who would take revenge, then it's rational to do what you can to obtain this status, which may or may not include actually taking revenge (you could boast about taking revenge on someone that no one you lied to is likely to meet, for example).

If fMRI exams will detect that you will not actually take revenge, then faking it is impossible.

As for being subjected to a fMRI exam, I don't see how it's relevant. If nothing you can possibly do can have any effect on the result of the exam, then rationality (or irrationality) doesn't enter into it.

Try reading the post again. The question at issue is whether rationality always wins. If nothing you can do can make you, the rationalist, win, then rationality loses. That's part of the point.

Replies from: Furcas
comment by Furcas · 2009-06-17T17:57:19.778Z · LW(p) · GW(p)

Try reading the post again. The question at issue is whether rationality always wins. If nothing you can do can make you, the rationalist, win, then rationality loses. That's part of the point.

Then it's a trivially obvious point. There's no need to talk about mind-reading deities and fMRI exams; any scenario where the rationalist doesn't get what he wants because of circumstances beyond his control would be an equivalent example:

  • If a rationalist is fired because of the economic depression, then rationality 'loses'.

  • If a rationalist's wife leaves him because she's discovered she's a lesbian, then rationality 'loses'.

  • If a rationalist is hit by a meteor, then rationality 'loses'.

What makes your fMRI example seem different is that the thing that's beyond our control is having the kind of brain that leads to rational decision-making. This doesn't change the fact that we never had the opportunity to make a decision.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-06-17T18:27:28.529Z · LW(p) · GW(p)

A rationalist hit by a meteor was not hit because he was a rationalist. Completely different case.

Replies from: Furcas
comment by Furcas · 2009-06-17T19:12:09.207Z · LW(p) · GW(p)

The word "rationalist" is misleading here.

In your example, it's true that a person would be unemployable because he has the kind of brain that leads to rational decision-making. However, it's false that this person would be unemployable because he made a rational decision (since he hasn't made a decision of any kind).

Therefore, as far as rational behavior is concerned, a rationalist getting hit by a meteor and a rationalist being penalized because of a fMRI exam are equivalent scenarios.

Besides, being rational isn't having a particular kind of brain; it's behaving in a particular way, even according to your own definition, "optimizing expected selfish utility". Optimizing is something that an agent does; it's not a passive property of his brain.

comment by loqi · 2009-06-16T19:17:54.162Z · LW(p) · GW(p)

So, in the future, being rational may make you unemployable and unlovable, because you'll be unable to hide your rationality.

This seems either irrelevant or contradictory. If we're incapable of altering our behavior to account for rationality-punishers, then the issue is moot, it's just plain discrimination against a minority like any other. If we are capable, and we don't account for them, then we're not being rational.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-06-17T16:01:56.823Z · LW(p) · GW(p)

If altering your behavior to account for rationality-punishers requires training yourself to be irrational, the issue is not moot.

Replies from: loqi
comment by loqi · 2009-06-17T18:35:26.793Z · LW(p) · GW(p)

I still think what you're saying is contradictory. We're using "rationality" to mean "maximizing expected utility", correct? If we are aware that certain classes of attempts to do so will be punished, then we're aware that they will not in fact maximize our expected utility, so by definition such attempts aren't rational.

It seems like you're picking and choosing which counterfactuals "count" and which ones don't. How does punishment differ from any other constraint? If I inhabited a universe in which I had an infinite amount of time and space with which to compute my decisions, I'd implement AIXI and call it good. The universe I actually inhabit requires me to sacrifice that particular form of optimality, but that doesn't mean it's irrational to make theoretically sub-optimal decisions.

comment by Vladimir_Nesov · 2009-06-16T18:05:22.305Z · LW(p) · GW(p)

This is Bayesians vs. Barbarians all over again. If it's better for you to be seen as someone who precommits to certain behaviors, be that kind of person, even if the local choices made in accordance with the precommitment look disadvantageous. By failing to follow a commitment on one occasion, you may demolish the whole cause for which precommitment was made, and so if that cause is dear to you, don't be "clever", just stick to the plan.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-06-16T18:22:25.588Z · LW(p) · GW(p)

The Bayesians vs. Barbarians scenario is more complicated, because one can argue ways that a rational society could fend off the barbarians.

In this scenario, however, someone looks into your brain and sees how biased you are, and deliberately rejects you if you're too rational. There's no arguing around it.

But, yes, maybe this is too similar to stuff we've already gone over.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-16T18:34:44.918Z · LW(p) · GW(p)

In this scenario, however, someone looks into your brain and sees how biased you are, and deliberately rejects you if you're too rational.

How silly of them.

comment by timtyler · 2009-06-17T09:47:37.015Z · LW(p) · GW(p)

Newcomb's problem and taking revenge seem like specific instances of the more general problem of making credible commitments. Keeping promises is also a candidate for an example taken from this class of problems.

comment by Psychohistorian · 2009-06-19T05:47:37.163Z · LW(p) · GW(p)

The idea in general is interesting, but this argument itself is (still) rather incoherent. "Irrational" in this context does not seem to be different from "having an unusually high coefficient for the utility of the deciding entity," and if it does, I'm really curious as to what that is. If you give actual examples of what a real irrational person would be like, it would make this argument much more coherent. Basically, you seem to be baking part of a utility function into rational and irrational, which seems wholly inappropriate.

If your idea of rational v. irrational is that if Bob has to decide between hiring "Joe" and "Joe who really, really values Bob's utility a whole lot but is in no other respect different," then it seems like you don't have much of a point. Employers/lovers/decision makers will not be facing this dichotomy, and so it is of no real concern.

Also, employers care not about how much you value their goals, but how well you accomplish them, and rationality seems to be a relevant positive in this respect.

Oh, and I'm pretty sure I can want to help my spouse accomplish her goals without being a subroutine of my spouse. The whole subroutine argument seems convoluted and, well, unrelated to rationality.

comment by SilasBarta · 2009-06-17T16:47:00.353Z · LW(p) · GW(p)

Although the consensus seems to be that this post by PhilGoetz is an unhelpful, uninformed one, I believe I got something out of it:

1) I had never before even realized the similarity between Newcomb's problem and revenge. Sorry. bows head

2) It suggests to me a better way to phrase the problem:

a) Replace Omega with "someone who's really good at reading people" and give example of how she (makes more sense as a she) caught people in lies based on subtle facial expressions, etc.

b) Restate the question as "Are you the sort of person who would one-box?" Or "Do/should you make it a habit of one-boxing in cases like this?" rather than "Would you one-box?" This subtle difference is important.

If the above is all obvious, it's because I've done a poor job following the Newcomb threads, as many here seem to think Phil did, since they didn't interest me.

comment by Annoyance · 2009-06-17T13:44:04.778Z · LW(p) · GW(p)

For instance, revenge is irrational,

Says whom? It seems to me that revenge can easily be rationally justifiable, even if what motivates people to actually do it is usually non-rational emotional states.

It's rational for birds to build nests, but they don't do so because they possess a rational understanding of why. They don't use rationality. They don't have it. But the rational justification for their actions still exists.

Replies from: lockeandkeynes
comment by lockeandkeynes · 2011-01-01T16:31:13.743Z · LW(p) · GW(p)

I think the idea is that revenge requires time, effort, and resources, while breeding further ill will between you and the person you take revenge against, leaving you with a larger pool of people who would not wish to help you, or who would work against you.

Alternately, if you were to try and make the same person like you better (though that's not always possible), it would confer more advantages to you generally.

Replies from: David_Gerard
comment by David_Gerard · 2011-01-01T17:21:54.613Z · LW(p) · GW(p)

It would, of course, depend on the situation. Perhaps not the word "revenge", but "retribution" can indeed be a calculated and well thought out effective response. Note "response", not "reaction". Revenge is a heuristic reaction, not a thought-out response.

comment by Will_Sawin · 2011-01-01T17:52:53.463Z · LW(p) · GW(p)

Even if Phil's specific examples don't work, the general point does. There exists a situation in which rationality must lose:

An agent, because it is irrational or has strange motivations or for another reason, chooses to reward those agents that are irrational and punish those that are rational. It is smart enough to tell the difference.

comment by hrishimittal · 2009-06-16T18:38:18.646Z · LW(p) · GW(p)

When giving a security clearance, for example, you would rather give it to someone who loved his country emotionally, than to someone who loved his country rationally;

Can you clarify how you distinguish between loving one's country emotionally as opposed to rationally?

comment by billswift · 2009-06-16T19:48:16.854Z · LW(p) · GW(p)

while the rational person may suddenly reach an opposite conclusion on learning one new fact.

I would have serious doubts about the rationality of anyone who made significant changes in their beliefs based on one new fact.

Replies from: asciilifeform, PhilGoetz
comment by asciilifeform · 2009-06-16T20:06:13.711Z · LW(p) · GW(p)

Regardless of exactly what the new fact was?

Replies from: billswift
comment by billswift · 2009-06-18T09:33:21.899Z · LW(p) · GW(p)

Yes. A new fact is much more likely to be wrong or misunderstood than the entirety of your previous experience. Updating is a cumulative process.

comment by PhilGoetz · 2009-06-17T16:28:25.593Z · LW(p) · GW(p)

True. The "rationalists can't agree to disagree" theorem actually applies better to data and arguments within a single person's head. When I'm presented with an apparent fact - such as good solid experimental data indicating that married couples can communicate via ESP - that would make me make sudden significant changes, I generally ignore it, because I update the strength of my belief in that fact with respect to the strength of all my other beliefs, just like in the inter-person case of updating estimates in response to other people's estimates.