The Costs of Rationality
post by RobinHanson · 2009-03-03T18:13:17.465Z · LW · GW · Legacy · 81 comments
The word "rational" is overloaded with associations, so let me be clear: to me [here], more "rational" means better believing what is true, given one's limited info and analysis resources.
Rationality certainly can have instrumental advantages. There are plenty of situations where being more rational helps one achieve a wide range of goals. In those situations, "winners", i.e., those who better achieve their goals, should tend to be more rational. In such cases, we might even estimate someone's rationality by looking at his or her "residual" belief-mediated success, i.e., after explaining that success via other observable factors.
But note: we humans were designed in many ways not to be rational, because believing the truth often got in the way of achieving goals evolution had for us. So it is important for everyone who intends to seek truth to clearly understand: rationality has costs, not only in time and effort to achieve it, but also in conflicts with other common goals.
Yes, rationality might help you win that game or argument, get promoted, or win her heart. Or more rationality for you might hinder those outcomes. If what you really want is love, respect, beauty, inspiration, meaning, satisfaction, or success, as commonly understood, we just cannot assure you that rationality is your best approach toward those ends. In fact we often know it is not.
The truth may well be messy, ugly, or dispiriting; knowing it may make you less popular, loved, or successful. These are actually pretty likely outcomes in many identifiable situations. You may think you want to know the truth no matter what, but how sure can you really be of that? Maybe you just like the heroic image of someone who wants the truth no matter what; or maybe you only really want to know the truth if it is the bright shining glory you hope for.
Be warned; the truth just is what it is. If just knowing the truth is not reward enough, perhaps you'd be better off not knowing. Before you join us in this quixotic quest, ask yourself: do you really want to be generally rational, on all topics? Or might you be better off limiting your rationality to the usual practical topics where rationality is respected and welcomed?
81 comments
Comments sorted by top scores.
comment by swestrup · 2009-03-03T22:40:27.551Z · LW(p) · GW(p)
This parallels a discussion I've had numerous times in the field of computer games. I've had any number of artists / scripters / managers say that what a computer game needs is not a realistic physics engine, but a cinematic physics engine. They don't want it to be right, they want it to be pretty.
But you'll find that "cinematic style" isn't consistent, and if you start from that basis, you won't be able to make boring, everyday events look realistic; you'll have to add special-case patch upon patch, and you'll never get it right in the end. The cinematic stuff will look right, but nothing else will.
If you start with a rigidly correct physics engine (or at least one within the current state of the art), you'll find it MUCH easier to layer cinematic effects on top when asked for. It's usually far simpler than the other way around.
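A minimal sketch of that layering, assuming a hypothetical toy engine (none of these names come from a real physics library): the physically consistent step runs first, and a render-only "cinematic" pass reads the result without feeding back into the simulation.

```python
# Sketch of "correct physics first, cinematic layer on top".
# All names are hypothetical; no real engine's API is implied.

GRAVITY = -9.81  # m/s^2


def physics_step(pos, vel, dt):
    """One physically consistent integration step for a bouncing point mass."""
    vel += GRAVITY * dt
    pos += vel * dt
    if pos < 0.0:                   # hit the ground
        pos, vel = 0.0, -vel * 0.6  # bounce, losing some energy
    return pos, vel


def cinematic_squash(pos, vel):
    """Render-only effect: squash-and-stretch scaled by speed.
    It reads the simulated state but never writes back to it, so the
    underlying physics stays consistent whether or not this layer runs."""
    stretch = 1.0 + min(abs(vel) * 0.05, 0.5)
    return {"y": pos, "scale_y": stretch, "scale_x": 1.0 / stretch}


pos, vel = 10.0, 0.0
for _ in range(120):
    pos, vel = physics_step(pos, vel, dt=1 / 60)
    frame = cinematic_squash(pos, vel)  # easy to disable without touching the physics
```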
In an analogous way, I find that rationality makes it far easier for one to achieve one's goals, EVEN WHEN SAID GOALS ARE NON-RATIONAL. Now, that may mean that the rational thing to do in some cases is to lie to people about your beliefs, or to present yourself in a non-natural way. If you end up being uncomfortable with that, then you need to reassess what, exactly, your goals are, and what you are willing to do to achieve them. This may not be easy, but it's far simpler than going the route of ignorance and emotionally-driven actions and then trying to put your life back together when you don't end up where you thought you would.
Replies from: Vladimir_Nesov, AnnaSalamon, None
↑ comment by Vladimir_Nesov · 2009-03-04T00:20:26.836Z · LW(p) · GW(p)
You'll need to clarify what you mean by "non-rational goals".
Replies from: swestrup
↑ comment by swestrup · 2009-03-05T18:50:01.313Z · LW(p) · GW(p)
Yes, I suppose I should. By a non-rational goal I meant a goal that was not necessarily to my benefit, or the benefit of the world, a goal with a negative net sum worth. Things like poisoning a reservoir or marrying someone who will make your life miserable.
Replies from: Vladimir_Nesov, Nick_Tarleton
↑ comment by Vladimir_Nesov · 2009-03-06T12:25:04.860Z · LW(p) · GW(p)
You decided to try achieving that "non-rational" goal, so it must be to your benefit (at least, you must believe so).
An example that I usually give at this point is as follows. Is it physically possible that in the next 30 seconds I'll open the window and jump out? Can I do it? Since I don't want to do it, I won't do it, and therefore it can not happen in reality. The concept of trying to do something you'll never want to do is not in reality either.
Replies from: swestrup
↑ comment by swestrup · 2009-03-10T00:57:44.440Z · LW(p) · GW(p)
You decided to try achieving that "non-rational" goal, so it must be to your benefit (at least, you must believe so).
Yes, exactly. The fact that you think it's to your benefit, but it isn't, is the very essence of what I mean by a non-rational goal.
Replies from: Yosarian2
↑ comment by Yosarian2 · 2013-01-28T03:09:51.254Z · LW(p) · GW(p)
That might actually be the main cost of rationality. You may have goals that will hurt you if you actually achieve them, and by not being rational, you manage to not achieve those goals, making your life better. Perhaps, in fact, people avoid rationality because they don't really want to achieve those goals, they just think they want to.
There's an Amanda Palmer song where the last line is "I don't want to be the person that I want to be."
Of course, if you become rational enough, you may be able to untangle those confused goals and conflicting desires. There's a dangerous middle ground, though, where you may get just better at hurting yourself.
↑ comment by Nick_Tarleton · 2009-03-05T18:58:28.874Z · LW(p) · GW(p)
"Not to my benefit" is ambiguous; I assume you mean working against other goals, like happiness or other people not dying. But since optimizing for one thing means not optimizing for others, every goal has this property relative to every other (for an ideal agent). Still, the concept seems very useful; any thoughts on how to formalize it?
Replies from: swestrup
↑ comment by AnnaSalamon · 2009-03-04T00:22:08.446Z · LW(p) · GW(p)
This is a plausible claim, but do you have concrete details, proposed mechanisms, or examples from your own or others lives to back it up? "I find that rationality makes it far easier" is a promising-sounding claim, and it'd be nice to know the causes of your belief.
Replies from: swestrup
↑ comment by swestrup · 2009-03-05T18:56:23.535Z · LW(p) · GW(p)
Hmm. This is a simple question that seems difficult to articulate an answer to. I think the heart of my argument is that it is very difficult to achieve any goal without planning, and planning (to be effective) relies upon a true and consistent set of beliefs and logical inferences from them. This is pretty much the definition of rationality.
Now, it's not the case that the opposite is random activity which one hopes will bring about the correct outcome. To be driven by emotions, seat-of-the-pants decisions and gut instincts is to allow an evolutionarily-derived decision-making process to run your life. It's not a completely faulty process, but it did not evolve for the kinds of situations modern people find themselves in, so, in practice, it's not hard to do better by applying rational principles.
↑ comment by [deleted] · 2010-12-08T03:43:44.246Z · LW(p) · GW(p)
As I understand, computer animation (as in Pixar) has built-in capabilities for the physically impossible. For example, there's no constraint in the software that solid bodies have to have constant volume -- when Ratatouille bounces around, he's changing volume all the time for extra expressiveness and dramatic effect. In that way, "cinematic" reality is simpler than realistic reality -- though of course it takes more artistry on the part of the animator to make it look good.
Replies from: wedrifid, Alicorn
↑ comment by wedrifid · 2010-12-08T04:16:37.876Z · LW(p) · GW(p)
As I understand, computer animation (as in Pixar) has built-in capabilities for the physically impossible. For example, there's no constraint in the software that solid bodies have to have constant volume
That isn't technically impossible. ;)
comment by pwno · 2009-03-04T08:26:07.232Z · LW(p) · GW(p)
I always made a distinction between rationality and truth-seeking. Rationality is only intelligible in the context of a goal (whether that goal be rational or irrational). Now, one who acts rationally, given their information set, will choose the best plan of action for achieving their goal. Part of being rational is knowing which goals will maximize their utility function.
My definition of truth-seeking is basically Robin's definition of "rational." I find it hard to imagine a time where truth-seeking is incompatible with acting rationally (the way I defined it). Can anyone think of an example?
Replies from: timtyler, mark_spottswood, TobyBartels
↑ comment by timtyler · 2009-03-04T14:17:51.092Z · LW(p) · GW(p)
Well, sure. Repeating other posts - but one of the most common examples is when an agent's beliefs are displayed to other agents. Imagine that all your associates think that there is a Christian god. This group includes all your prospective friends and mates. Do you tell them you are an agnostic/atheist - and that their views are not supported by the evidence? No, of course not! However, you had better not lie to them either - since most humans lie so poorly. The best thing to do is probably to believe their nonsense yourself.
Replies from: Yvain, steven0461
↑ comment by Scott Alexander (Yvain) · 2009-03-04T16:34:03.667Z · LW(p) · GW(p)
Tim, that's an excellent argument for why rationality isn't always the winning strategy in real life. People have been saying this sort of thing all week, but it was your "most humans lie so poorly" comment that really made it click for me, especially in the context of evolutionary psychology.
I'd really like to hear one of the "rationalists should always win" people address this objection.
Replies from: Johnicholas
↑ comment by Johnicholas · 2009-03-04T19:16:03.077Z · LW(p) · GW(p)
We're talking about at least two different notions of the word "rational":
1. Robin Hanson used the definition at the top of this post, regarding believing the truth. There are social/evolutionary costs to that, partly because humans lie poorly.
2. The causal decision theorists' definition that Eliezer Yudkowsky was annoyed by. CDT defines rationality to be a specific method of deciding what action to take, even though this leads to two-boxing (and losing) on Newcomb's problem. Yudkowsky's objection, summarized by the slogan "Rationalists should WIN," was NOT a definition. It is a quality of his informal concept of rationality which the CDT definition failed to capture.
The claim "rationalists should always win" comes from taking Yudkowsky's slogan as a definition of rationality. If that is the definition that you are using, then the claim is tautological.
Please note that I don't endorse this misreading of Yudkowsky's post, I'm just trying to answer your question.
Replies from: Yvain
↑ comment by Scott Alexander (Yvain) · 2009-03-04T23:01:20.469Z · LW(p) · GW(p)
Thanks, John.
As you say, defining rationality as winning and then saying rationalists always win is a tautology. But aside from your two definitions, there's a third definition: the common definition of rationality as basing decisions on evidence, Bayes, and logic. So as I see it, supporters of "rationalists always win" need to do one of the following:
1. Show that the winning definition is the same as the Bayes/logic/evidence definition. Tim's counterexample of the religious believer who's a poor liar makes me doubt this is possible.
2. Stop using "rationality" to refer to things like the Twelve Virtues and Bayesian techniques, since these virtues and techniques sometimes lose and are therefore not always rational.
3. Abandon "rationalists always win" in favor of Robin's "rationalists always seek the truth". I think that definition is sufficient to demonstrate that a rationalist should one-box on Newcomb's problem anyway. After all, if it's true that one-boxing is the better result, a seeker of truth should realize that and decide to one-box.
↑ comment by Kenny · 2009-03-07T23:44:19.171Z · LW(p) · GW(p)
There are no supporters of "rationalists always win" – the slogan is "rationalists should win". Long-term / on-average, it's rational to expect a high correlation between rationality and success.
[1] – I'd bet that the rationalist strategy fares well against other heuristics; let's devise a good test. There may always be an effective upper bound to the returns to increasing rationality in any community, but reality is dangerous – I'd expect rationalists to fare better.
[2] – Winning or losing one 'round' isn't sufficient grounds to declare a strategy, or particular decisions, as being non-rational. Buying lottery tickets isn't rational because some people win. And sometimes, winning isn't possible.
[3] – I like "rationalists always seek the truth" but would add "... but they don't seek all truths."
↑ comment by steven0461 · 2009-03-04T15:21:56.403Z · LW(p) · GW(p)
You realize, of course, that under this policy everyone stays Christian forever.
Replies from: timtyler, byrnema
↑ comment by byrnema · 2009-04-14T15:57:12.536Z · LW(p) · GW(p)
Interesting, if rationality corresponds to winning, and Christianity is persistent, then we should give up on trying to eliminate Christianity. Not merely because it is a waste of resources, but also because their belief in God is not directly tied to winning and losing. Some beliefs lead to winning (philanthropy, community) and some beliefs lead to losing (insert any one of many here). We should focus energies on discouraging the losing beliefs with whatever means at our disposal, including humoring their belief in God in specific arguments. (For example, we could try and convince a bible literalist that God would forgive them for believing evolution because he deliberately gave us convincing evidence of it.) -- learning as I go, I just learned such arguments are called "Pragmatism".
Replies from: byrnema
↑ comment by byrnema · 2009-04-14T16:45:47.900Z · LW(p) · GW(p)
I will likely delete this post now that it has been down-voted. I wrote it as a natural response to the information I read and am not attached to it. Before deleting, I'm curious if I can solicit feedback from the person who down-voted me. Because the post was boring?
↑ comment by mark_spottswood · 2009-03-05T18:08:05.050Z · LW(p) · GW(p)
Pwno said: I find it hard to imagine a time where truth-seeking is incompatible with acting rationally (the way I defined it). Can anyone think of an example?
The classic example would invoke the placebo effect. Believing that medical care is likely to be successful can actually make it more successful; believing that it is likely to fail might vitiate the placebo effect. So, if you are taking a treatment with the goal of getting better, and that treatment is not very good (but it is the best available option), then it is better from a rationalist goal-seeking perspective to have an incorrectly high assessment of the treatment's possibility of success.
This generalizes more broadly to other areas of life where confidence is key. When dating, or going to a job interview, confidence can sometimes make the difference between success and failure. So it can pay, in such scenarios, to be wrong (so long as you are wrong in the right way).
It turns out that we are, in fact, generally optimized to make precisely this mistake. Far more people think they are above average in most domains than hold the opposite view. Likewise, people regularly place a high degree of trust in treatments with a very low probability of success, and we have many social mechanisms that try and encourage such behavior. It might be "irrational" under your usage to try and help these people form more accurate beliefs.
↑ comment by TobyBartels · 2010-08-27T19:20:47.921Z · LW(p) · GW(p)
I like to distinguish information-theoretic rationality from decision-theoretic rationality. (But these are rather long terms.) Often on this blog it's unclear which is meant (although you and Robin did make it clear.)
Replies from: thomblake, Pavitra
↑ comment by thomblake · 2010-08-27T19:36:33.960Z · LW(p) · GW(p)
The relevant articles: "What do we mean by rationality" and the wiki entry.
Replies from: TobyBartels
↑ comment by TobyBartels · 2010-08-27T21:54:03.763Z · LW(p) · GW(p)
Yeah, I'd just been reading those, but they don't fix the terminology either.
comment by MichaelHoward · 2009-03-03T22:43:23.189Z · LW(p) · GW(p)
Willful stupidity is often easier and more profitable in the short run, but you just might be picking up pennies in front of a steamroller.
I think it's best to go out of your way to believe the truth, even though you won't always succeed. I'm very suspicious when I'm tempted to do otherwise; it's usually for unwise or unhealthy reasons. There are exceptions, but they're much rarer than we'd like to think.
Replies from: igoresque
comment by Vladimir_Nesov · 2009-03-03T20:23:13.830Z · LW(p) · GW(p)
Learning many true facts that are not Fun and are morally irrelevant (e.g. learning as many digits of pi as you can by spending your whole life on the activity), because this way you can avoid thinking about facts that are much less certain, shouldn't be considered rational. Rationality intrinsically needs to serve a purpose, the necessity for this is implicit even in apparently goal-neutral definitions like the one Robin gave in the post.
Another problem, of course, is that you don't know the cost of irrationality if you are irrational.
Replies from: timtyler
↑ comment by timtyler · 2009-03-03T23:21:27.562Z · LW(p) · GW(p)
I don't see how "seeking truth" is "goal-neutral". It is a goal much like any other.
The main thing I feel the urge to say about "seeking truth" is that it usually isn't nature's goal. Nature normally cares about other things a lot more than the truth.
Replies from: Kenny
↑ comment by Kenny · 2009-03-07T23:54:16.841Z · LW(p) · GW(p)
If nature can be said to have goals, it has "seeking truth" insofar as anything, including ourselves, does.
Replies from: timtyler
↑ comment by timtyler · 2009-03-26T21:51:24.864Z · LW(p) · GW(p)
Perhaps I was too brief. Organisms are goal oriented - or at least they look as though they are. Teleonomy, rather than teleology, technically, of course.
Organisms act as though their primary goal is to have grandchildren. Seeking the truth is a proximate goal - and not an especially high-priority one.
Prioritising seeking the truth more highly than having babies would be a bizarre and unnatural thing for any living organism to do. I have no idea why anyone would advocate it - except, perhaps as part of some truth-worshiping religion.
comment by Johnicholas · 2009-03-03T18:48:20.745Z · LW(p) · GW(p)
Are commitment mechanisms rational?
A malicious genius is considering whether to dose the dashing protagonist with a toxin. The toxin is known to be invariably fatal unless counteracted, and the malicious genius has the only antidote. The antagonist knows that the protagonist will face a choice: Either open a specific locked box containing, among other things, the antidote - surviving, but furthering the antagonist's wicked plan, or refuse to open the box, dying, and foiling the plan.
We analyze this as an extensive form game: The antagonist has a choice to dose or not to dose. If dose, then protagonist gets a choice, to die or not to die.
If only the protagonist was not so very very rational! Because the protagonist is known to be very very rational, the antagonist knows that the protagonist will choose to live, and thereby further the antagonist's plan.
A commitment mechanism, then, is the protagonist (rationally) sabotaging their rationality before the antagonist has an opportunity to dose. The "irrational revenge circuit" will revenge harm at any cost, even an irrationally high cost. Even antagonists step carefully around people with revenge circuits installed. (Yes, evolution has already installed some of these.)
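A backward-induction sketch of this game, with invented payoff numbers, shows the effect of the commitment: without it, the antagonist expects the protagonist to open the box and therefore doses; with a credible commitment to refuse, dosing no longer pays.

```python
# Backward-induction sketch of the dose/antidote game.
# Payoff pairs are (protagonist, antagonist) and are invented for illustration.
PAYOFFS = {
    ("no_dose",):         (2, 0),    # status quo
    ("dose", "open_box"): (1, 3),    # protagonist lives, wicked plan furthered
    ("dose", "refuse"):   (0, -1),   # protagonist dies, plan foiled
}

def protagonist_move(committed_to_refuse):
    if committed_to_refuse:
        return "refuse"  # the prior commitment overrides the in-the-moment choice
    # Ordinary backward induction: take whichever move pays the protagonist more.
    return max(["open_box", "refuse"], key=lambda m: PAYOFFS[("dose", m)][0])

def antagonist_move(committed_to_refuse):
    reply = protagonist_move(committed_to_refuse)
    return "dose" if PAYOFFS[("dose", reply)][1] > PAYOFFS[("no_dose",)][1] else "no_dose"

print(antagonist_move(committed_to_refuse=False))  # -> dose (no commitment)
print(antagonist_move(committed_to_refuse=True))   # -> no_dose (the threat deters)
```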
Replies from: Vladimir_Nesov, Jack
↑ comment by Vladimir_Nesov · 2009-03-03T20:08:49.681Z · LW(p) · GW(p)
A decision theory that doesn't need to go through the motions of making a commitment outside the cognitive algorithm is superior. Act as if you have made a commitment in all the situations where you benefit from having made the commitment. Actually make commitment only if it's necessary to signal the resulting decision.
(Off-point:) The protagonist may well be rational about sacrificing his life, if he cares about stopping the antagonist's plan more.
Replies from: Benja
↑ comment by Benya (Benja) · 2009-03-03T21:39:25.334Z · LW(p) · GW(p)
I believe I agree with the intuition. Does it say anything about a problem like the above, though? Does the villain decide not to poison the hero, because the hero would not open the box even if the villain decided to poison the hero? Or does the hero decide to open the box, because the villain would poison the hero even if the hero decided not to open the box? Is there a symmetry-breaker here? -- Do we get a mixed strategy à la the Nash equilibrium for Rock-Paper-Scissors, where each player makes each choice with 50% probability?
(I'm assuming we're assuming the preference orderings are: The hero prefers no poison to opening the box to dying; the villain prefers the box opened to no poison to the hero dying [because the latter would be a waste of perfectly good poison].)
Replies from: Benja
↑ comment by Benya (Benja) · 2009-03-06T10:28:00.252Z · LW(p) · GW(p)
I'm not sure why I'm getting downmodded into oblivion here. I'll go out on a limb and assume that I was being incomprehensible, even though I'll be digging myself in deeper if that wasn't the reason...
In classical game theory (subgame-perfect equilibrium), if you eat my chocolate, it is not rational for me to tweak your nose in retaliation at cost to myself. But if I can first commit myself to tweaking your nose if you eat my chocolate, it is no longer rational for you to eat it. But, if you can even earlier commit to definitely eating my chocolate even if I commit to then tweaking your nose, it is (still in classical game theory) no longer rational for me to commit to tweaking your nose! The early committer gets the good stuff.
Eliezer's arguments have convinced me that a better decision theory would work like Vladimir says, acting as if you had made a commitment in all situations where you would like to make a commitment. But as far as I can see, both the nose-tweaker and the chocolate-eater can do that -- speaking in intuitive human terms, it comes down to who is more stubborn. So what does happen? Is there a symmetry breaker? Can it happen that you commit to eating my chocolate, I commit to tweaking your nose, and we end up in the worst possible world for both of us? (Well, I'm pretty confident that that's not what Eliezer's theory (not shown) would do.)
Borrowing from classical game theory, perhaps we say that one of the two commitment scenarios happens, but we can't say which (1. you eat my chocolate and I don't tweak your nose; 2. you don't eat my chocolate, which is a good thing because I would tweak your nose if you did). In the simple commitment game we're considering here, this amounts to considering all Nash equilibria instead of only subgame perfect equilibria (Nash = "no player can do better by changing their strategy" -- but I'm allowed to counterfactually tweak your nose at cost to myself if we don't actually reach that part of the game tree at equilibrium). But of course, if you accept Eliezer's arguments, Nash equilibrium is wrong in general, and in any case, it's not obvious to me if "either of the two scenarios can happen" is the right solution to this game.
To make the implicit motivation behind these two comments explicit: I'm worried that there's a danger of writing "the rightful owner will keep their chocolate" on the bottom line, noticing that a proper decision theory would allow them to retaliate, and saying "done!" without even considering whether the same logic allows the nefarious villain to spitefully commit to eating the chocolate anyhow. If the theory says that either of the two commitment outcomes may happen, ok, but I think it deserves mention. And if the theory says is something else, I want to know that too. :-)
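For concreteness, a strategic-form sketch of the chocolate game with invented payoffs: enumerating all Nash equilibria, rather than only the subgame-perfect one, yields exactly the two commitment scenarios described above.

```python
from itertools import product

# Invented payoffs for the chocolate / nose-tweaking game, as (eater, owner):
# the chocolate is worth 1 to whoever ends up with it; a tweak costs the
# eater 3 and costs the owner 1 to carry out.
def payoff(eat, tweak_if_eaten):
    eater = (1 if eat else 0) - (3 if eat and tweak_if_eaten else 0)
    owner = (-1 if eat else 0) - (1 if eat and tweak_if_eaten else 0)
    return eater, owner

def is_nash(eat, tweak):
    e, o = payoff(eat, tweak)
    return (payoff(not eat, tweak)[0] <= e       # eater can't gain by switching
            and payoff(eat, not tweak)[1] <= o)  # owner can't gain by switching

for eat, tweak in product([True, False], repeat=2):
    if is_nash(eat, tweak):
        print(f"Nash equilibrium: eat={eat}, tweak_if_eaten={tweak}")
# Prints two equilibria: (eat=True, tweak_if_eaten=False) and
# (eat=False, tweak_if_eaten=True) -- the two commitment scenarios; only the
# first is subgame perfect, since off-path tweaking isn't credible.
```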
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2009-03-06T19:47:40.322Z · LW(p) · GW(p)
You can't argue with a rock, so you can't stop a rock-solid commitment, even with your own rock-solid commitment. But you can solve the game given the commitments, with the outcome for each side. If this outcome is inferior to other possible commitments, then those other commitments should be used instead.
So, if the hero expects that his commitment to die will still result in villain making him die, this commitment is not a good idea and shouldn't be made (for example, maybe the villain just wants to play the game). The tricky part is that if the hero expected his commitment to stop the villain, he still needs to dutifully die once the villain surprised him, to the extent this would be necessary to communicate the commitment to the villain prior to his decision, since it's precisely this communicated model of behavior that was supposed to stop him.
comment by teageegeepea · 2009-03-04T04:53:47.365Z · LW(p) · GW(p)
You seem to be taking the opposite tack as in this video, where rationality was best for everyone no matter their cause.
Replies from: RobinHanson
↑ comment by RobinHanson · 2009-03-04T15:48:44.139Z · LW(p) · GW(p)
I usually use "rationality" the way most economists use the word, but Eliezer has chosen to use the word differently here, and I am trying to accommodate him.
Replies from: timtyler
comment by Annoyance · 2009-03-04T19:06:19.872Z · LW(p) · GW(p)
A rational belief isn't necessarily correct or true. Rational beliefs are justified, in that they logically follow from premises that are accepted as true. In the case of probabilistic statements, a rational strategy is one that maximizes the chance of being correct or otherwise reaching a defined goal state. It doesn't have to work or be correct in any ultimate sense to be rational.
If I play the lottery and win, playing the lottery turned out to be a way to get lots of money. It doesn't mean that playing the lottery was a rational strategy. If I make a reasonable investment and improbable misfortune strikes, losing the money, that doesn't mean that the investment wasn't rational.
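To put numbers on that contrast (the figures below are made up, not any real lottery or investment):

```python
# Illustrative numbers only.
ticket_price = 2.0
p_win, jackpot = 1 / 300_000_000, 100_000_000
lottery_ev = p_win * jackpot - ticket_price
# lottery_ev is about -1.67: negative expected value, even though some tickets do win.

stake, payout_if_ok, p_misfortune = 1000.0, 1070.0, 0.05
investment_ev = (1 - p_misfortune) * payout_if_ok - stake
# investment_ev is about +16.5: positive expected value, even though improbable
# misfortune sometimes wipes out the stake.
```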
Replies from: grobstein
↑ comment by grobstein · 2009-03-05T20:55:40.749Z · LW(p) · GW(p)
This has no bearing on the point above. In essence you're just rephrasing Robin's definition, "better believing what is true, given one's limited info and analysis resources." The disposition best-calculated to lead to true beliefs will not produce true beliefs in every instance, because true beliefs will not always be justified by available evidence.
So what?
comment by Jack · 2009-03-03T20:19:31.597Z · LW(p) · GW(p)
Places where rationality* is not welcome:
Churches, political parties, Congress, family reunions, dates, cable news, bureaucracy, casinos...
*Of course rationality might dictate deception - but I take it lying confers some cost on the liar.
Please list the rest. Also, who here is involved with any of the things on the list? Am I wrong to include something, and if not, how do you deal with being rational in a place that discourages it?
Replies from: christopherj
↑ comment by christopherj · 2013-10-11T00:31:22.209Z · LW(p) · GW(p)
I would say rationality is welcome in those places, conditional on it not opposing their goals. It could be argued that opposing your own goals isn't rational -- if acting rationally means you lose, is it really rationality? I guess this is another case where rationality as truth-seeking and rationality as goal-following can conflict. In fact there are many places where truth can be enemy to varying degrees, places where incomplete truth, misleading truth, even outright lies can be advantageous to a goal.
For example, in chess it is disadvantageous to explain why you did a move or what you plan to do next, even if your opponent explicitly asked you (so that you either disadvantage yourself or refuse to tell the truth). In fact in almost any non-cooperative interaction you could be disadvantaged by your opponent knowing certain things. Even when mostly cooperating, there are also non-cooperative elements. Even when you are alone, knowing the truth about your chances of success can be discouraging, and you have to account for the fact that you're not perfectly rational and so being discouraged from a course of action might mean you don't take it even if it is the best option. This is probably why self-deception for overestimating one's abilities is so rampant.
Replies from: Vaniver
↑ comment by Vaniver · 2013-10-11T01:13:01.751Z · LW(p) · GW(p)
I would say rationality is welcome in those places, conditional on it not opposing their goals.
I suspect that Jack is commenting on the likelihood that the condition is satisfied. The interests of organizers and participants are likely to conflict in many of those places - casinos being perhaps the most obvious example - and thus it furthers organizer-goals to insist on or encourage irrationality in participants.
comment by timtyler · 2009-03-03T19:01:28.442Z · LW(p) · GW(p)
Re: The word "rational" is overloaded with associations, so let me be clear: to me, more "rational" means better believing what is true, given one's limited info and analysis resources.
Ouch! A meta discussion, perhaps - but why define "rational" that way? Isn't the following much more standard?
"In economics, sociology, and political science, a decision or situation is often called rational if it is in some sense optimal, and individuals or organizations are often called rational if they tend to act somehow optimally in pursuit of their goals. [...] In this concept of "rationality", the individual's goals or motives are taken for granted and not made subject to criticism, ethical or otherwise. Thus rationality simply refers to the success of goal attainment, whatever those goals may be."
Replies from: thomblake
↑ comment by thomblake · 2009-03-03T19:04:07.499Z · LW(p) · GW(p)
Indeed - that was my first thought, but I was waiting till I figured out a good way of stating it. RH's definition of 'rational' seems to go against the usual definition presented above, while EY's seems to embrace it.
Replies from: Eliezer_Yudkowsky, timtyler
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-03T19:29:13.561Z · LW(p) · GW(p)
My definition differs from the one in Wikipedia because I require that your goals not call for any particular ritual of cognition. When you care more about winning than about any particular way of thinking - and "winning" is not defined in such a way as to require in advance any particular method of thinking - then you are pursuing rationality.
This, in turn, ends up implying epistemic rationality: if the definition of "winning" doesn't require believing false things, then you can generally expect to do better (on average) by believing true things than false things - certainly in real life, despite various elaborate philosophical thought experiments designed from omniscient truth-believing third-person standpoints.
Conversely you can start with the definition of rational belief as accuracy-seeking, and get to pragmatics via "That which can be destroyed by the truth should be" and the notion of rational policies as those which you would retain even given an epistemically rational prediction of their consequences.
Replies from: RobinHanson, mark_spottswood, CronoDAS
↑ comment by RobinHanson · 2009-03-03T23:39:15.167Z · LW(p) · GW(p)
For most people, most of the things they want do in fact prefer some ways of thinking, so your definition requires us to consider a counterfactual pretty far from ordinary experience. In contrast, defining in terms of accuracy-seeking is simple and accessible. If this site is going to use the word "rational" a lot, we'd better have a simple clear definition or we'll be arguing this definitional stuff endlessly.
Replies from: Eliezer_Yudkowsky, Johnicholas
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-04T03:07:54.213Z · LW(p) · GW(p)
I usually define "rationality" as accuracy-seeking whenever decisional considerations do not enter. These days I sometimes also use the phrase "epistemic rationality".
It would indeed be more complicated if we began conducting the meta-argument that (a) an ideal Bayesian not faced with various vengeful gods inspecting its algorithm should not decide to rewrite its memories to something calibrated away from what it originally believed to be accurate, or that (b) human beings ought to seek accuracy in a life well-lived according to goals that include both explicit truth-seeking and other goals not about truth.
But unless I'm specifically focused on this argument, I usually go so far as to talk as if it resolves in favor of epistemic accuracy, that is, that pragmatic rationality is unified with epistemic rationality rather than implying two different disciplines. If truth is a bad idea, it's not clear what the reader is doing on Less Wrong, and indeed, the "pragmatic" reader who somehow knows that it's a good idea to be ignorant, will at once flee as far as possible...
Replies from: RobinHanson, timtyler
↑ comment by RobinHanson · 2009-03-04T15:57:05.611Z · LW(p) · GW(p)
You started off using the word "rationality" on this blog/forum, and though I had misgivings, I tried to continue with your language. But most of the discussion of this post seems to be distracted by my having tried to clarify that in the introductory sentence. I predict we won't be able to get past this, and so from now on I will revert to my usual policy of avoiding overloaded words like "rationality."
↑ comment by timtyler · 2009-03-04T15:07:14.785Z · LW(p) · GW(p)
If truth is a bad idea, it's not clear what the reader is doing on Less Wrong [...]
Believing the truth is usually a good idea - for real organisms.
However, I don't think rationality should be defined in terms of truth seeking. For one thing, that is not particularly conventional usage. For another, it seems like a rather arbitrary goal. What if a Buddhist claims that rational behaviour typically involves meditating until you reach nirvana? On what grounds would that claim be dismissed? That seems to me to be an equally biologically realistic goal.
I think that convention has it right here - the details of the goal are irrelevances to rationality which should be factored right out of the equation. You can rationally pursue any goal - without any exceptions.
↑ comment by Johnicholas · 2009-03-04T02:25:10.720Z · LW(p) · GW(p)
I'm confused by the phrase "most of the things they want do in fact prefer some ways of thinking".
I thought that EY was saying that he requires goals like "some hot chocolate" or "an interesting book", rather than goals like: "the answer to this division problem computed by the Newton-Raphson algorithm"
↑ comment by mark_spottswood · 2009-03-05T18:22:58.525Z · LW(p) · GW(p)
Eliezer said: This, in turn, ends up implying epistemic rationality: if the definition of "winning" doesn't require believing false things, then you can generally expect to do better (on average) by believing true things than false things - certainly in real life, despite various elaborate philosophical thought experiments designed from omniscient truth-believing third-person standpoints.
--
I think this is overstated. Why should we only care what works "generally," rather than what works well in specific subdomains? If rationality means whatever helps you win, then overconfidence will often be rational. (Examples: placebo effect, dating, job interviews, etc.) I think you need to either decide that your definition of rationality does not always require a preference for true beliefs, or else revise the definition.
It also might be worthwhile, for the sake of clarity, to just avoid the word "rationality" altogether in future conversations. It seems to be at risk of becoming an essentially contested concept, particularly because everyone wants to be able to claim that their own preferred cognitive procedures are "rational." Why not just talk about whether a particular cognitive ritual is "goal-optimizing" when we want to talk about Eliezer-rationality, while saving the term "truth-optimizing" (or some variant) for epistemic-rationality?
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-05T18:57:51.868Z · LW(p) · GW(p)
Maybe "truth-seeking" versus "winning", if there's a direct appeal to one and not the other. But I am generally willing to rescue the word "rationality".
Replies from: mark_spottswood
↑ comment by mark_spottswood · 2009-03-05T19:26:35.963Z · LW(p) · GW(p)
Sorry -- I meant, but did not make clear, that the word "rationality" should be avoided only when the conversation involves the clash between "winning" and "truth seeking." Otherwise, things tend to bog down in arguments about the map, when we should be talking about the territory.
Replies from: Kenny
↑ comment by CronoDAS · 2009-03-04T01:53:41.572Z · LW(p) · GW(p)
Regarding "rationalists should win" - that still leaves us with the problem of distinguishing between someone who won because he was rational and someone who was irrational but won because of sheer dumb luck.
For example, buying lottery tickets is (almost always) a negative EV proposition - but some people do win the lottery. Was it irrational for lottery winners to have bought those specific tickets, which did indeed win?
Given a sufficiently large sample, the most spectacular successes are going to be those who pursued opportunities with the highest possible payoff regardless of the potential downside or even the expected value... for every spectacular success, there are probably several times as many spectacular failures.
Replies from: timtyler
↑ comment by timtyler · 2009-03-04T15:10:09.823Z · LW(p) · GW(p)
Re: Regarding "rationalists should win" - that still leaves us with the problem of distinguishing between someone who won because he was rational and someone who was irrational but won because of sheer dumb luck.
Just don't go there in the first place. Attempting to increase your utility is enough.
↑ comment by timtyler · 2009-03-03T19:54:06.018Z · LW(p) · GW(p)
A common example of where rationality and truth-seeking come into conflict is the case where organisms display their beliefs - and have difficulty misrepresenting them. In such cases, it may thus benefit them to believe falsehoods for reasons associated with signalling their beliefs to others:
"Definitely on all fronts is has become imperative not to bristle with hostility every time you encounter a stranger. Instead observe him, find out what he might be. Behave to him with politeness, pretending that you like him more than you do - at least while you find out how he might be of use to you. Wash before you go to talk to him so as to conceal your tribal odour and take great care not to let on that you notice his own, foul as it may be. Talk about human brotherhood. In the end don't even just pretend that you like him (he begins to see through that); instead, really like him. It pays."
- Discriminating Nepotism - as reprinted in: Narrow Roads of Gene Land, Volume 2 Evolution of Sex, p.359.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-03T19:36:46.915Z · LW(p) · GW(p)
Replies from: MichaelHoward
↑ comment by MichaelHoward · 2009-03-03T23:22:20.646Z · LW(p) · GW(p)
Eek, now there's Transhuman babyeaters! I see it also says "Man needs lies like children need toys." :-)
Replies from: Eliezer_Yudkowsky, swestrup
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-04T04:33:42.924Z · LW(p) · GW(p)
Ew. Didn't know that was where it came from, just saw the demotivator.
↑ comment by swestrup · 2009-03-05T19:02:16.028Z · LW(p) · GW(p)
While I don't necessarily agree that Man needs lies, Terry Pratchett made a very good argument for it in Hogfather:
Death: Yes. As practice, you have to start out learning to believe the little lies.
Susan: So we can believe the big ones?
Death: Yes. Justice, mercy, duty. That sort of thing.
Susan: They're not the same at all.
Death: You think so? Then take the universe and grind it down to the finest powder, and sieve it through the finest sieve, and then show me one atom of justice, one molecule of mercy. And yet, you try to act as if there is some ideal order in the world. As if there is some, some rightness in the universe, by which it may be judged.
Susan: But people have got to believe that, or what's the point?
Death: You need to believe in things that aren't true. How else can they become?
Replies from: Vladimir_Nesov, Annoyance
↑ comment by Vladimir_Nesov · 2009-03-06T12:33:57.015Z · LW(p) · GW(p)
See Angry Atoms. Systems can have properties inapplicable to their components. This is not a lie.
↑ comment by Annoyance · 2009-03-05T19:22:36.509Z · LW(p) · GW(p)
I can't agree that it's a good argument. Pratchett, through the character of Death, conflates the problem of constructing absolute standards with the 'problem' of finding material representations of complex concepts through isolating basic parts.
It's the sort of alchemical thinking that should have been discarded with, well, alchemists. Of course you can't grind down reality and find mercy. Can you smash a computer and find the essence of the computations it was carrying out? The very act of taking the computer apart and reducing it destroys the relationships it embodied.
Of course, you can find computation in atoms... just not the ones the computer was doing.
Replies from: swestrup
↑ comment by swestrup · 2009-03-05T20:36:37.032Z · LW(p) · GW(p)
No, I don't think that Death is conflating them at all. He is saying that Mercy, Justice and the like are human constructs and are not an inherent part of the universe. In this he is completely correct.
Where he goes wrong is in having only two categories "Truth" which seems to include only that which is inherent to the universe and "Lies" which he uses to hold everything else. There is no room in this philosophy for conjecture, goals, hopes, dreams, and the like.
Sadly, I have met folks who, while perhaps not as extreme in their classifications as this, nevertheless have no place in their personal philosophies for unproven conjectures, potentially true statements, partially supported beliefs, and the like. They are not comfortable with areas of gray between what they know is true and what they know is false.
I think the statements of Death are couched to appeal more to their philosophy than ours, but perhaps that is because Pratchett thinks such people more in need of the instruction.
comment by steven0461 · 2009-03-04T06:41:44.878Z · LW(p) · GW(p)
As I see it, rationality is much more about choosing the right things to use one's success for than it is about achieving success (in the conventional sense). Hopefully it also helps with the latter, but it may well be that rationality is detrimental to people's pursuit of various myopic, egoist, and parochial goals that they have, but that they would reject or downgrade in importance if they were more rational.
comment by HalFinney · 2009-03-03T23:35:53.120Z · LW(p) · GW(p)
It may be possible to have it both ways, to know rationality without having it interfere with achieving happiness and other goals.
For me, rationality is a topic of interest, but I don't make a religion out of it. I cultivate a sort of Zen attitude towards rationality, trying not to grasp it too tightly. I am curious to know what the rational truth is, but I'm willing and, with the right frame of mind, able to ignore it.
I can be aware at some level that my emotional feelings are technically irrational and reflect untrue beliefs, but so what. They're true enough. They're as true as everyone else's. That's good enough for me. I can still embrace them wholeheartedly.
Now how about overconfidence. The truth is that I am either lucky or unlucky on this issue, depending on how you look at it. I am not very overconfident. Modesty comes naturally to me. Asserting my opinions makes me uncomfortable. When someone else says I'm wrong, I take it very much to heart. This is just my personality. When I was younger I was more assertive, but as I've gotten older I find that my uncertainties have grown. Probably learning the rational truth on these matters has contributed to the change, but it is one which has come naturally to me.
Most people are different, but they can still know that their overconfidence is irrational and mistaken, without particularly acting any less confident. After all, they know that the other guy's confidence is just as inflated, so they are equally as justified in flaunting their excellence as anyone else.
Can one really use rationality like this? Listen to Lewis Carroll: "The question is, which is to be the master - that's all."
comment by igoresque · 2009-03-29T00:03:06.978Z · LW(p) · GW(p)
I believe a 'simple' cost-benefit analysis is warranted on a case-by-case basis. It is not some absolute, abstract decision. Clearly truth has a price. Sometimes the price is relatively high and other times it is lower. The reward will equally vary. Rationally speaking, truth will sometimes be worth the trouble, and sometimes not.
Replies from: subod_83
comment by Marshall · 2009-03-03T20:03:06.037Z · LW(p) · GW(p)
I endorse the thusly parsed: "rational" means better believing what is "true" (given one's limited info and analysis resources). This introduces the social dimension and the pragmatic dimension into rationality - which should never be about "paperclips" alone, as Tim Tyler seems to suggest.
comment by pnkflyd831 · 2009-03-05T21:02:33.966Z · LW(p) · GW(p)
The true cost of acting rational is the difference between acting truly rational versus acting purely rational in a situation.
First we have to make the distinction between what is factually true or strategically optimal and what an agent believes is true or strategically optimal. For non-rational agents these are different: there is at least one instance where what they believe is not what is true, or how they act is not optimal. For rationalists, what they believe has to be proven true and they must act optimally.
In a situation with only rational agents, they would find the optimal cumulative payoff and distribute it optimally in line with potentially different goals. This assumes rational agents can agree on an optimal distribution strategy, possibly based on incurred costs (time, resources spent) and goal priority, and have non-conflicting goals.
Non-rational agents may have conflicting goals, be greedy and not achieve optimal distribution, and may not find or implement the strategy to achieve the best cumulative payoff. Each of these areas identifies costs of non-rationality.
In a situation in which rationalist agents must work with non-rationalist agents there will be a cost of non-rationality no matter how the rationalist acts. In these situations a pure rationalist will act differently than a true rationalist. Truly rational agents take into account both factual truth and non-rational agents' beliefs and strategies when deciding strategy. The true rationalists will pursue strategies that achieve a sub-optimal but minimum cost. Purely rational agents take into account only factual truth when deciding strategy, and do not account for non-rational agents' beliefs or strategies. Pure rationalists will incur costs above this sub-optimal minimum. Thus, the true cost of acting rational is the difference between the cost of acting purely rational and acting truly rational.
comment by [deleted] · 2014-06-13T08:04:07.582Z · LW(p) · GW(p)
"There is also empirical evidence that high self-efficacy can be maladaptive in some circumstances. In a scenario-based study, Whyte et al. showed that participants in whom they had induced high self-efficacy were significantly more likely to escalate commitment to a failing course of action.[28] Knee and Zuckerman have challenged the definition of mental health used by Taylor and Brown and argue that lack of illusions is associated with a non-defensive personality oriented towards growth and learning and with low ego involvement in outcomes.[29] They present evidence that self-determined individuals are less prone to these illusions. In the late 1970s, Abramson and Alloy demonstrated that depressed individuals held a more accurate view than their non-depressed counterparts in a test which measured illusion of control.[30] This finding held true even when the depression was manipulated experimentally. However, when replicating the findings Msetfi et al. (2005, 2007) found that the overestimation of control in nondepressed people only showed up when the interval was long enough, implying that this is because they take more aspects of a situation into account than their depressed counterparts.[31][32] Also, Dykman et al. (1989) showed that depressed people believe they have no control in situations where they actually do, so their perception is not more accurate overall.[33] Allan et al. (2007) has proposed that the pessimistic bias of depressives resulted in “depressive realism” when asked about estimation of control, because depressed individuals are more likely to say no even if they have control.[34]
A number of studies have found a link between a sense of control and health, especially in older people.[35]
Fenton-O’Creevy et al.[7] argue, as do Gollwittzer and Kinney,[36] that while illusory beliefs about control may promote goal striving, they are not conducive to sound decision-making. Illusions of control may cause insensitivity to feedback, impede learning and predispose toward greater objective risk taking (since subjective risk will be reduced by illusion of control)."
comment by zslastman · 2012-11-22T18:50:33.917Z · LW(p) · GW(p)
I often wish I could have two brains - one which is fully, painfully aware of the truth and another which holds that set of beliefs which optimize happiness. I sometimes comfort myself with the thought that on a societal level, people like us function as that first, unhappy brain.
Altering the structure of the second brain to deal with the truth better should be a primary concern.