Rationality and Winning
post by lukeprog · 2012-05-04T18:31:25.586Z · LW · GW
Someone who claims to have read "the vast majority" of the Sequences recently misinterpreted me to be saying that I "accept 'life success' as an important metric for rationality." This may be a common confusion among LessWrongers due to statements like "rationality is systematized winning" and "be careful… any time you find yourself defining the [rationalist] as someone other than the agent who is currently smiling from on top of a giant heap of utility."
So, let me explain why Actual Winning isn't a strong measure of rationality.
In cognitive science, the "Standard Picture" (Stein 1996) of rationality is that rationality is a normative concept defined by logic, Bayesian probability theory, and Bayesian decision theory (aka "rational choice theory"). (Also see the standard textbooks on judgment and decision-making, e.g. Thinking and Deciding and Rational Choice in an Uncertain World.) Oaksford & Chater (2012) explain:
Is it meaningful to attempt to develop a general theory of rationality at all? We might tentatively suggest that it is a prima facie sign of irrationality to believe in alien abduction, or to will a sports team to win in order to increase their chance of victory. But these views or actions might be entirely rational, given suitably nonstandard background beliefs about other alien activity and the general efficacy of psychic powers. Irrationality may, though, be ascribed if there is a clash between a particular belief or behavior and such background assumptions. Thus, a thorough-going physicalist may, perhaps, be accused of irrationality if she simultaneously believes in psychic powers. A theory of rationality cannot, therefore, be viewed as clarifying either what people should believe or how people should act—but it can determine whether beliefs and behaviors are compatible. Similarly, a theory of rational choice cannot determine whether it is rational to smoke or to exercise daily; but it might clarify whether a particular choice is compatible with other beliefs and choices.
From this viewpoint, normative theories can be viewed as clarifying conditions of consistency… Logic can be viewed as studying the notion of consistency over beliefs. Probability… studies consistency over degrees of belief. Rational choice theory studies the consistency of beliefs and values with choices.
Thus, one could have highly rational beliefs and make highly rational choices and still fail to win due to akrasia, lack of resources, lack of intelligence, and so on. Like intelligence and money, rationality is only a ceteris paribus predictor of success.
So while it's empirically true (Stanovich 2010) that rationality is a predictor of life success, it's a weak one. (At least, it's a weak predictor of success at the levels of human rationality we are capable of training today.) If you want to more reliably achieve life success, I recommend inheriting a billion dollars or, failing that, being born+raised to have an excellent work ethic and low akrasia.
The reason you should "be careful… any time you find yourself defining the [rationalist] as someone other than the agent who is currently smiling from on top of a giant heap of utility" is because you should "never end up envying someone else's mere choices." You are still allowed to envy their resources, intelligence, work ethic, mastery over akrasia, and other predictors of success.
84 comments
Comments sorted by top scores.
comment by bryjnar · 2012-05-04T20:09:44.681Z · LW(p) · GW(p)
Another thing that's pretty crucial here is that rationality is only aimed at expected winning.
Suppose we live on Lottery Planet, where nearly everyone has a miserable life, but you can buy a lottery ticket for a chance of $BIGNUM dollars. Nonetheless, the chances of winning the lottery are so small that the expected value of buying a ticket is negative. So the rational recommendation is to refrain from buying lottery tickets.
Nonetheless, the agents who would be "smiling down from their huge piles of utility" could only be the ones who "irrationally" bought lottery tickets. (Credit for this example goes to someone else, but I can't remember who...)
You shouldn't expect rationality to help you win absolutely. Some people will just get lucky. You should expect it to help you do better than average, however. The rationalist on lottery planet is certainly likely to be doing better than the average lottery-ticket buyer.
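A minimal sketch of the expected-value point, with all numbers invented for illustration (the comment's $BIGNUM and odds are not specified): a ticket has negative expected value, so the rational recommendation is not to buy, yet only buyers can possibly end up on the giant pile of utility.

```python
import random

TICKET_PRICE = 1.0
JACKPOT = 10_000_000.0        # stand-in for $BIGNUM
P_WIN = 1e-8                  # chosen so that a ticket has negative expected value

# Expected value of buying one ticket: 1e-8 * 1e7 - 1 = -0.90,
# so the rational recommendation is "don't buy".
ev_buy = P_WIN * JACKPOT - TICKET_PRICE
print(f"EV of buying a ticket: {ev_buy:.2f}")

# Simulate a population of buyers: essentially everyone loses, yet the only
# agents who could end up smiling from a huge pile of utility are buyers.
random.seed(0)
buyers = 1_000_000
winners = sum(random.random() < P_WIN for _ in range(buyers))
print(f"Winners among {buyers:,} buyers: {winners}")
```

With these made-up odds, almost every buyer simply loses a dollar; the rare winner is lucky, not rational.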
Replies from: scientism, timtyler, Alsadius, shminux
↑ comment by scientism · 2012-05-04T20:41:17.425Z · LW(p) · GW(p)
On a similar note: rationally succeeding and simply succeeding might involve two entirely different approaches. For example, if success is largely a result of other successful people conferring success on you because they see you displaying certain signals, it doesn't follow that gaming the system will be as easy as naturally producing those signals. Signalling often relies on displays that are difficult to fake. The cognitive resources needed to fake it are often vastly disproportionate to the resources used in sincere signalling and, regardless, in many cases we may not even know what the signals are or how to fake them. The rational road to, say, political success might involve a multibillion dollar research program in social neuroscience whereas the natural road simply involves being born into the right family, going to the right schools, etc, and naturally acquiring all the signalling associated with that.
↑ comment by timtyler · 2012-05-05T00:36:14.842Z · LW(p) · GW(p)
Another thing that's pretty crucial here is that rationality is only aimed at expected winning.
Yes, we already had that one out the first time around.
↑ comment by Alsadius · 2012-05-05T16:49:53.850Z · LW(p) · GW(p)
True. I've seen a few comments from successful folks (the most memorable was from the founders of Home Depot) saying that you need to gamble to be successful. In that particular case, it basically involved calling his boss an idiot and completely rearranging the business model of hardware stores. Now obviously, they wouldn't have founded Home Depot without doing that, but I was thinking as I read it: "For every one of you, there's probably a thousand folks who got fired and a hundred who ran the business into the ground." It's a good guide for being extraordinarily successful, but by definition that can't be done ordinarily.
↑ comment by Shmi (shminux) · 2012-05-04T23:16:42.442Z · LW(p) · GW(p)
So the rational recommendation is to refrain from buying lottery tickets.
The "rational recommendation" would be to figure out a way to decrease your effective ticket price (example: "Mom, next time, instead of buying me yet another pair of black socks I hate, just give me a few lottery tickets!").
Replies from: bryjnar
↑ comment by bryjnar · 2012-05-05T14:49:30.658Z · LW(p) · GW(p)
I feel like you're fighting the hypothetical here. The point of the example was to illustrate a particular feature of rationality; obviously it's going to break down if you allow yourself other options.
comment by JGWeissman · 2012-05-04T18:58:55.323Z · LW(p) · GW(p)
I am wary of excluding work ethic and mastery over akrasia from rationality, and I am not sure about intelligence.
Akrasia and work ethic are choices. Aspiring rationalists who find themselves not making the choices they have found to be rational should seek to remedy this situation, not excuse themselves for having akrasia.
Some limitations on how rational you can be might be unfair, but that doesn't stop them from making you irrational.
Replies from: lukeprog, DanielLC
↑ comment by lukeprog · 2012-05-04T20:05:41.961Z · LW(p) · GW(p)
The problem with this is that multiple motivation systems contribute to action, and only one of them looks anything like "do the thing I expect will achieve my goals given what I believe about the world." For example, I wouldn't call a blind reflex a "choice" or "decision."
Replies from: JGWeissman, keefe
↑ comment by JGWeissman · 2012-05-04T20:21:57.712Z · LW(p) · GW(p)
Still, I think it's useful to ask if the whole person, with all their motivation systems, is rational. Asking if a person's subsystems are rational seems relevant when you are figuring out how to focus your training efforts on the systems most holding the person back.
A blind reflex may not itself be rational or irrational, but I can train my reflexes, and make rational choices about what I want to train my reflexes to do. Of course, I can only train reflexes to follow simple heuristics far short of computing a rational decision, and that is an "unfair" limit on my rationality, but that doesn't mean that a system that makes better choices isn't more rational than me.
Replies from: lukeprog
↑ comment by lukeprog · 2012-05-04T20:39:29.684Z · LW(p) · GW(p)
The cogsci notion of rationality is indeed a personal rather than a subpersonal one. I'm not trying to describe subprocesses as rational or irrational, though. I'm describing the whole person as rational or irrational, but rationality is an ideal standard for choices, not actions, and reflexes are not "choices." In any case, I can't find a sentence in your latest comment that I disagree with.
↑ comment by keefe · 2012-05-09T16:10:11.638Z · LW(p) · GW(p)
I think it's appropriate to separate work ethic and akrasia mastery from rationality. Saying that work ethic is a choice is, imho, a relatively simplistic view. People often get fired for something trivial (smoking when a drug test is coming up, repeated absence, etc) that they know full well is a suboptimal decision and the short term benefits of getting high (or whatever) override their concern for the long term possible consequences. I think it makes sense to make some distinction that rationality is the ability to select the right path to walk and self discipline is the wherewithal to walk it.
I wonder how well defined "my goals" are here or how much to trust expectations. I think a rough approximation could involve these various systems generating some impulse map and then OPFC and some other structures get involved in selecting an action. I don't think a closed form expression of a goal is required in order to say that the goal exists.
↑ comment by DanielLC · 2012-05-04T21:03:49.413Z · LW(p) · GW(p)
The definitions I've seen on here are (paraphrased):
Epistemic Rationality: Ability to find truth in a wide variety of environments
Instrumental Rationality: Ability to alter reality to fit your desires in a wide variety of environments
Work ethic and akrasia are part of epistemic rationality, in that they affect your ability to find the truth, but once you figure out what you need to do, any akrasia in actually doing it is strictly instrumental.
Replies from: eurg
↑ comment by eurg · 2012-05-05T16:03:55.112Z · LW(p) · GW(p)
I may be misreading this, but it seems to me that you inverted the meaning of akrasia.
Replies from: Viliam_Bur, DanielLC, DanielLC, DanielLC
↑ comment by Viliam_Bur · 2012-05-05T16:50:36.991Z · LW(p) · GW(p)
After careful reading, my understanding is that DanielLC is saying:
"Akrasia generally harms your instrumental rationality only. Except that you need some basic knowledge to bootstrap your epistemic rationality -- and if akrasia prevents you from ever learning this, then it has directly harmed your epistemic rationality, too."
as a reply to JGWeissman saying:
"If you know akrasia harms you significantly, and you don't make solving this problem your high priority, you are not even epistemically rational!"
Which, by the way, made me realize that I really am not epistemically rational enough. :(
Replies from: JGWeissman
↑ comment by JGWeissman · 2012-05-05T18:38:14.182Z · LW(p) · GW(p)
as a reply to JGWeissman saying:
"If you know akrasia harms you significantly, and you don't make solving this problem your high priority, you are not even epistemically rational!"
More like, "If you know akrasia harms you significantly, and you don't make solving this problem your high priority, then it doesn't matter if you are epistemically rational because it's not helping you be (instrumentally) rational."
"Rationality" by itself should refer to instrumental rationality. Epistemic rationality is tool of instrumental rationality. Despite these concepts being described as different adjectives modifying the same noun, it is suboptimal to think of them as different aspects of the same category. Epistemic rationality belongs in a category with other tools of rationality, such as actually choosing what you know you should choose.
comment by Shmi (shminux) · 2012-05-04T23:25:09.582Z · LW(p) · GW(p)
I'm having trouble calling rational a person who can rattle off a perfectly rational thing to do in every circumstance while spending their life complaining about how they would do this and that if only they didn't have akrasia.
Replies from: None, DanArmak, private_messaging, duckduckMOO, albeola
↑ comment by [deleted] · 2012-05-05T04:44:03.137Z · LW(p) · GW(p)
you can be an expert on rationality without being an expert at rationality.
Replies from: JGWeissman, shminux
↑ comment by JGWeissman · 2012-05-05T05:00:58.373Z · LW(p) · GW(p)
With that terminology, I would read shminux's comment as saying: "I have trouble calling rational a person who is an expert on rationality but not an expert at rationality." Where is the failure?
Replies from: None
↑ comment by Shmi (shminux) · 2012-05-05T05:34:16.122Z · LW(p) · GW(p)
"... those who cannot, teach."
Replies from: None, billswift
↑ comment by [deleted] · 2012-05-05T15:07:00.942Z · LW(p) · GW(p)
"Those who can, do, those who know, teach"
The less cynical and more realistic original formulation
Replies from: private_messaging
↑ comment by private_messaging · 2012-05-06T12:12:23.677Z · LW(p) · GW(p)
Unfortunately, in practice, those who don't know like to teach too. Fortunately, some of those who can, also teach, so you could listen to those who can.
↑ comment by DanArmak · 2012-05-05T11:50:43.904Z · LW(p) · GW(p)
That's what akrasia means. That your actions differ from your spoken intentions in certain ways. In your example, the intentions are rational, the actions are not - in a particular pattern we call akrasia.
It comes down to what you identify with more as a "person". The fragment who acts? Or the fragment who talks about how they wish to act differently? And which fragment do you want to assist in making the other fragment be more like them - the intentions like the actions, or the other way around?
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-05-05T17:39:17.220Z · LW(p) · GW(p)
There is an SMBC comic for that.
↑ comment by private_messaging · 2012-05-06T12:11:32.095Z · LW(p) · GW(p)
Agreed completely. If you can't use it, you didn't learn it.
↑ comment by duckduckMOO · 2012-05-08T19:01:34.894Z · LW(p) · GW(p)
just world fallacy at 10 upvotes. wonderful.
edit: unless you mean "rattling" to tell us that they don't really know and they're just making noises. If that is the point it would be nice if you were explicit about it.
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-05-08T19:34:50.379Z · LW(p) · GW(p)
just world fallacy at 10 upvotes. wonderful.
Feel free to explain how irrational actions (despite rational words/intentions) constitute a just world fallacy. Sure, you can call akrasia an incurable disease and give up, or you can keep trying to win despite it. Some have.
Replies from: duckduckMOO
↑ comment by duckduckMOO · 2012-05-08T19:58:36.393Z · LW(p) · GW(p)
People exist who are good at figuring out the best thing to do and not good at doing it. These people are not necessarily irrational. E.g., it's hard for a paraplegic to be good at tennis, or an idiot to be good at maths. The playing field is not level.
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-05-08T20:11:13.099Z · LW(p) · GW(p)
People exist who are good at figuring out the best thing to do and not good at doing it.
Yes, absolutely. Then a rational thing to do would be figuring out what they are good at doing, and starting to do it. That does not mean it is easy, just rational.
These people are not necessarily irrational. E.g., it's hard for a paraplegic to be good at tennis, or an idiot to be good at maths.
A paraplegic can find something else to be good at. We had a quadriplegic mayor here for awhile.
The playing field is not level.
Design your own playing field.
Replies from: duckduckMOO
↑ comment by duckduckMOO · 2012-05-08T20:55:53.083Z · LW(p) · GW(p)
"Find or make a niche" is not a strategy someone can automatically pull off when they hit a certain level of rationality. That someone has not successfully done so does not mean they are irrational. Your original comment implies (basically states) that someone who is not getting anything done is, QED, not rational. This is nonsense for the same reason.
You are proposing solutions for which rationality is not the sole determiner of success. People can fail for reasons other than irrationality. Emblematic example of the just world fallacy, with justice here being "rational people succeed."
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-05-08T21:03:42.522Z · LW(p) · GW(p)
Emblematic example of the just world fallacy, with justice here being "rational people succeed."
It seems that you are intent on applying this label, no matter what, so I will disengage.
Replies from: duckduckMOO
↑ comment by duckduckMOO · 2012-05-08T21:13:54.230Z · LW(p) · GW(p)
edit: My response was useless so I've removed it.
↑ comment by albeola · 2012-05-06T19:20:34.538Z · LW(p) · GW(p)
You're changing the subject. The question was whether actually having akrasia is compatible with rationality. The question was not whether someone who claims to have akrasia actually has akrasia, or whether it is rational for someone who has akrasia to complain about akrasia and treat it as not worth trying to solve.
Replies from: army1987, shminux
↑ comment by A1987dM (army1987) · 2012-05-08T10:04:45.573Z · LW(p) · GW(p)
Having akrasia is no more compatible with rationality than having myopia is: saying “if only I had better eyesight” while not wearing eyeglasses is not terribly rational.
↑ comment by Shmi (shminux) · 2012-05-06T20:45:28.631Z · LW(p) · GW(p)
I'm pretty sure I expressed my opinion on this topic precisely ("no, it's not compatible"). It's up to you how you choose to misunderstand it; I have no control over that.
Replies from: albeola
↑ comment by albeola · 2012-05-06T21:24:51.341Z · LW(p) · GW(p)
spending their life complaining about how they would do this and that if only they didn't have akrasia
Do you agree the quoted property differs from the property of "having akrasia" (which is the property we're interested in); that one might have akrasia without spending one's life complaining about it, and that one might spend one's life complaining about akrasia without having (the stated amount of) akrasia (e.g. with the deliberate intent to evade obligations)? If this inaccuracy were fixed, would your original response retain all its rhetorical force?
(It's worth keeping in mind that "akrasia" is more a problem description saying someone's brain doesn't produce the right output, and not an actual specific mechanism sitting there impeding an otherwise-functioning brain from doing its thing, but I don't think that affects any of the reasoning here.)
comment by private_messaging · 2012-05-06T13:23:53.750Z · LW(p) · GW(p)
There's a tale of the Naive Agent. When the Naive Agent comes across a string, NA parses it into a hypothesis and adds this hypothesis to his decision system if the hypothesis is new to NA (he is computationally bounded and doesn't have a full list of possible hypotheses); NA tries his best to set the prior for the hypothesis and to adjust it in the most rational manner. Then NA acts on his beliefs, consistently and rationally. One could say that NA is quite rational.
Exercise for the reader: point out how you can get NAs to give you money by carefully crafting strings.
The flaw in NA is that when he comes across a string that is parseable into a hypothesis, he performs an invalid update, adjusting the probability of something from effectively 0 to a non-zero value. That has to be done to be able to learn new things. At the same time, doing so makes the subsequent 'rational' processing exploitable.
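A toy sketch of the exploit (the class, the generic 0.001 prior, and the payoff figures are all invented for illustration): once a crafted string is parsed into a hypothesis with any nonzero prior, a large enough promised payoff dominates NA's expected-utility calculation.

```python
# Toy Naive Agent: any hypothesis it is handed gets a made-up nonzero prior.
class NaiveAgent:
    GENERIC_PRIOR = 0.001   # "rationally adjusted" but still pulled out of thin air

    def __init__(self):
        self.hypotheses = {}  # claim -> (probability, payoff if the claim is true)

    def hear(self, claim, payoff_if_true):
        # The invalid update: probability jumps from effectively 0 to a
        # generic nonzero value merely because the string was parsed.
        if claim not in self.hypotheses:
            self.hypotheses[claim] = (self.GENERIC_PRIOR, payoff_if_true)

    def expected_value_of_paying(self, claim, price):
        p, payoff = self.hypotheses[claim]
        return p * payoff - price

na = NaiveAgent()
claim = "send me $10 and you gain 10^9 utilons"   # the carefully crafted string
na.hear(claim, payoff_if_true=1e9)
print(na.expected_value_of_paying(claim, price=10))   # ~1e6 > 0, so NA pays up
```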
Replies from: prase
↑ comment by prase · 2012-05-06T17:56:30.982Z · LW(p) · GW(p)
Exercise for the reader: point out how you can get NAs to give you money by carefully crafting strings.
Give them strings like "giving me money is the best thing you can do"?
I am not sure how exactly naïve agents are relevant to the post, but it seems interesting. Could you write a full discussion post about naïve agents, so that the readers needn't guess how to pump money from them?
Replies from: private_messaging
↑ comment by private_messaging · 2012-05-06T18:56:27.210Z · LW(p) · GW(p)
Pascal's mugging, and its real-world incarnations. The agent I am speaking of is what happens when you try to be a rationalist on bounded hardware while having the tendency to insert parsed strings as hypotheses with some made-up generic priors. Those simply do not combine without creating a backdoor for other agents to exploit.
Replies from: prase
↑ comment by prase · 2012-05-06T19:30:40.642Z · LW(p) · GW(p)
Well, sounds plausible, but I would prefer if you described the idea in greater detail. You seem to think that bounded hardware together with Bayesian rationality is necessarily exploitable. At least you have made some assumptions you haven't specified explicitly, haven't you?
Replies from: private_messaging
↑ comment by private_messaging · 2012-05-06T19:41:14.987Z · LW(p) · GW(p)
The introduction of a hypothesis you parsed out of a string is the most important issue. Your reading of this idea I posted was not a proper Bayesian belief update. Your value for the hypothesis I posted was effectively zero (if you didn't think of it before); now it is nonzero (I hope).
Of course, one could perhaps rationally self-modify into something more befitting the limited computational hardware and the necessity to cooperate with other agents in the presence of cheaters, if one is smart enough to reinvent all of the relevant strategies. Or better yet, not self-modify away from this in the first place.
Replies from: prase
↑ comment by prase · 2012-05-06T21:32:34.745Z · LW(p) · GW(p)
Say that I must decide between actions A and B. The decision depends on an uncertain factor expressed by a hypothesis X: if X is true, then deciding for A gives me 100 utilons while B gives 0, conversely if X is false A yields 0 and B earns me 100 utilons. Now I believe X is true with 20% probability, so the expected utilities are U(A) = 20 and U(B) = 80. You want to make me pick A. To do that, you invent a hypothesis Y such that P(X|Y) = 60% (given my prior beliefs, via correct updating). I haven't considered Y before. So you tell me about it.
Now, do you say that after you tell me that Y exists (as a hypothesis) my credence in X necessarily increases? Or that it happens only with a specially crafted Y which nevertheless always can be constructed? Or something else?
It's clear that one can somewhat manipulate other people by telling them about arguments they hadn't heard before. But that's not specific to imperfect Bayesian agents, it applies to practically anybody. So I am interested whether you have a formal argument which shows that an imperfect Bayesian agent is always vulnerable to some sort of exploitation, or something along these lines.
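A worked version of the toy numbers above; whether merely being told Y should move P(X) at all is exactly the contested step.

```python
# X true: A pays 100, B pays 0.  X false: A pays 0, B pays 100.
def expected_utilities(p_x):
    return {"A": p_x * 100, "B": (1 - p_x) * 100}

print(expected_utilities(0.20))   # {'A': 20.0, 'B': 80.0} -> B is preferred
# If being told Y is simply accepted and credence in X moves to P(X|Y) = 0.60,
# the preference flips:
print(expected_utilities(0.60))   # {'A': 60.0, 'B': 40.0} -> A is preferred
```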
Replies from: private_messaging
↑ comment by private_messaging · 2012-05-06T21:43:43.070Z · LW(p) · GW(p)
The issue is that as you added hypothesis Y, with nonzero probability, yes, the propagation will increase your belief in X. You've got to have some sort of patch over this vulnerability, or refuse to propagate, etc. You have to have some very specific imperfect architecture so that the agent doesn't get scammed.
There's a good, very simple example that's popular here: Pascal's mugging. Much has been written about it, with really dissatisfying counter-rationalizations. The bottom line is: when the agent hears of Pascal's mugging at time 0, the statement gets parsed into a hypothesis, and some sort of estimate can only be produced at time t; so what will the agent do at times before t?
edit: To clarify, the two severe cases both involve the introduction of a hypothesis that should have an incredibly low prior. You end up with an agent that has a small number of low-probability hypotheses, cherry-picked out of an enormous sea of such hypotheses that are equally or more likely.
Replies from: prase
↑ comment by prase · 2012-05-06T22:22:55.954Z · LW(p) · GW(p)
The issue is that as you added hypothesis Y, with nonzero probability, yes, the propagation will increase your belief in X.
Adding Y we get by standard updating
P(X | being told Y) = P(being told Y | X) P(X) / P(being told Y).
Even if Y itself is a very strong evidence of X, I needn't necessarily believe Y if I am told Y.
Pascal's mugging.
Pascal's mugging is a problem for unbounded Bayesian agents as well, it doesn't rely on computation resource limits.
Replies from: private_messaging
↑ comment by private_messaging · 2012-05-07T07:01:30.530Z · LW(p) · GW(p)
Adding Y we get by standard updating
P(X | being told Y) = P(being told Y | X) P(X) / P(being told Y).
Even if Y itself is a very strong evidence of X, I needn't necessarily believe Y if I am told Y.
That update is not the problematic one. The problematic one is the update where, when you are told Y, you add Y itself with some probability set by
P(Y | being told Y) = P(being told Y | Y) P(Y) / P(being told Y).
Then you suddenly have Y in your system (not just 'been told Y'). If you don't do that you can't learn; if you do that you need a lot of hacks not to get screwed over. edit: Or better yet, there are hacks that make such an agent screw over other agents, as the agent self-deludes on some form of Pascal's mugging and tries to broadcast the statement that subverted it, but has hacks not to act in self-damaging ways on such beliefs. For example, an agent could invent gods that need to be pleased (or urgent catastrophic problems that need to be solved), then set up a sacrifice scheme and earn some profits.
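A sketch of the two-step update being criticized here (the 0.05 prior assigned to Y is invented; 0.60 and 0.20 are the figures from the exchange above):

```python
p_y = 0.05              # Y inserted with a made-up prior merely because it was heard
p_x_given_y = 0.60      # strong evidence for X, per the example above
p_x_given_not_y = 0.20  # the old credence, otherwise unchanged

# Propagation then drags P(X) upward, exactly as described:
p_x = p_x_given_y * p_y + p_x_given_not_y * (1 - p_y)
print(f"P(X) after inserting Y: {p_x:.2f}")   # 0.22 > 0.20
```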
Pascal's mugging is a problem for unbounded Bayesian agents as well, it doesn't rely on computation resource limits.
Until an unbounded Bayesian agent tells me it got Pascal's-mugged, that's not really known. I wonder how the Bayesian agent would get the meaning out of pixel values, all the way to seeing letters, all the way to seeing a message, and then to paying up, without the 'add a hypothesis where none existed before' thing. The unbounded agent would have to have pre-existing hypotheses that giving a stranger money will save various numbers of people.
Replies from: prase
↑ comment by prase · 2012-05-07T18:34:35.336Z · LW(p) · GW(p)
Then you suddenly have Y in your system (not just 'been told Y'). If you don't do that you can't learn; if you do that you need a lot of hacks not to get screwed over.
I don't think I can't learn if I don't include every hypothesis I am told in my set of hypotheses with an assigned probability. A bounded agent may well do some rounding on probabilities and ignore every hypothesis with probability below some threshold.
But even if I include Y with some probability, what does it imply?
Until an unbounded Bayesian agent tells me it got Pascal's-mugged, that's not really known.
Has a bounded agent told you that it got Pascal-mugged? The problem is a combination of a complexity-based prior together with an unbounded utility function, and that isn't specific to bounded agents.
Can you show how a Bayesian agent with a bounded utility function can be exploited?
Replies from: private_messaging
↑ comment by private_messaging · 2012-05-08T06:17:01.657Z · LW(p) · GW(p)
You're going down the road of actually introducing the necessary hacks. That's good. I don't think simply setting a threshold probability or capping the utility of a Bayesian agent results in the most effective agent given specific computing time, and it feels to me that you're wrongfully putting the burden of both defining what your agent is, and proving it, on me.
You've got to define what the best threshold is, or what the reasonable cap is, first - those have to be determined somehow before you have your rational agent that works well. Clearly I can't show that it is exploitable for any values, because assuming a hypothesis probability threshold of 1-epsilon and a utility cap of epsilon, the agent cannot be talked into doing anything at all. edit: and trivially, by setting the threshold too low and the cap too high, the agent can be exploited.
We were talking about LW rationality. If LW rationality didn't give you a procedure for determining the threshold and the cap, then I have already demonstrated the point I was making. I don't see huge discussion here of the optimal cap on utility, or of the optimal threshold, or of the best handling of hypotheses below the threshold, and it feels to me that rationalists have thresholds set too low and caps set too high. You can of course have an agent that decides with common sense and then sets the threshold and cap to match, but that's rationalization, not rationality.
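A sketch of the two knobs under discussion, with the offer, threshold, and cap values all arbitrary: set them one way and the agent pays up for any crafted offer; set them another way and it refuses, but nothing in the framework itself says where to set them.

```python
def accepts_offer(p, payoff, price, p_threshold, utility_cap):
    # Hypotheses below the probability threshold are ignored outright;
    # payoffs are capped before computing expected value.
    if p < p_threshold:
        return False
    return p * min(payoff, utility_cap) - price > 0

offer = dict(p=1e-6, payoff=1e12, price=10)
print(accepts_offer(**offer, p_threshold=0.0, utility_cap=float("inf")))   # True: exploited
print(accepts_offer(**offer, p_threshold=1e-4, utility_cap=1e6))           # False: hacks in place
```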
comment by Vladimir_Nesov · 2012-05-04T20:56:52.190Z · LW(p) · GW(p)
rationality is a normative concept defined by logic, Bayesian probability theory, and Bayesian decision theory
Compare with what Russell said about mathematics:
"Pure Mathematics is the class of all propositions of the form “p implies q,” where p and q are propositions containing one or more variables, the same in the two propositions, and neither p nor q contains any constants except logical constants."
Where human rationality is concerned, simple measures analogous to deductive correctness in mathematics don't capture many important aspects that reflect aesthetic of purpose or quality of understanding. In mathematics, understanding of an argument is often more important than its deductive correctness, for a flawed presentation of a sound idea can be fixed, or given a rigorous foundation long after it's first formulated.
[This distinction (and analogy that carries it over from the similar argument about mathematics) deserves a full-length article, but for now I'll just point it out.]
Replies from: Dr_Manhattan
↑ comment by Dr_Manhattan · 2012-05-05T13:48:28.317Z · LW(p) · GW(p)
deserves a full-length article
+1
comment by gRR · 2012-05-04T20:16:45.445Z · LW(p) · GW(p)
It would be more correct to say that "Winning as defined by general society norms" is not a strong measure of rationality. "Actual Winning", as defined by the agent's own total values, certainly must be.
Replies from: Normal_Anomaly
comment by EE43026F · 2012-05-04T22:45:29.555Z · LW(p) · GW(p)
Also, while the prior probability of winning is (should be) higher in the rationality group, and lower outside, there are likely still many more winners outside the rationality group, because there are so many more people outside it than within. Making use of the availability heuristic to estimate "winning" and decide whether rationality pays off won't work well.
comment by John_Maxwell (John_Maxwell_IV) · 2012-05-07T04:48:57.687Z · LW(p) · GW(p)
Rational thinking has helped me overcome my akrasia in the past, so if someone isn't very good at overcoming theirs, I see that as weak evidence of poor rationality.
comment by billswift · 2012-05-04T22:09:05.794Z · LW(p) · GW(p)
Thus, one could have highly rational beliefs and make highly rational choices and still fail to win due to akrasia, lack of resources, lack of intelligence, and so on. Like intelligence and money, rationality is only a ceteris paribus predictor of success.
I disagree here. Akrasia, resources, and intelligence are all factors that should be taken into account by a rational agent. The reason rational agents don't always win is that the complexity of factors in the real world is too great to predict reliably, no matter how rational and intelligent you are. Rationality provides the best possible means of "balancing the odds", but nothing can guarantee success.
comment by John_Maxwell (John_Maxwell_IV) · 2012-05-09T17:34:09.170Z · LW(p) · GW(p)
First, it seems to me that this is mainly a debate over the definition of instrumental rationality. And I suspect the reason people want to have this debate is so they can figure out whether they count as especially "instrumentally rational" or not.
The simplest definition of "instrumentally rational" I can think of is "a person is instrumentally rational to the extent they are good at acting to achieve their goals". Thus somebody with akrasia would not qualify as very instrumentally rational under this simple definition. Your definition amounts to drawing the boundary of agency differently so it doesn't end at the person's body, but slices through their brain between them and their akrasia. I don't much like this definition because it seems as though knowing what the best thing to do is (as opposed to doing it) should be in the domain of epistemic rationality, not instrumental rationality.
I would prefer to draw the line at the person's entire brain, so that someone who had a better intuitive understanding of probability theory might qualify as being more instrumentally rational, but an especially wealthy or strong person would not, even if those characteristics made them better at acting to achieve their goals.
Related thread on word usage: http://lesswrong.com/lw/96n/meta_rational_vs_optimized/
Replies from: thomblake
↑ comment by thomblake · 2012-05-09T17:42:49.232Z · LW(p) · GW(p)
The post actually seems to equivocate between epistemic and instrumental rationality - note the use of "rational beliefs" and "rational choices" in the same sentence.
I think it's easy to defend a much weaker version of the thesis, that instrumental rationality maximizes expected utility, not utility of results.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-05-09T17:52:49.918Z · LW(p) · GW(p)
Here is a thought experiment that illustrates the slipperiness of instrumental rationality: Let's say there is a world where everyone is respected according to their (ELO-ranked) chess ability and nothing else. In this world, your ability to make friends, earn a high salary, etc. all depends on how well you play chess. Should somebody who is better at playing chess be considered more instrumentally rational in this world?
My definition says yes, because chess playing is an ability that resides in the brain. If you define instrumental rationality as "ability to make choices with high expected value" or some such, that definition says yes as well because playing chess is a series of choices. You can imagine a hypothetical Flatland-weird universe where making good choices depends more on the kind of skills required to play chess and less on probabilistic reasoning, calculating expected values, etc. In this world the equivalent of Less Wrong discusses various chess openings and endgame problems in order to help members become more instrumentally rational.
comment by private_messaging · 2012-05-07T13:59:30.580Z · LW(p) · GW(p)
It seems to me that LessWrong rationality does not concern itself with the computational limitations of agents, using as its norm an idealized model that ignores those limitations, and it lacks extensive discussion of the comparative computational complexity of different methods, as well as of the security of the agent against deliberate (or semi-accidental) subversion by other agents. (See my post about the naive agent.)
Thus the default hypothesis should be that the teachings of LessWrong for the most part do not increase the efficacy (win-ness) of computationally bounded agents, and likely decrease it. Most cures do not work, even those that intuitively should; furthermore, there is a strong placebo effect when it comes to reporting the efficacy of cures.
The burden of proof is not on those who claim it does not work. The expected utility of the LW teachings should start at zero, or at a small negative value (for the time spent, which could instead go toward e.g. training for a profession, studying math in a more conventional way, etc.).
As an intuition pump for computationally limited agents, consider a weather simulator that has to predict the weather on specific hardware, having to 'outrun' the real weather. If you replace each number in the weather simulator with the probability distribution of the sensor data (with Bayesian updates if you wish), you will obtain a much, much slower weather simulator, which will have to simulate weather on a lower-resolution grid and will perform much worse than the original weather simulator on the same hardware. Improving weather prediction within the same hardware is a very difficult task with no neat solutions, one that will involve a lot of timing of different approaches.
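A back-of-the-envelope version of the intuition pump (the step budget, ensemble size, and 3D-grid assumption are all invented): carrying a distribution instead of a point value multiplies the per-cell cost, so on the same hardware the grid has to coarsen.

```python
budget = 1_000_000                # cell-updates affordable per forecast step (arbitrary units)
ensemble = 100                    # samples used to represent each quantity's distribution

cells_point = budget              # point-value simulator: 1 update per cell
cells_dist = budget // ensemble   # distribution-carrying simulator: 100 updates per cell

# For a 3D grid, cells ~ resolution**3, so resolution drops by ensemble**(1/3), roughly 4.6x.
print(cells_point, cells_dist, ensemble ** (1 / 3))
```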
Replies from: bryjnar, amcknight
↑ comment by bryjnar · 2012-05-08T11:11:32.956Z · LW(p) · GW(p)
So, it seems you've hit the nail on the head when you say it's an idealized model. Full rationality (in the sense it's used here) isn't something that you can implement as a computationally bounded agent. There's a whole different question which is how to come up with good approximations to it, though.
It's analogous to, say, proving the completeness of natural deduction for first-order logic. That tells you that there is a proof for any true statement, but not that you, as a computationally bounded agent, will be able to find it. And coming up with better heuristics for proving things is a big question of its own.
Replies from: private_messaging
↑ comment by private_messaging · 2012-05-08T14:45:03.230Z · LW(p) · GW(p)
The issue is that LW hand-wavily preaches it as a lifestyle of some kind (instead of studying it rigorously as an idealized model). It is also unlike the ideal models in physics. An ideal gas is a very close approximation to air at normal conditions. A computationally unbounded agent, on the other hand... it is to a bounded agent as the ideal gas of classical physics is to cooking an omelette.
I doubt that even 'coming up with good approximations to it' offers anything (for human self-improvement) beyond the trivial 'make the agent win the most'. One has to do some minor stuff, such as studying math and calculating probabilities correctly in some neat cases like medical diagnosis. Actually winning the most is too much about thinking about the right things.
edit: and about strategies, and about agent-agent interaction, where you want to take in reasoning by other agents but don't want to be exploited, don't want other agents' failures to propagate to you, don't want to fall prey to an odd mixture of exploitation and failure where the agent takes its own failed reasoning seriously enough to convince you but not seriously enough to allow that failure to damage itself, etc. Overall, a very, very complex issue.
↑ comment by amcknight · 2012-05-08T19:42:46.695Z · LW(p) · GW(p)
It seems to me that LessWrong rationality does not concern itself with the computational limitations of agents
The LessWrong community is made up of a lot of people that concern themselves with all kinds of things. I get annoyed when I hear people generalizing too much about LessWrong members, or even worse, when they talk about LessWrong as if it is a thing with beliefs and concerns. Sorry if I'm being too nit-picky.
comment by XiXiDu · 2012-05-04T18:59:54.305Z · LW(p) · GW(p)
So, let me explain why Actual Winning isn't a strong measure of rationality.
Your post is basically saying that if you believe that a negative Singularity is likely and that a positive Singularity has lots of expected utility, then if you work to achieve a positive Singularity you are rational (consistency) and therefore winning. And since nobody can disprove your claim that the Singularity is near, until the very end of the universe, you will be winning winning winning....without actually achieving anything ever.
I hope your next post is going to explain why I should care.
Replies from: JGWeissman, Alsadius, thomblake
↑ comment by JGWeissman · 2012-05-04T19:04:47.552Z · LW(p) · GW(p)
Did you even read the post? Luke doesn't even mention the Singularity, much less claim that it is near, or that working on it is automatically rational and winning.
Replies from: XiXiDu
↑ comment by XiXiDu · 2012-05-04T19:23:07.180Z · LW(p) · GW(p)
Did you even read the post? Luke doesn't even mention the Singularity, much less claim that it is near, or that working on it is automatically rational and winning.
Huh? I could have used any other example to highlight that consistency of beliefs and actions cannot be a sufficient definition of rationality to care about. I just thought that since he is the president of the SIAI, it would be an appropriate example.
Replies from: thomblake, David_Gerard
↑ comment by thomblake · 2012-05-04T19:39:37.856Z · LW(p) · GW(p)
You didn't phrase it as though it were an example, you phrased it as a summary. Your comment states that Luke's point is about the Singularity, which was not mentioned in the post.
Replies from: XiXiDu
↑ comment by XiXiDu · 2012-05-05T08:31:44.817Z · LW(p) · GW(p)
You didn't phrase it as though it were an example, you phrased it as a summary.
Phew, I certainly didn't expect that. I thought it was completely obvious to everyone that the post does not talk about the Singularity and that therefore my comment couldn't possibly be about the Singularity either.
Let's analyze my comment:
1a) Your post is basically saying that if you believe that a negative Singularity is likely and that a positive Singularity has lots of expected utility,...
Since his original post did not talk about the Singularity it is instantly obvious that the above sentence can be read as:
1b) Your post is basically saying that if you hold belief X and that belief X is the right thing to do,...
2a) ...then if you work to achieve a positive Singularity you are rational (consistency) and therefore winning.
The end of that sentence makes it clear that I was actually talking about the original post by referring to the consistency of acting according to your beliefs. It could be read as:
2b) ...then if you act according to belief X you are rational (consistency) and therefore winning.
3a) And since nobody can disprove your claim that the Singularity is near, until the very end of the universe, you will be winning winning winning....without actually achieving anything ever.
That sentence shows how anyone could choose any belief about the future, frame it as an unprovable prediction and act accordingly and yet fit the definition of rationality that has been outlined in the original post. It could be read as:
3b) And since nobody can disprove belief X, you will be winning winning winning....without actually achieving anything ever.
Replies from: Kaj_Sotala, Viliam_Bur, atorm
↑ comment by Kaj_Sotala · 2012-05-06T06:22:42.969Z · LW(p) · GW(p)
I thought it was completely obvious to everyone that the post does not talk about the Singularity and that therefore my comment couldn't possibly be about the Singularity either.
The problem is that you have a history of bringing Singularity issues into posts that are not about the Singularity. (Or at least, have a history of making comments that look like that.) Two examples that spring readily to mind are using a post about Leverage Research to critique SIAI and bringing in post-Singularity scenarios when commenting on a post about current-day issues. With such a history, it's not obvious that your comment couldn't have been about the Singularity.
↑ comment by Viliam_Bur · 2012-05-05T17:54:12.209Z · LW(p) · GW(p)
You have succeeded in mixing together a baseless personal accusation with a difficult epistemic problem. The complexity of the problem makes it difficult to point out exactly the inappropriateness of the offense... but obviously it is there; readers see it and downvote accordingly.
The epistemic problem is basically this: feeling good is an important part of everyone's utility function. If a belief X makes one happy, shouldn't it be rational (as in: increasing expected utility) to believe it, even if it's false? Especially if the belief is unfalsifiable, so the happiness caused by the belief will never be countered by the sadness of falsification.
And then you pick Luke as an example, accusing him that this is exactly what he is doing (kind of wireheading himself psychologically). Since what Luke is doing is a group value here, you have added a generous dose of mindkilling to a question that is rather difficult even without doing so. But even without that, it's unnecessarily personally offensive.
The correct answer is along the lines that if Luke also has something else in his utility function, believing a false belief may prevent him from getting it. (Because he might wait for the Singularity to provide him this thing, which would never happen; but without this belief he might have pursued his goal directly and achieved it.) If the expected utility of achieving those other goals is greater than the expected utility of feeling good by thinking false thoughts, then the false belief is a net loss, and it even prevents one from realizing and fixing this. But this explanation can be countered by more epistemic problems, etc.
For now, let me just state openly that I would prefer to discuss difficult epistemic problems in a thread without this kind of contributions. Maybe even on a website without this kind of contributions.
↑ comment by David_Gerard · 2012-05-04T21:23:25.071Z · LW(p) · GW(p)
You could have used "working for the second coming of Jesus" as just as good an example and just as personal a one.
Replies from: XiXiDu
↑ comment by XiXiDu · 2012-05-05T09:05:17.955Z · LW(p) · GW(p)
You could have used "working for the second coming of Jesus" as just as good an example and just as personal a one.
Incidentally, I am 95% sure I know why he made this post, and it has to do with the Singularity. Which will become clear in a few days.
↑ comment by thomblake · 2012-05-04T20:11:17.705Z · LW(p) · GW(p)
A more straightforward example:
If I believe that buying a lottery ticket for $1 has a 90% chance of winning $1 million, the rational thing to do [1] is buy the ticket. Even if the actual chance of winning [2] is 1 in a billion.
And if I was correct about the probability of winning but still lost, that also does not change that it was the rational thing to do.
[1]: under ordinary assumptions about preferences
[2]: the probability I would have assigned if I had much better information
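The arithmetic behind the example, taking the prize, price, and probabilities from the comment and modeling the "ordinary assumptions about preferences" of footnote [1] as risk-neutral expected dollars:

```python
price = 1.0
prize = 1_000_000.0

believed_p = 0.9    # the probability I believe
actual_p = 1e-9     # the probability I'd assign with much better information

ev_believed = believed_p * prize - price   # 899,999: buying looks clearly rational
ev_actual = actual_p * prize - price       # about -0.999: buying actually loses on average
print(ev_believed, ev_actual)
# Rationality is judged against the agent's beliefs at the time, not the outcome.
```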
↑ comment by private_messaging · 2012-05-06T12:50:03.580Z · LW(p) · GW(p)
What if you have option C: think and figure out that the actual chance is 1 in a billion? This completely summarizes the issue. If there are 3 populations of agents:
A: agents who grossly overestimate the chance of winning but somehow don't buy the ticket (perhaps the reasoning behind the estimate, due to its sloppiness, is not given enough weight against the 'too good to be true' heuristic),
B: agents who grossly overestimate the chance of winning and buy the ticket,
C: agents who correctly estimate the chance of winning and don't buy the ticket.
C does the best, A does the second best, and B loses. B may also think itself a rationalist, but is behaving in an irrational manner by not accounting for B's cognitive constraints. Perhaps agents from A who read about cognitive biases and decide that they don't have those biases become agents in B, while to become an agent in C you have to naturally have something, and also train yourself.