Comments for "Rationality"

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-16T22:34:51.045Z · LW · GW · Legacy · 42 comments

I wrote an Admin page for "What do we mean by 'Rationality'?" since this has risen to the status of a FAQ.  Comments can go here.

42 comments

comment by gjm · 2009-03-16T22:56:08.729Z · LW(p) · GW(p)

The only legitimate purpose of words having definitions in the first place is to let two people communicate - the purpose of attaching certain syllables to certain concepts is to help transport meanings from one mind to another.

Not quite, I think. I frequently use words in my own thinking, and have much reason to think I'm not alone in this, and if you really think that's illegitimate then I'd be interested to know why. (I bet you don't.) And it sure seems like I make use of the fact that (some) words have definitions when I do that.

Replies from: Eliezer_Yudkowsky, jimmy, Vladimir_Nesov
comment by jimmy · 2009-03-17T07:05:19.131Z · LW(p) · GW(p)

Even when talking to yourself, the rest of his point holds - there is nothing magical about the label "rationality" that you're going to find in the territory; it's just part of how you mark up your map.

That said, I don't actually seem to think in English (or any other interhuman language) all that much unless I'm planning out what to say (or what I want to say). This is something that I've only noticed fairly recently, and it seems to be something that most people don't realize.

Talking to yourself when planning what to say certainly counts as "for communication between two people".

Replies from: gjm, billswift
comment by gjm · 2009-03-17T10:20:26.294Z · LW(p) · GW(p)

Yes, the rest of Eliezer's point holds; that would be why I didn't criticize the rest of Eliezer's point.

Different people think in words to different extents. (And for some of what seems like thinking in words, perhaps the word-generation is more or less epiphenomenal -- though I'd expect it always has some value, e.g. in helping the short-term memory along.) I find that I use words in the same sort of way as I use diagrams or mathematical symbols: as a way to avoid losing track of what I'm thinking, and to enable some degree of rigour when it's needed.

Yes, there are situations when talking to yourself can usefully be considered "communication between two people", but those aren't the situations I had in mind.

comment by billswift · 2009-03-17T07:57:38.366Z · LW(p) · GW(p)

There are only two ways for the mind to consciously process information: language or images. Some people can apparently think clearly and precisely in images - Nikola Tesla and Temple Grandin spring immediately to mind. Other than visual images, language is the only way to think consciously and precisely; for this purpose, mathematics is a language.

Replies from: pjeby
comment by pjeby · 2009-03-17T16:00:27.723Z · LW(p) · GW(p)

You mean composers can't think consciously and precisely about sound? Chefs about taste? Perfumers about smell? Gymnasts about the feel of their moves?

Replies from: Annoyance
comment by Annoyance · 2009-03-17T19:34:52.487Z · LW(p) · GW(p)

They generally don't - at least, not in ways that they can communicate to others, and if they can't do that, why would we describe their thoughts as 'conscious' and 'precise'?

Replies from: pjeby
comment by pjeby · 2009-03-17T21:13:30.918Z · LW(p) · GW(p)

I don't see how "communicate to others" and "conscious/precise" are related. If something is unconscious, it can still be communicated unconsciously (e.g. body language). If something is imprecise, that doesn't stop it from being communicated. Conversely, just because something is conscious or precise doesn't mean it can be communicated, if there are no points of reference on the receiving end. If a chef or a gymnast tried to communicate with me about such matters, they would probably fail, but that doesn't mean the failure was on their end of the conversation -- and would have nothing to do with the consciousness or precision of the thoughts involved.

comment by Vladimir_Nesov · 2009-03-16T23:42:23.180Z · LW(p) · GW(p)

The definitions you specify for a word don't actually define it; they merely name a concept on your map. The concept is far richer than the "definition" by which you found it, and the lever of the word that you attached to it allows you to patch into the deeper machinery of your mind. You can use the levers yourself, to craft new structures with your own machinery.

Replies from: gjm, AndySimpson
comment by gjm · 2009-03-17T01:42:41.263Z · LW(p) · GW(p)

None of which makes it any less true that words-with-definitions are sometimes useful in private thought as well as in communication. For instance, technical terms in mathematics such as "transitive" or "uncountable" can be used robustly in lengthy chains of reasoning largely because they have precise definitions. The fact that when I use such a word (privately or publicly) I have plenty of mental machinery linked with it besides the bare definition doesn't stop it being a definition. (Perhaps you're using "define" in what seems to me to be an eccentric way, such that in fact essentially no words have actual definitions. Feel free, but I don't find that helpful.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-03-17T02:00:25.531Z · LW(p) · GW(p)

I think we agree; I'm not sure what distinction you are trying to draw in this comment. Consider chess: what is the definition of the knight's moves? There are the rules of the game that the actions of the player must follow, the distilled form of conclusions, and there is the overarching machinery of thought. The rules make sure that you stay within the game after however many moves you need, and the thought allows you to find the winning moves.

Replies from: gjm
comment by gjm · 2009-03-17T02:30:56.399Z · LW(p) · GW(p)

You seemed to be disagreeing with me, but declined to say just what your disagreement was. So I had to guess, and I tried to respond to the criticism I thought you were making. Now it appears that we are in agreement. Fair enough; what then was your point?

(My point, in case it wasn't obvious, was that I think Eliezer erred when he wrote that the only legitimate use of definitions is to ease communication; I think they are sometimes helpful in private thought too.)

Replies from: Annoyance
comment by Annoyance · 2009-03-17T19:36:01.859Z · LW(p) · GW(p)

"I think they are sometimes helpful in private thought too."

Here I think you're erring: definitions are absolutely necessary in conscious thought. Without them, you don't have conscious processing.

comment by AndySimpson · 2009-03-17T07:16:15.992Z · LW(p) · GW(p)

A definition is not merely a name on your map, it's the location in the greater scheme of the map, the longitude and latitude. A definition fixes a notion with respect to some other notions, all of which together form your machinery, your belief network, your map. This machinery may bear no relation to reality, but then, to me, the point of definitions is to be clear, not accurate.

comment by timtyler · 2009-03-17T08:12:19.981Z · LW(p) · GW(p)

Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".

In my opinion, Wikipedia puts things much better here:

Rationality is a central principle in artificial intelligence, where a rational agent is specifically defined as an agent which always chooses the action which maximises its expected performance, given all of the knowledge it currently possesses.

The advantage Wikipedia has is that it is talking about expected performance on the basis of the available information, not about actual performance. That emphasis is correct - rationality is (or should be) defined in terms of whether the operations performed on the available information constitute correct use of the tools of induction and deduction - and should not depend on whether the information the agent has is accurate or useful.

This has been discussed many times: there is a distinction between trying to win and winning.
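
To make the "expected performance" reading concrete, here is a minimal sketch (mine, not from the thread; the scenario, names, and numbers are all illustrative) of an agent that maximises expected utility under its beliefs, and can therefore be rational while still losing if those beliefs were fed misleading data:

```python
def expected_utility(action, belief, utility):
    """Expected utility of an action under a distribution over states."""
    return sum(p * utility(action, state) for state, p in belief.items())

def rational_choice(actions, belief, utility):
    """Pick the action that maximises expected utility given current beliefs."""
    return max(actions, key=lambda a: expected_utility(a, belief, utility))

# Toy example: a hostile agent has manipulated the evidence, so the
# agent's beliefs understate the danger; the true odds are very different.
belief = {"trap": 0.05, "safe": 0.95}   # what the agent believes
truth  = {"trap": 0.90, "safe": 0.10}   # the actual odds
payoff = {("enter", "safe"): 10, ("enter", "trap"): -100,
          ("stay", "safe"): 0, ("stay", "trap"): 0}
utility = lambda a, s: payoff[(a, s)]

print(rational_choice(["enter", "stay"], belief, utility))  # "enter"
print(expected_utility("enter", truth, utility))            # -89.0
# "enter" is the correct use of the available information, yet under the
# true odds it loses badly: trying to win is not the same as winning.
```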

Replies from: Annoyance
comment by Annoyance · 2009-03-17T21:01:01.180Z · LW(p) · GW(p)

Exactly. Rationality is a property of our understanding of our thinking, not of the thinking itself.

Being rational doesn't involve choosing correctly, it's about having a justified expectation that the choices you're making are correct.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-03-17T21:10:04.287Z · LW(p) · GW(p)

Being rational doesn't involve choosing correctly, it's about having a justified expectation that the choices you're making are correct.

Well, if the expectation is justified, you are choosing correctly.

Replies from: Annoyance
comment by Annoyance · 2009-03-18T02:12:51.884Z · LW(p) · GW(p)

Depends on how you look at it.

If the expectation is justified, then the choice is correct from your point of view. But it can easily be wrong in an absolute sense.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-03-18T02:24:05.162Z · LW(p) · GW(p)

If you are allowed to look at statements in a way that varies their meaning to the opposite, you may as well close your eyes. Justified means being supported by a powerful truth-engine, not being accompanied by a believed rationalization. If, "from my point of view", it is correct to expect to fly safely when I step out the window, that doesn't make it correct; this expectation won't be justified in the normal use of the word.

Replies from: timtyler, Annoyance
comment by timtyler · 2009-03-18T11:09:27.133Z · LW(p) · GW(p)

Are you not getting the point? Agents can correctly apply inductive and deductive reasoning, but draw the wrong conclusion - because of their priors, or because of misleading sensory data. Rationality is about reasoning correctly. It is possible to reason correctly and yet still do badly - for example if a hostile agent has manipulated your sense data without giving you a clue about what has happened. Maybe you could have done better by behaving "irrationally". However, if you had no way of knowing that, the behaviour that led to the poor outcome could still be rational.

Replies from: Cameron_Taylor, Vladimir_Nesov
comment by Cameron_Taylor · 2009-03-18T11:45:20.867Z · LW(p) · GW(p)

Good point, Tim: rational doesn't mean right.

Garbage in, garbage out.

comment by Vladimir_Nesov · 2009-03-18T13:17:32.162Z · LW(p) · GW(p)

I absolutely agree with this point. Rationality in this sense is that truth-engine I named in the comment you replied to: it's built for a range of possible environments, but it can fail in case of an unfortunate happenstance. That is as opposed to having an insane maintainer who is convinced that the engine works when in fact it doesn't, not just on the actual test runs but on the range of possible environments for which it's supposedly built. When you are 90% sure that something will happen, you expect it NOT to happen 1 time in 10.

comment by Annoyance · 2009-03-18T20:26:11.565Z · LW(p) · GW(p)

"If "from my point of view", it is correct to expect to safely fly when I step out the window, it doesn't make it correct, "

Yeah, but your "point of view" doesn't include just any stupid belief you have. If you could explicitly justify why you expected to fly when you stepped out that window, and trace that justification all the way back to elementary logic and fundamental observations, it would be totally rational for you to expect that.

It wouldn't be your fault if the "rules" suddenly changed so that you fell, instead.

comment by anonym · 2009-03-17T20:42:31.051Z · LW(p) · GW(p)

That's a great analysis, and it should bring more clarity to our discussions if we can all agree on that or modify it as necessary until we basically agree.

One thing I am wondering about though is that the analysis seems to present the two subspecies of rationality -- epistemic and instrumental -- as being on equal footing, or as somehow equally fundamental. (That's my reading based on labeling them as 1 and 2 and not indicating that either is more fundamental.)

It seems to me though that instrumental rationality is what we really, really mean by rationality, and what you have referred to as epistemic rationality is one particular (astoundingly powerful) technique of instrumental rationality.

There are always multiple ways of mapping the territory. We have no way of deciding between them other than in terms of instrumental rationality, by choosing the one that seems most useful for achieving our values.... We might call that most useful map truth or corresponding to reality, but it only acquires that status via instrumental rationality. Epistemic rationality thus depends on instrumental rationality, but the converse is not true.

comment by Nebu · 2009-03-17T17:00:37.910Z · LW(p) · GW(p)

Because of the recent flurry of arguments about "defining rationality", and because my karma has risen over 20, I was considering writing a post on the same topic as yours, Eliezer. There's a sort of "darn it" feeling that you beat me to the punch, but I'm also glad that you did, because your writing is much clearer and more elegant than mine. Plus, your linking to your past posts on OB on the subject is much more comprehensive than anything I could have accomplished.

My one comment is that I noticed you never used the terms "descriptivism" and "prescriptivism", while I almost always do when talking about, e.g., the absurdity of thinking that the contents of "the" dictionary (as if there is only one dictionary, or as if all dictionaries are in perfect agreement with each other) determine the meaning of words. Are you intentionally avoiding these terms, do you simply not find them useful, or is there some other reason?

comment by abigailgem · 2009-03-17T12:54:47.871Z · LW(p) · GW(p)

Eliezer: "Similarly, if you find yourself saying "The rational thing to do is X, but the right thing to do is Y" then you are almost certainly using one of the words "rational" or "right" in a way that a huge chunk of readers won't agree with. In this case - or in any other case where controversy threatens - you should substitute more specific language: "The self-benefiting thing to do is to run away, but I hope I would at least try to drag the girl off the railroad tracks""

Yes. Rational does not equal "sensible" or "putting self first".

So can we be rational in arguing about morality? If I decide that human life has value, I can argue from that prior, rationally, that it is Right to try and drag the girl off the railroad tracks.

I believe that human life has value, even though that is not a completely rigorous, defined statement of my belief about human life. I doubt I have the words to fully express my beliefs about the value of human life.

It is possible that I generalise "human life has value" from my own selfish needs: I do not like being alone for too long, and I would have to adjust and learn a great deal before I could survive without Society.

So I believe that for me to believe "human life has value" is Right, or at least permissible, but not necessarily Rational (epistemic or instrumental) in itself, though I can take it as an axiom, and argue rationally based upon it.

Or if my belief that "human life has value" derives rationally from "I will base my values on my own selfish needs" which derives from "I want to survive": in "I want to survive" there is a Want, which is not derived rationally from anything.

comment by Keerthana_SunshineRegiment · 2019-01-15T13:26:11.905Z · LW(p) · GW(p)

I'm new to LessWrong so pardon me if my question has an obvious answer or seems silly, and please let me know if there are flaws in my reasoning.

"The self-benefiting thing to do is to run away, but I hope I would at least try to drag the girl off the railroad tracks." In this context, is the rational choice different for different people?

I believe that rationality can mean different things to different people because people have different moral compasses - a set of values they believe are right or most important to them. If I place another human's life in high enough regard that I would try to drag the girl off the railroad tracks, and that instinct is greater than my need to run away (self-preservation), then I will end up trying to help the girl. If my priorities are reversed and I value self-preservation more, I will likely run away.

Now what if I'm the sort of person who values self preservation more and hence am likely to run away, but I WANT to be the kind of person who would stop and help the girl? I'm making a conscious effort to be selfless in life. What is the rational choice for me then? I understand that if I were to actually be in such a situation, I would not have the time to logically make up my mind about what to do but would simply act on instinct, but I'm still interested in understanding what the rational choice would be in that situation.

comment by [deleted] · 2009-05-27T20:40:05.403Z · LW(p) · GW(p)

Suppose I have a blind man and a sighted man. The blind man has a cataract that could be repaired with surgery. I tell them that a jar contains a very large number of pebbles, all of them either green or blue, and that there are twice as many of one color as the other. I pull out a random pebble, which turns out to be green, and show it to both men. I ask them to write down what they think the probability is that the next pebble I pull out will be green. The sighted man writes 51%; the blind man writes 50%. Who is more rational?

Replies from: Vladimir_Nesov, timtyler, Alicorn
comment by Vladimir_Nesov · 2009-05-27T20:56:10.934Z · LW(p) · GW(p)

You are being cryptic.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-27T21:20:05.113Z · LW(p) · GW(p)

The sighted man is executing an incorrect probability update on better information, leading him to a slightly higher expected score. I answer that the blind man is more rational, unless he has refused to repair the cataract for no apparent reason, in which case he is exhibiting a different, unusual, and in this case slightly more damaging form of irrationality.

Replies from: None, Vladimir_Nesov
comment by [deleted] · 2009-05-27T21:55:59.407Z · LW(p) · GW(p)

But if you define rationality as either "obtaining beliefs that correspond to reality as closely as possible" or "achieving your values", it seems that the sighted man has been more successful. I guess "believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory" is better, since the blind man has less evidence. I think the question now, though, is when a person "has" a given piece of evidence. What if I fail to recognize that a certain fact is evidence for a certain hypothesis? What if I do recognize this, but don't have the time to apply Bayes' law?

comment by Vladimir_Nesov · 2009-05-27T21:33:22.574Z · LW(p) · GW(p)

In other words, it's possible to construct and then resolve an arbitrary problem based on the given description.

comment by timtyler · 2009-05-27T21:51:20.147Z · LW(p) · GW(p)

The blind man should guess 50%. The sighted man should guess 5/9, about 56% (after seeing one green pebble, the posterior that green is the majority colour is 2/3, but the predictive probability for the next pebble is 5/9). So the blind man has made a good guess, given the information he has, while the sighted man has better information, has made a mess of his reasoning, but still has the more accurate figure.

How to quantify rationality? I don't know. Maybe model the methodology used in IQ tests. If you are comparing rationality across individuals, you should probably make sure they have the same access to the test information - or the results are likely to be skewed.
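
A quick check of these numbers (my own sketch, not part of the original thread), working the Bayes update explicitly:

```python
from fractions import Fraction

# Prior: equally likely that green or blue is the 2:1 majority colour.
prior = {"green_majority": Fraction(1, 2), "blue_majority": Fraction(1, 2)}

# Probability of drawing a green pebble under each hypothesis
# (the jar is very large, so one draw doesn't change the composition).
p_green = {"green_majority": Fraction(2, 3), "blue_majority": Fraction(1, 3)}

# Posterior after the sighted man sees one green pebble (Bayes' rule).
evidence = sum(prior[h] * p_green[h] for h in prior)
posterior = {h: prior[h] * p_green[h] / evidence for h in prior}
print(posterior["green_majority"])  # 2/3 -- green is probably the majority

# Predictive probability that the *next* pebble is green.
print(sum(posterior[h] * p_green[h] for h in posterior))  # 5/9, about 56%

# The blind man, with no colour information, keeps the prior:
print(sum(prior[h] * p_green[h] for h in prior))  # 1/2
```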

comment by Alicorn · 2009-05-27T20:42:51.936Z · LW(p) · GW(p)

Does it count if I want to call the blind man irrational for not having repaired his eyesight? Or are we pretending he has a good reason? Or would the intended example be the same if he had irreparable blindness?

comment by Cameron_Taylor · 2009-03-17T04:42:19.477Z · LW(p) · GW(p)

Thanks, Eliezer.

I needed appropriate names to specify which of those two I was referring to. I'll be sure to use 'epistemic' or 'instrumental' when the context demands. That'll save many a distracting explanatory sentence.

comment by MarsColony_in10years · 2015-02-17T21:41:20.184Z · LW(p) · GW(p)

When reading a textbook or technical work, I frequently use marginalia to comment on the work. I find it a useful tool to increase reading comprehension, force me to organize my thoughts, and allow me to return to that part of the book years later to use it as a reference. I'm reading through The Sequences, but since it is in digital form I am unable to make use of my usual practice. Instead, I intend to leave several comments such as this in the appropriate discussion threads. I was initially using a Word document, but have found it tedious to constantly transfer between computers and devices. If these comments and notifications are objectionable to anyone, I'll switch back. The FAQ says that it is worthwhile to comment on ancient posts and long-dead threads, so I took that as encouragement. I only have a couple of tangential comments on this particular piece, but I expect most future marginalia to be much more extensive.

It's worrying that people will falsely guess P("Bill plays jazz") < P("Bill plays jazz" & "Bill is an accountant"). If these were the profiles for two murder suspects, the jury could easily make a very bad judgment call. However, we are evolutionarily wired to be good at social profiling. I suspect that the error here is that people are reading this problem as if it were the type that we are good at. When all you have is a hammer, everything looks like a nail. For example, they might read the problem to mean P(A) < P(A + B), where + indicates "and/or" (logical disjunction). This would also explain why people tend to believe "if A implies B, then B implies A" (that correlation implies causation).
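
A small illustration of the conjunction rule (my sketch, with made-up numbers, not from the original comment): any world where the conjunction holds is also a world where the first conjunct holds, so P(A & B) can never exceed P(A).

```python
# Toy joint distribution over "Bill plays jazz" and "Bill is an accountant".
prob = {("jazz", "accountant"): 0.05,
        ("jazz", "not accountant"): 0.15,
        ("no jazz", "accountant"): 0.30,
        ("no jazz", "not accountant"): 0.50}

# P(jazz) sums over all jazz worlds, so it already includes the conjunction.
p_jazz = prob[("jazz", "accountant")] + prob[("jazz", "not accountant")]
p_jazz_and_acct = prob[("jazz", "accountant")]

print(p_jazz)           # 0.2
print(p_jazz_and_acct)  # 0.05 -- necessarily <= P(jazz)
assert p_jazz_and_acct <= p_jazz
```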

comment by timtyler · 2009-03-17T00:02:14.229Z · LW(p) · GW(p)

Heh: the royal "we".

Replies from: topynate
comment by topynate · 2009-03-17T01:03:54.751Z · LW(p) · GW(p)

That's petty. The purpose of such statements is to establish group norms, not assert high status. You're shocked that someone would create a community website and then propose to determine what sort of community would arise from it?

Replies from: timtyler
comment by timtyler · 2009-03-17T07:26:46.492Z · LW(p) · GW(p)

I am not sure you understood the intent of my comment. "The royal we" is a phrase with a technical meaning; see http://en.wikipedia.org/wiki/Pluralis_Majestatis. It seems like an accurate statement of the facts to me.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-17T08:14:10.511Z · LW(p) · GW(p)

http://www.odlt.org/ballast/pluralis_auctoris.html

Replies from: timtyler
comment by timtyler · 2009-03-17T09:14:17.832Z · LW(p) · GW(p)

Whatever it refers to, my immediate reaction was that the "we" doesn't seem to include me - which seems unfortunate, since - AFAICS - my usage is the more standard one. Anyway: your blog - lay down whatever terminology you like.

Replies from: Marshall
comment by Marshall · 2009-03-17T18:32:54.366Z · LW(p) · GW(p)

Is this Eliezer's blog?!

I thought it was OUR blog - as in our community and not Eliezer's community.

And yes, the more I think about it, the more I think a FAQ which defines rationality as "we" use it needs this comment section.

I do not find Eliezer's definition in itself sufficient. Defining rationality will always be a work in progress, and new suggestions should be added. As I see it, the present definition limits itself to a mechanical rationality (as is Eliezer's wont) and excludes "searching" - the act of imagination.