Taboo "rationality," please.

post by MBlume · 2009-03-15T22:44:13.162Z · LW · GW · Legacy · 54 comments

Related on OB: Taboo Your Words

I realize this seems odd on a blog about rationality, but I'd like to strongly suggest that commenters make an effort to avoid using the words "rational," "rationality," or "rationalist" when other phrases will do.  I think we've been stretching the words to cover too much meaning, and it's starting to show.

Here are some suggested substitutions to start you off.

Rationality:

Rationalist:

Are there any others?

54 comments

comment by JamesAndrix · 2009-03-16T22:16:38.374Z · LW(p) · GW(p)
* one who reliably wins
* one who can reliably be expected to speak truth

Yeah, it must have been getting pretty bad if we got to the point where the word could mean two contradictory things. ;-)

comment by mark_spottswood · 2009-03-16T13:07:09.526Z · LW(p) · GW(p)

A good reason to take this suggestion to heart: The terms "rationality" and "rational" have a strong positive value for most participants here—stronger, I think, than the value we attach to words like "truth-seeking" or "winning." This distorts discussion and argument; we push too hard to assert that things we like or advocate are "rational," in part because it feels good to associate our ideas with the pretty word.

If you particularize the conversation—e.g., you are likely to get more money by one-boxing on Newcomb's problem, or you are likely to hold more accurate beliefs if you update your probability estimates based solely on the disagreement of informed others—then it is less likely that you will grow overattached to particular procedures of analysis that you have previously given an attractive label.

comment by Roko · 2009-03-15T23:57:31.888Z · LW(p) · GW(p)

Definitions of words usually only make much sense in some restricted part of reality, where certain things correlate with certain other things.

For "rationality", it is the case that a certain collection of human mental habits tend to correlate with success in a wide field of endeavors, and that the primary reason for this is that the human mind is an example of a somewhat broken truth seeker (i.e. a partially accurate map), and in many cases mending the map causes the human in question to better achieve his/her goals.

However, there are some contexts in which "mending" the map does not cause the human to better achieve his/her goals. These situations (such as that of the man who would do best to assign probability zero to his wife cheating on him) reflect the fact that the human mind is not a perfect disembodied AGI. We leak our preferences out in our body language and mannerisms. We have preferences that supervene on our own mental states (making the map a self-referential one). We find it hard to motivate ourselves to act, and sometimes believing that things are better than they really are motivates us more effectively.

It is thus the case that rationality-as-winning can be very weird in general. There are possible worlds where dark-side epistemology is good for you. Imagine an impoverished medieval peasant being confronted with the truth about his existence - religion was probably the only thing that made life even remotely worth living. It is only the empirical content of this world that makes rationality-as-winning look a lot like rationality-as-accurate-map, and thus allows us to further our goals in a particularly general and simple way. Eliezer has observed, in his writing on dark-side epistemology and on reality as an interconnected web, that if you abandon rationality-as-accurate-map because you think it will be good for you (i.e. you delude yourself), you can never be sure that there isn't something crucial you've missed - something like cryonics - that will change the entire calculation. This tells me that once the optimal policy on self-delusion looks (naively) like "have an almost accurate map", the true optimal policy (taking black swans like cryonics into account) is to have a completely accurate map. Thus the shift from a world where map-accuracy and winningness are not the same thing to one where they're identical a posteriori is a nonlinear shift, and in my mind this legitimizes the use of the term "rationality".

It is my opinion that the balance towards rationality-as-winning being rationality-as-accurate-map has only just tipped. The possibility of a technological singularity, of technological immortality, of technological utopia, and the insurance that cryonics offers really do tip the balance, I think. Without these, even in a comfortable world like that of the US/EU today, most people's rationality-as-winning probably involves a lot of irrationality-as-inaccurate-map, with dark-side epistemology to perform the cover-up.

Replies from: gjm, PhilGoetz, PhilGoetz
comment by gjm · 2009-03-16T09:02:05.283Z · LW(p) · GW(p)

What a ... convenient coincidence ... that we live just past the transition where perfect rationality becomes the optimal strategy. Doesn't it seem a little too convenient?

Replies from: JulianMorrison, Roko
comment by JulianMorrison · 2009-03-16T13:17:36.025Z · LW(p) · GW(p)

It does make some sense - there is a common cause.

Why are we here on this website? Because science got good.

Why is rationality a personal win? Because science got good.

Replies from: Roko
comment by Roko · 2009-03-16T13:30:46.050Z · LW(p) · GW(p)

snap

comment by Roko · 2009-03-16T12:31:20.045Z · LW(p) · GW(p)

Yes, actually, I had the same thought myself last night and was about to edit.

Now, as a matter of fact it does seem likely to me that the turnaround was between 1950 and today. This does seem like a "too good to be true" coincidence... suggesting the alternative hypothesis that I am busy rationalizing.

But I've reviewed the object-level arguments, and they seem solid.

I think that the explanation for half of the coincidence is that science is a common cause of both the rationality arts and the improved standard of living that makes them worth applying. JulianMorrison simultaneously made the same suggestion.

The other half of the coincidence (why aren't we alive waaaaay after it became rational to be rational?) is actually not such a coincidence... there is nothing interesting to be learned from observing that we're alive just after technology becomes quite an important factor (other than the usual doomsday argument/negative singularity stuff, which we won't go into).

EDIT: the possibility that we are rationalizing should cause us to decrease our confidence somewhat in the position stated above, though the fact that JulianMorrison thought of the same explanation independently of me should be noted. We should now consider more seriously the alternative hypothesis that "perfect rationality" is still not optimal, and that deliberately deluding ourselves in some significant part of our lives is a good idea. [I stand by my position that there were definitely some periods where rationality was a real downer.] Thanks to gjm and Johnicholas for pointing this out.

Replies from: Johnicholas, Roko
comment by Johnicholas · 2009-03-16T15:00:44.382Z · LW(p) · GW(p)

I voted it down, and this is my reasoning:

  1. The post starts out by saying "me too". This is not helpful.
  2. The post admits that there's evidence of rationalization, and rather than reducing confidence in the conclusion, it merely reaffirms the original claim.
  3. The post throws very strong anthropic and singularity talk around incautiously and (in my opinion) inexpertly. These are controversial and nearly off-topic ideas, and discussion should admit the controversy and treat them cautiously.
Replies from: thomblake, Roko
comment by thomblake · 2009-03-16T15:15:09.035Z · LW(p) · GW(p)
  1. This point does not seem to degrade the comment.

  2. Rationalization is the standard method of generating explanations - first, one acts, then one comes up with a set of reasons why that happened, which are not necessarily causally related to the action. Wish I had a link to the relevant studies handy.

  3. It's ridiculous to expect a comment to even mention the controversy around the Singularity and the 'anthropic argument'. If you don't know what they are, you can look them up in all their controversial glory. If you do, then you already know they're controversial. And if something is 'nearly off-topic' then, by the definition of 'nearly', it's not off-topic, so I'm not sure what your point is there. And finding a singularity fan in this crowd should not surprise you; it's another perspective from which to approach the question, and it's clearly the perspective of the poster.

comment by Roko · 2009-03-16T15:53:45.011Z · LW(p) · GW(p)

I'd be interested to hear a more detailed critique of how I used [EDIT] anthropic reasoning

Replies from: Johnicholas, Johnicholas
comment by Johnicholas · 2009-03-16T16:09:05.703Z · LW(p) · GW(p)

From About Less Wrong:

To prevent topic drift while this community blog is being established, please avoid mention of the following topics on Less Wrong until the end of April 2009:

  1. The Singularity
  2. Artificial General Intelligence
comment by Johnicholas · 2009-03-16T16:57:53.430Z · LW(p) · GW(p)

Nick Bostrom's introduction to the Doomsday Argument is an example of smart, cautious discussion of anthropic reasoning.

You should take the fact that the best argument you can find for the proposition "rationality is optimal now, but it wasn't in 1950" is an appeal to the Doomsday Argument as evidence that your brain is in rationalization mode.

Replies from: Roko
comment by Roko · 2009-03-16T17:09:52.124Z · LW(p) · GW(p)

But ... (and now I'm genuinely curious) why aren't we living in a period way after rationality became the optimal choice? JulianMorrison's suggestion and mine provide the lower bound, but what is the upper bound?

Replies from: Johnicholas, Vladimir_Nesov
comment by Johnicholas · 2009-03-16T17:14:37.881Z · LW(p) · GW(p)

To falsify the conjunction "Rationality is optimal now" and "Rationality was not optimal previously", you only need to falsify one of the conjuncts. For example, "Rationality is not optimal now" or "Rationality was optimal previously".

EDIT: I said that awkwardly. To change your mind regarding "Rationality is optimal now and rationality was not optimal previously", you would have to change your mind regarding one of the conjuncts. For example, you could accept the statement "Rationality is not optimal now."
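
(In symbols - standard propositional logic, nothing specific to this thread:

\[ \lnot(p \land q) \iff (\lnot p \lor \lnot q), \]

so rejecting a conjunction only commits you to rejecting at least one of its conjuncts.)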

Robin Hanson has posted on the costs of rationality.

Replies from: Roko
comment by Roko · 2009-03-16T17:39:13.821Z · LW(p) · GW(p)

So, anthropic reasoning involves using facts about how the observer came into being to "explain" certain supposed coincidences, and thereby to avoid giving undue weight to alternative hypotheses that might otherwise need to be invoked to explain the coincidence.

In this case, there is a coincidence between our asserting that rationality is good for us and our being the first generation, out of a long line of humans, for whom this is the case. (And, indeed, the same argument applies spatially as well as temporally; rationality is probably a bad move for many very disadvantaged people in the world today.)

The alternative hypothesis under consideration is “rationality is not good for you, you are just rationalizing”.

So, I assume that I am sampled from the set of people who ask the question "is it optimal to be rational, or to delude myself?". What is the probability of me answering "yes"? Well, JulianMorrison argues (correctly, IMO) that there is a systematic correlation between being able to ask the question and answering "yes", so the probability is not worryingly small. Nothing unusual has happened here.

So we should not be suspicious that we are rationalizing just because we answered "yes".

Secondly, what is the probability of finding myself in the first (or second) generation of humans for which the answer to this question is "yes"? In the case where there are zillions of similar humans in the future, this probability could be very small. But... there's no interesting alternative hypothesis to explain this coincidence, so we can't conclude anything particularly interesting.
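
(A rough formalization of the selection effect above, with notation introduced here purely for illustration: let \(A\) be "the agent asks whether rationality is optimal" and \(Y\) be "the agent answers yes". The coincidence looks alarming only if we consider \(P(Y)\) unconditionally; conditioning on \(A\),

\[ P(Y \mid A) \gg P(Y), \]

because the habits that lead someone to pose the question at all also make a "yes" answer likely - so observing \(Y\) lends little support to the alternative hypothesis "we are just rationalizing".)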

Replies from: thomblake, Johnicholas
comment by thomblake · 2009-03-16T17:46:30.514Z · LW(p) · GW(p)

Yeah, you're basically making the doomsday argument. Note that you could use the same reasoning about any question that you expect to come up from time to time, for instance "do I like cheese?"

Replies from: Roko
comment by Roko · 2009-03-16T18:03:46.918Z · LW(p) · GW(p)

Correct. I've edited my comment since you commented. Read the corrected version and critique...

comment by Johnicholas · 2009-03-16T17:45:46.727Z · LW(p) · GW(p)

Please reread my post. I think I was editing while you were reading my post.

comment by Vladimir_Nesov · 2009-03-16T17:15:42.132Z · LW(p) · GW(p)

Are you asking for an explanation of why anthropic reasoning is bunk?

comment by Roko · 2009-03-16T14:37:45.064Z · LW(p) · GW(p)

I'd love to know why someone downvoted this...

(this comment also downvoted to 0)

ROFL someone is out to get me, I can see ;-0

Aaah! negative karma! Everybody hates me! I'm considering killing myself if this comment gets downvoted any more...

comment by PhilGoetz · 2009-03-16T02:20:46.047Z · LW(p) · GW(p)

Thus the shift from a world where map-accuracy and winningness are not the same thing to one where they're identical a posteriori is a nonlinear shift, and in my mind this legitimizes the use of the term "rationality".

Where's the nonlinearity, and why does nonlinearity legitimize the term "rationality"?

Replies from: Roko
comment by Roko · 2009-03-16T13:40:13.627Z · LW(p) · GW(p)

The nonlinearity is the following: once you realize the dark side epistemology/interconnected web of reality material, and you already think that the most "winning" course of action is to have an almost accurate map, you should decide that the most "winning" way is to have a fully accurate map. The intuition is that it is not winningness-promoting to deliberately make a small subset of your beliefs inaccurate.

This legitimizes the term because it is then empirically the case that winningness and map accuracy coincide exactly, so we can afford to use the terms interchangeably.

EDIT:

I think that it is fair to use the term "rationality" most of the time. In certain discussions, such as this one, it may be necessary to resort to "this behavior will track truth better" and "this behavior will make you win more".

Replies from: PhilGoetz
comment by PhilGoetz · 2009-03-16T16:21:48.487Z · LW(p) · GW(p)

Thanks; that explanation makes sense.

comment by PhilGoetz · 2009-03-16T16:23:15.680Z · LW(p) · GW(p)

I voted this up, just for the line "It is my opinion that the balance towards rationality-as-winning being rationality-as-accurate-map has only just tipped." This is plausible and interesting.

comment by John_Maxwell (John_Maxwell_IV) · 2009-03-17T00:08:17.942Z · LW(p) · GW(p)

Human psychology is so weird that having correct beliefs actually works against winning reliably. For example, having correct beliefs requires you to make a special effort to seek evidence that your beliefs are wrong, making you less certain than others and therefore less determined.

comment by timtyler · 2009-03-16T07:01:45.677Z · LW(p) · GW(p)

Re: Are there any others?

Well, yes. Rationality, the way I think of it, isn't about "winning" or "truth-seeking".

If you think of a cybernetic diagram of an organism - like this:

Sensory input -> Computations -> Motor output

...then I think "rationality" needs to be confined to the middle unit. It is a computational process. You might need some motor output in order to be able to detect it - but that isn't part of rationality itself.
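
A minimal sketch of that decomposition (my own illustration in Python, not anything from the comment; the function names are invented for the example):

    # The cybernetic decomposition: perception feeds a computation,
    # which drives action. On this view, "rationality" is a property
    # of `compute` alone, not of the sensors or the motors.
    def agent(sensory_input, compute, motor):
        plan = compute(sensory_input)  # rationality lives here
        return motor(plan)

    # e.g. agent("obstacle ahead", lambda s: "turn left", print)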

Truth-seeking is a goal. It is one goal among the many that it is possible to rationally pursue. Rational agents often adopt truth-seeking as a proximate goal - whatever their ultimate aims - but embracing false beliefs is sometimes rational too; it depends on what your goal is.

For me, rationality has a lot to do with the valid use of inductive and deductive reasoning in pursuit of a goal.

Replies from: AndySimpson
comment by AndySimpson · 2009-03-17T02:22:50.912Z · LW(p) · GW(p)

I think "probability updates under Bayes' rule" is very clever and highly accurate, and it gets to just what you're talking about. Also, since this thread is trending towards everyone defining (or at least characterizing) rationality for themselves, here goes: rationality is what happens when evidence is recognized by a consciousness, subjected to ordered thought, and used to form or modify beliefs.

That's as close as I can get to "correct" for myself with a few minutes of thought and natural language. It seems to fit with the notion of rationality as a computational process.
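
(For reference, the update rule being invoked is the standard statement of Bayes' theorem:

\[ P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}, \]

the revised credence in hypothesis \(H\) once evidence \(E\) has been recognized.)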

comment by PhilGoetz · 2009-03-16T02:17:39.872Z · LW(p) · GW(p)

I think it's fine to use the terms if you say what you mean by them, and not especially bad to use them even if you don't. We can't define all our terms. Why single those out as problematic? More potential for confusion lurks in the words we're not watching as closely.

comment by Annoyance · 2009-03-16T01:38:24.522Z · LW(p) · GW(p)

I will not cease using perfectly good words.

I will, however, ask that people be prepared to define and explain the words when they use them. Words are problematic only when they become empty signifiers, labels attached to nothing.

Replies from: MBlume, mark_spottswood
comment by MBlume · 2009-03-16T01:40:35.990Z · LW(p) · GW(p)

Words are problematic only when they become empty signifiers, labels attached to nothing.

This is precisely what I'm worried about -- that eventually we'll be using "rationality" to mean "things that LW readers like".

Replies from: Annoyance
comment by Annoyance · 2009-03-16T01:42:01.783Z · LW(p) · GW(p)

Y'know how we get around that? Insist on definitions. They're still pretty thin on the ground here. And the one that's had the most publicity is a very poor match for the generally accepted meaning of the term.

Replies from: Johnicholas
comment by Johnicholas · 2009-03-16T01:54:07.052Z · LW(p) · GW(p)

Normally, people manage to communicate using our informal, muddly, complicated, natural language abilities. Sometimes this breaks down when we're discussing value-laden or highly abstract concepts.

Breaking words down into definitions doesn't solve the problem - the components you define with need to be communicated, too. This lowest-level communication has to rely on informal, non-defined primitives.

Tabooing words reboots the informal process of achieving communication, without the fuss of arguing about whether a definition is correct, or queries about which definition you are using.

Replies from: Annoyance, Vladimir_Nesov
comment by Annoyance · 2009-03-16T18:49:49.799Z · LW(p) · GW(p)

"Normally, people manage to communicate using our informal, muddly, complicated, natural language abilities."

I think that, in actuality, they don't. Or rather, they communicate very little: mostly by indicating positions that the listener is already familiar with.

Ever try explaining a truly new idea to someone? With most people, I find that if they don't already have a referent, they simply can't understand, because they're not used to extracting complex information from natural language.

Replies from: Johnicholas
comment by Johnicholas · 2009-03-16T19:21:43.321Z · LW(p) · GW(p)

We're in agreement. The position that I was arguing against is something like: "People can't communicate unless they first define their terms." That would be an infinite regress; the only possibility would be that people never manage to communicate.

Replies from: Annoyance
comment by Annoyance · 2009-03-16T19:36:38.928Z · LW(p) · GW(p)

Okay, I'll accept that.

I offer a restatement: people can't communicate at a complex and abstract level unless their words are first defined in terms of words with already-accepted and -understood meanings.

If I begin to talk about gilxorfibbin without explaining what that is, it's unlikely the context will make it possible for you to know what I'm discussing.

comment by Vladimir_Nesov · 2009-03-16T02:32:47.600Z · LW(p) · GW(p)

The problem is that definitions are not hierarchical: you never get to the lowest level, because there isn't one. You need to choose a way to the target concept that communicates it as unambiguously as possible. The words spoken by one person guide another across his own map, pointing to deeper and deeper concepts that require nontrivial arrangements of words to single out, or even to build anew.

Some words are broken, and lead the listener into the swamps. We should avoid these words and use other, healthier landmarks instead. Sometimes it requires a lengthy detour to get around the swamps, but the road is not necessarily any bumpier - or, conversely, any smoother - than the original one would have been.

comment by mark_spottswood · 2009-03-16T17:30:45.572Z · LW(p) · GW(p)

Words can become less useful when they attach to too much as well as too little. A perfectly drawn map that indicates only the position and exact shape of North America will often be less useful than a less-accurate map that gives the approximate location of its major roads and cities. Similarly, a very clearly drawn map that does not correspond to the territory it describes is useless. So defining terms clearly is only one part of the battle in crafting good arguments; you also need terms that map well onto the actual territory and that do so at a useful level of generality.

The problem with the term "rationality" isn't that no one knows what it means; there seems to be wide agreement on a number of tokens of rational behavior and a number of tokens of irrational behavior. Rather, the problem is that the term is so unspecific and so emotionally loaded that it obstructs rather than furthers discussion.

comment by AndySimpson · 2009-03-17T02:13:26.099Z · LW(p) · GW(p)

I think we should be careful of elegant variation, which can be awkward and introduce ambiguity. Rather than simply using sobriquets or synonyms like "truth-seeking", "lucidity", or "ratiocination", we might do better to interrogate each other on what rationality is, and make frequent, almost-repetitive reference to the essentials of rationality. This would work especially well for people with idiosyncratic definitions.

comment by PhilGoetz · 2009-03-16T14:17:26.115Z · LW(p) · GW(p)

There was a (tiny) movement about 20 years ago to get people to stop using the word "is". Usually, an "is" renders a judgement while concealing the reasons: "Cindy is sweet. The GPL is stupid and destructive."

I think I can make as good a case for banning "is" as for banning "rationality". And if we should ban "is", what shouldn't we ban? Can you name any words that shouldn't be banned?

Maybe we should just point.

Replies from: MichaelHoward, Nebu
comment by MichaelHoward · 2009-03-16T20:12:54.517Z · LW(p) · GW(p)

For more on this sort of thing, see E-Prime.

comment by Nebu · 2009-03-16T20:19:11.506Z · LW(p) · GW(p)

There was a (tiny) movement about 20 years ago to get people to stop using the word "is". Usually, an "is" renders a judgement while concealing the reasons: "Cindy is sweet. The GPL is stupid and destructive."

I think I can make as good a case for banning "is" as for banning "rationality". And if we should ban "is", what shouldn't we ban? Can you name any words that shouldn't be banned?

I find your arguments a bit muddled and confusing: I can make as good a case for banning genocide as I can for banning pleasure (e.g. by making an equally poor case for each). That doesn't mean I've established that either one should be banned; nor does it mean that I've established that they are equally "ban-worthy"; nor have I established that the reasons for banning one are in any way related to the reasons for banning the other.

It seems like your argument for banning "is" is that it can be used to "render a judgement while concealing the reasons". But if people think it's appropriate to render judgment without concealing reasons, then there's no reason to ban "is", correct?

Contrast this with the argument for banning "rational": people here are using it to mean different things, and we're having a lot of confusion due to not knowing which meaning is intended.

Even if we accept that both arguments are equally logically sound, we might choose to ban one without banning the other based on our values (e.g. if we very highly value the non-concealing of reasons, but don't value lack of confusion, we may choose to ban "is" without banning "rational").

comment by MichaelHoward · 2009-03-16T00:00:09.182Z · LW(p) · GW(p)

Are there any others?

Eliezer's not saying the obvious so I will...

Hitting small targets in large search spaces to produce coherent real-world effects that steer reality into regions that are higher in your preference ordering.
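
(One way to make that quantitative - a sketch following Eliezer's earlier "Measuring Optimization Power" post, with notation introduced here: score an outcome by how small a target it hits,

\[ \mathrm{OP} = -\log_2 \frac{|\{\, s \in S : s \succeq s^{*} \,\}|}{|S|}, \]

the number of bits needed to single out the states at least as preferred as the achieved state \(s^{*}\) within the space \(S\) of possible states.)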

Replies from: Liron
comment by Liron · 2009-03-16T06:44:20.938Z · LW(p) · GW(p)

You're thinking of "optimization".

Replies from: MichaelHoward
comment by MichaelHoward · 2009-03-16T14:24:25.679Z · LW(p) · GW(p)

It's still a vital part of being rational, at least in some uses of the word - which is the point of the post: to point out the different meanings people might intend when they use the word.

comment by billswift · 2009-03-16T15:20:09.117Z · LW(p) · GW(p)

Very good idea. Too many here seem to have idiosyncratic definitions of these terms.

Replies from: Annoyance
comment by Annoyance · 2009-03-16T18:48:06.717Z · LW(p) · GW(p)

Wouldn't it be better if people with idiosyncratic definitions just made clear what they were?

Replies from: Sideways
comment by Sideways · 2009-03-16T20:42:04.440Z · LW(p) · GW(p)

That's exactly what tabooing "rationality" does--with the added benefit of bringing all definitions out into the open. The conventional definition of rationality should be as explicit as idiosyncratic definitions. Furthermore, people in general are bad at noticing when they have idiosyncratic ideas and tend to assume everyone uses words in the same way they do.

comment by Andy_McKenzie · 2009-03-16T21:03:05.245Z · LW(p) · GW(p)

I agree with this point generally, but it is difficult to find specific examples because they will be heavily context-dependent. "Ratiocinative" is one probably underused word, as is "lucid".

comment by Kaj_Sotala · 2009-03-15T23:47:50.535Z · LW(p) · GW(p)

Rationalist: one who seeks to only have contagious beliefs, that is, beliefs which are tightly correlated/entangled with reality.

Replies from: Nebu, Roko
comment by Nebu · 2009-03-16T20:09:21.549Z · LW(p) · GW(p)

Rationalist: one who seeks to only have contagious beliefs, that is, beliefs which are tightly correlated/entangled with reality.

Note that while I agree with Eliezer that "rational beliefs are contagious", I disagree with the claim that "contagious beliefs are rational". See, e.g., religion.

comment by Roko · 2009-03-16T00:05:23.036Z · LW(p) · GW(p)

meta: how does markup work here? I can't get links to work using HTML notation...

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-16T00:09:22.685Z · LW(p) · GW(p)

When commenting (that is, while writing) you should see a link marked "Help" under the edit box. Click it.

Replies from: Roko
comment by Roko · 2009-03-16T00:11:05.495Z · LW(p) · GW(p)

thanks ;-0 [Fails obvious rationality test... ]

comment by zaph · 2009-03-16T19:41:36.621Z · LW(p) · GW(p)

Error reduction?

Probability or possibility pruning?

Parachute venting? (The mechanism for letting hot air out of a hot-air balloon to bring it back to the ground.)