Deciding on our rationality focus

post by Kaj_Sotala · 2009-07-22T06:27:56.562Z · LW · GW · Legacy · 51 comments

I have a problem: I'm not sure what this community is about.

To illustrate, recently I've been experimenting with a number of tricks to overcome my akrasia. This morning, a succession of thoughts struck me:

  1. The readers of Less Wrong have been interested in the subject of akrasia; maybe I should make a top-level post about my experiences once I see what works and what doesn't.
  2. But wait, that would be straying into the territory of traditional self-help, and I'm sure there are already plenty of blogs and communities for that. It isn't about rationality anymore.
  3. But then, we have already discussed akrasia several times; isn't this also on-topic?
  4. (Even if this was topical, wouldn't a simple recount of "what worked for me" be too Kaj-optimized to work for very many others?)

Part of the problem seems to stem from the fact that we have a two-fold definition of rationality:

  1. Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed "truth" or "accuracy", and we're happy to call it that.
  2. Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".

If this community were only about epistemic rationality, there would be no problem. Akrasia isn't related to epistemic rationality, and neither are most self-help tricks. Case closed.

However, by including instrumental rationality, we have expanded the sphere of potential topics to cover practically anything. Productivity tips, seduction techniques, the best ways for grooming your physical appearance, the most effective ways to relax (and by extension, listing the best movies / books / video games of all time), how you can most effectively combine different rebate coupons and where you can get them from... all of those can be useful in achieving your values.

Expanding our focus isn't necessarily a bad thing, by itself. It will allow us to attract a wider audience, and some of the people who then get drawn here might afterwards also become interested in e-rationality. And many of us would probably find the new kinds of discussions useful in their personal lives. The problem, of course, is that epistemic rationality is a relatively narrow subset of instrumental rationality - if we allow all instrumental rationality topics, we'll be drowned in them, and might soon lose our original focus entirely.

There are several different approaches as far as I can see (as well as others I can't see):

I honestly don't know which approach would be the best. Do any of you?

51 comments

comment by steven0461 · 2009-07-22T06:40:45.869Z · LW(p) · GW(p)

Upvoted for not being about gender.

If you ask me, the term "instrumental rationality" has been subject to inflation. It's not supposed to mean better achieving your goals, it's supposed to mean better achieving your goals by improving your decision algorithm itself, as opposed to by improving the knowledge, intelligence, skills, possessions, and other inputs that your decision algorithm works from. Where to draw the line is a matter of judgment but not therefore meaningless.

Replies from: Eliezer_Yudkowsky, Fetterkey, timtyler
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-22T07:04:54.117Z · LW(p) · GW(p)

Agreed. Systematic instrumental rationality is what we're interested in. Better general methods. Akrasia and the problem of internal conflicts fit this template; making better coffee does not, however useful you may find it.

Replies from: temp532522, Jonathan_Graehl
comment by temp532522 · 2009-07-22T08:25:14.643Z · LW(p) · GW(p)

This is indeed a deep rabbit hole.

Could anyone here recommend areas where one could attempt to discuss some of society's more pressing issues using the very general methods described here? Politics and making better coffee?

While I agree such posts would not fit here, such discussions would serve as practice. Ideally, the community hosting them would be similar to this one, with hard evidence and constructive criticism as the norm.

comment by Jonathan_Graehl · 2009-07-22T08:04:26.364Z · LW(p) · GW(p)

Assuming promoted articles are subject to your veto, I don't see much harm in original posts of exceptional quality, even if they are either overly meta-LW or overly domain specific. Of course, one must draw the line at pictures of kittens.

Replies from: thomblake
comment by thomblake · 2009-07-22T13:43:44.509Z · LW(p) · GW(p)

Are we really bad enough at voting that we can't be trusted to downvote pictures of kittens?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2009-07-22T16:45:39.474Z · LW(p) · GW(p)

I'm not certain that anyone can very reliably be trusted to downvote e.g. the failcat sequence.

Replies from: thomblake
comment by thomblake · 2009-07-22T16:59:14.333Z · LW(p) · GW(p)

Come on, if that doesn't demonstrate a relevant failure of rationality I don't know what does.

ETA: (insert standard convention for tagging this as an attempt at humor)

comment by Fetterkey · 2009-07-22T06:45:30.946Z · LW(p) · GW(p)

I strongly agree, and I'd like to add that I definitely see a place for this sort of instrumental rationality here.

comment by timtyler · 2009-07-22T10:31:11.622Z · LW(p) · GW(p)

Isn't "reason" the best name for what you are talking about there?

http://en.wikipedia.org/wiki/Reason can be thought of as including induction and deduction - but not empirical generate-and-test procedures.

Is it a good idea to redefine "instrumental rationality" to mean the same as this existing term?

Replies from: thomblake
comment by thomblake · 2009-07-22T13:42:33.720Z · LW(p) · GW(p)

I'm unclear on why you'd think there'd be a bright-line distinction between "reason" and "rationality". In most cases they seem to be usable interchangeably in ordinary language.

Replies from: timtyler
comment by timtyler · 2009-07-22T16:06:22.128Z · LW(p) · GW(p)

The combination of inductive reasoning and deductive reasoning seems like a natural category - which I think needs a name. You could call this "logical reasoning" - but "reasoning" seems better to me. The term covers both logical and illogical reasoning - though of course the latter sort is not of very much use. What it doesn't cover is perception, goals, or experimental generate-and-test.

If it is out there, better terminology would be welcomed.

Replies from: Cyan
comment by Cyan · 2009-07-22T17:07:57.803Z · LW(p) · GW(p)

Well, Bayesian probability

comment by djcb · 2009-07-22T17:45:12.715Z · LW(p) · GW(p)

I'd like to add a third kind of rationality that seems popular in these circles: "recreational rationality". This would include things like the Prisoner's Dilemma, Newcomb's Paradox, the Monty Hall Problem, the "rationality quotes" series, and so forth.

Even though they are seldom useful in real-world decision making, they are simply interesting to many of the people who visit LW, I suspect -- including myself.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2009-07-22T18:03:16.609Z · LW(p) · GW(p)

the Prisoner's Dilemma

comment by JamesAndrix · 2009-07-23T07:51:35.322Z · LW(p) · GW(p)

Upon reflection, this is exactly what sub-reddits are for. It should be trivial to turn on this functionality in the LW code, if that's a path we want to go down.

Replies from: jimrandomh, matt, RobinZ
comment by jimrandomh · 2009-07-26T14:49:21.992Z · LW(p) · GW(p)

I don't think LW has a large enough volume of posts to start subdividing it yet.

comment by matt · 2009-07-26T10:20:56.226Z · LW(p) · GW(p)

Technical agreement from one of the devs. If you can get more upvotes on your comment or Eliezer's attention, we can turn this on quickly.

comment by RobinZ · 2009-07-23T15:17:27.860Z · LW(p) · GW(p)

How does that work, exactly? I can't find the info on it looking at Reddit.com.

Replies from: JamesAndrix
comment by JamesAndrix · 2009-07-23T17:31:44.265Z · LW(p) · GW(p)

Across the top is a bar that will take you to some of the more popular subreddits: politics, pics, funny, etc. On Reddit anyone can create a subreddit; here we would probably just use some preset categories. The default front page draws from some set of the subreddits (here it would probably be all of them), but users can go to the subreddit pages to see only posts in that category. Users on Reddit can subscribe and unsubscribe to subreddits to determine which ones are eligible to show on their individual front pages. A logged-in user might go to Reddit and only see posts from the science, technology, and programming subreddits.
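The filtering being described is simple. Here is a minimal sketch of the behaviour (the post titles and category names are made up for illustration; this is not the actual Reddit or LW code):

    # Minimal sketch of subreddit-style filtering (hypothetical data, not the real codebase).
    posts = [
        {"title": "Generalizing From One Example", "category": "epistemic"},
        {"title": "My anti-akrasia experiments",   "category": "instrumental"},
        {"title": "Best coffee brewing methods",   "category": "off-topic"},
    ]

    def front_page(subscribed_categories):
        # A user's front page shows only posts from categories they subscribe to.
        return [p["title"] for p in posts if p["category"] in subscribed_categories]

    print(front_page({"epistemic", "instrumental", "off-topic"}))  # default view: everything
    print(front_page({"epistemic"}))                               # a user who unsubscribed from the rest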

Replies from: RobinZ, Alicorn
comment by RobinZ · 2009-07-23T17:46:13.766Z · LW(p) · GW(p)

Understood, thanks!

comment by Alicorn · 2009-07-26T15:38:04.352Z · LW(p) · GW(p)

How would adding this to Less Wrong interact with RSS feeds?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2009-07-26T18:38:22.836Z · LW(p) · GW(p)

The grandparent explains how subreddits work. Meanwhile over here one of the software developers working on Less Wrong says, "If you can get more upvotes on your comment or Eliezer's attention, we can turn [subreddits] on quickly".

In the parent of this comment, Alicorn asks, "How would adding this to Less Wrong interact with RSS feeds?"

Until one of the software developers gives a definitive answer, allow me to give a speculative answer. It would probably create more RSS feeds. Pro: more choices of what feeds to subscribe to. Con: subscribing and unsubscribing to feeds has a cost, namely the time and attention of the reader.

comment by colinmarshall · 2009-07-22T13:40:12.816Z · LW(p) · GW(p)

We seem to have a population here that already cares, and deeply, about rationality. I do trust them to upvote whatever has a lot to do with rationality and downvote whatever has too little to do with it. In fact, I'd go so far as to submit that we're doing something wrong if there aren't enough off-topic-ish, net-negative-karma posts; it would show that posters aren't taking quite enough risks as regards widening rationality's domain. I'm weary of the PUA and overly self-help-y talk, sure, but seeing nothing like it around here would be the dead canary in the coal mine.

comment by John_Maxwell (John_Maxwell_IV) · 2009-07-23T02:03:54.142Z · LW(p) · GW(p)

Most self-help type stuff sucks. I consider self-help stuff at its best more useful than epistemic rationality stuff at its best, and I am optimistic that Less Wrong can produce self-help stuff at its best.

comment by CannibalSmith · 2009-07-22T14:34:40.664Z · LW(p) · GW(p)

Just post it and see if it gets upvoted. We've got voting for a reason.

Replies from: eirenicon
comment by eirenicon · 2009-07-22T15:09:37.432Z · LW(p) · GW(p)

Voting is not moderation. It signals that an article is of interest, not that it is on-topic. AFAIK, you can't vote an article below 0, which means you can't distinguish between lack of interest, controversy, and inappropriateness. With that in mind, accurately assessing if an article is suitable to post is crucial to maintaining a healthy baseline of relevance so we can at least eliminate the latter as a factor.

Replies from: John_Maxwell_IV, CannibalSmith
comment by John_Maxwell (John_Maxwell_IV) · 2009-07-23T04:00:17.204Z · LW(p) · GW(p)

Voting is not moderation. It signals that an article is of interest, not that it is on-topic.

I don't see the problem with a community originally formed to discuss one issue discovering that all their members are also interested in some other issue and beginning to intensively discuss it. If an article is of interest, why isn't that sufficient?

comment by CannibalSmith · 2009-07-22T19:37:18.534Z · LW(p) · GW(p)

If we want detailed information about how our article is faring we can read the comments.

Replies from: eirenicon
comment by eirenicon · 2009-07-22T19:56:53.936Z · LW(p) · GW(p)

Subjects that are totally inappropriate for Less Wrong can still garner a vast and potentially interesting number of comments (I've seen comments on PUA that really don't belong here, and yet sparked massive threads). There may even be a complete lack of criticism about how misplaced it may be, if it only attracts a certain audience. However, just because it generates interest here doesn't mean it belongs here.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-07-23T04:05:47.818Z · LW(p) · GW(p)

By "does not belong" you presumably mean "does not fit the general pattern of content that came before it as I perceived it". Why do you think it is valuable for sites to maintain the same general pattern of content?

It's not that hard to hide threads. There is a little minus button next to every author's byline. The amount of damage an off-topic thread can do to those who are not interested is quite minimal, but the amount it could help those who are interested is practically unbounded.

Replies from: JamesAndrix
comment by JamesAndrix · 2009-07-23T08:03:29.311Z · LW(p) · GW(p)

It's easy to think that any one thread is easy to hide, but online forums tend to devolve. We are a species that watches breakups on TV; if you don't maintain SOME pattern of content, then common content will bleed in and eventually dominate.

comment by kurige · 2009-07-22T08:42:10.693Z · LW(p) · GW(p)

I would prefer a variation of bullet point number 3:

  • Allow i-rationality discussion, but promote only when it is an application of a related, tightly coupled e-rationality concept.

I am here for e-rationality discussion. It's "cool" to know that deodorant is most effective when applied at night, before I go to bed, but that doesn't do anything to fundamentally change the way I think.

comment by cousin_it · 2009-07-22T06:53:56.970Z · LW(p) · GW(p)

Upon some thought, I'd like the site to concentrate on e-rationality and throw out i-rationality. PUA, marketing, akrasia etc. have plenty of other sites dedicated to them. We should strive to make our own unique contribution in the general direction set by Eliezer and Robin on OB. I feel such a decision would make LW more interesting on average.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-07-23T03:54:41.962Z · LW(p) · GW(p)

Most akrasia sites suck.

comment by Jonii · 2009-07-22T06:52:39.734Z · LW(p) · GW(p)

I thought of making a post about agreeing and disagreeing (and overall discussion), but then I encountered the same kind of problem: I wasn't sure if it was on-topic, and when thinking further, I had trouble pinpointing the exact topic I should be posting on.

I'm still going to make that post, though. Eventually. I trust that you'll protect your garden, in case one of the many possible things goes wrong.

My two cents on the actual topic here (disclaimer: I'm a newb): I think the difference between i- and e-rationality is kind of like the difference between math and physics. Even if you knew everything about the world as it is (a complete understanding of the physics), you'd want efficient math to make use of that information. And even if you had all the mathematical understanding there ever could be, but lacked any information about the world, the math would be useless. Winning is everything, and for that you need both instrumental and epistemic rationality.

comment by [deleted] · 2009-07-22T07:15:32.461Z · LW(p) · GW(p)

The concept "achieving your values" doesn't deserve the term "instrumental rationality". If it does, then, as you point out, works about instrumental rationality are merely works about how to do stuff. You're giving a fancy new name to an old concept.

ETA: Not that that's always exactly what we mean when we say "instrumental rationality", of course ...

How about the given definition of "epistemic rationality"? This is also really general: it's how to know stuff. Granted, that's precisely what being less wrong means, but we're not interested in general education. Granted again, the top-rated post of all time, "Generalizing From One Example", is definitely epistemic rationality but not obviously any other type of rationality.

So, here I propose some other definitions of "rationality":

Aumann rationality: a person is Aumann rational if they are rational (don't interpret this circularly!), they believe other people are Aumann rational, and other people believe they are Aumann rational. Perfect Aumann rationality causes people to never disagree with each other, but it's a spectrum. Eliezer Yudkowsky is relatively Aumann rational; people on Less Wrong are expected to be quite Aumann rational with each other; people in political debates have very little Aumann rationality.

Rational neutrality: though people who are rational-neutral discard evidence regarding statements, as any intelligent being must, their decision whether to discard a piece of evidence or not is not based on the direction/magnitude of it--if they ignore an observation, they do so without first seeing what it is.

Krasia: quite unrelated to any other type of rationality, people with high krasia are good at going from believing that an action would result in high expected utility to actually taking that action.

Replies from: pjeby, FrankAdamek, thomblake
comment by pjeby · 2009-07-22T18:01:05.854Z · LW(p) · GW(p)

people on Less Wrong are expected to be quite Aumann rational with each other

I expect that anyone who expected this has already been quite disappointed. ;-)

comment by FrankAdamek · 2009-07-22T15:47:20.417Z · LW(p) · GW(p)

people who are rational-neutral discard evidence regarding statements, as any intelligent being must

I feel like I should be able to find this out on my own, but I've had no success so far. Does "evidence regarding statements" refer to statements that are evidence-regarding, or evidence that regards statements? Either way I can't figure out an obvious reason to reject such things. Is it the idea that evidence shouldn't be discussed on any aspect beyond validity? I feel I'm missing something, many thanks to anyone who can throw me a link or other resource.

Replies from: None
comment by [deleted] · 2009-07-22T21:55:27.213Z · LW(p) · GW(p)

(If this post is too long, read only the last paragraph.)

Evidence that regards statements. I guess the "regarding statements" bit was redundant. Anyway, let me try to give some examples.

First, let me postulate a guy named Delta. Delta is an extremely rational robot who, given the evidence, always comes up with the best possible conclusion.

Andy the Apathetic is presented with a court case. Before he ever looks at the case, he decides that the probability the defendant is guilty is 50%. In fact, he never looks at the case; he tosses it aside and gives that 50% as his final judgement. Andy is rational-neutral, as he discarded evidence regardless of its direction; his probability is useless, but if I told Delta how Andy works and Andy's final judgement, Delta would agree with it.

Barney the Biased is presented with the same court case. Before he ever looks at the case, he decides that the probability that the defendant is guilty is 50%. Looking through the evidence, he decides to discard everything suggesting that the defendant is innocent; he concludes that the defendant has a 99.99% chance of being guilty and gives that as his final judgement. Barney is not rational-neutral, as he discarded evidence with regard to its direction; his probability is almost useless (but not as useless as Andy's), and if I told Delta how Barney works and Barney's final judgement, Delta might give a probability of only 45%.

Finally, Charlie the Careful is presented with the same court case. Before he ever looks at the case, he decides that the probability that the defendant is guilty is 50%. Looking through the evidence, he takes absolutely everything into account, running the numbers and keeping Bayes' law between his eyes at all times; eventually, after running a complete analysis, he decides that the probability that the defendant is guilty is 23.14159265%. Charlie is rational-neutral, as he discarded evidence regardless of its direction (in fact, he discarded no evidence); if I told Delta how Charlie works and Charlie's final judgement, Delta would agree with it.

So, here's another definition of rational neutrality I came up with by writing this: you are rational-neutral if, given only your source code, it's impossible to come up with a function that takes one of your probability estimates and returns a better probability estimate.
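For concreteness, here is a toy simulation of that test. Everything below - the agents' names, the evidence model, and all the numbers - is my own illustrative assumption, not anything established in this thread. The idea: because we can read and rerun Barney's source code, we can learn a fixed correction function from his probability estimates to better ones, while the same trick buys nothing against Charlie. "Better" is measured by the Brier score (mean squared error of the stated probability against the actual 0/1 outcome).

    # Toy test of "rational neutrality": an agent fails if, knowing only its
    # source code, we can build a fixed function that maps its probability
    # estimates to systematically better ones. All numbers are illustrative.
    import random
    random.seed(0)

    def make_case():
        """A random case: the hidden truth plus five noisy pieces of evidence."""
        guilty = random.random() < 0.5
        mean = 1.0 if guilty else -1.0
        return guilty, [random.gauss(mean, 2.0) for _ in range(5)]

    def to_prob(log_odds):
        return 1.0 / (1.0 + 2.718281828459045 ** (-log_odds))

    def charlie(evidence):
        # Uses every item; e/2 is the correct log-likelihood ratio for this model.
        return to_prob(sum(e / 2.0 for e in evidence))

    def barney(evidence):
        # Silently discards anything that points toward innocence.
        return to_prob(sum(e / 2.0 for e in evidence if e > 0))

    def recalibrator(agent, n=50000, bins=20):
        """Because we can rerun the agent's source code, we can learn how often
        it is actually right at each confidence level, and correct for it."""
        hits, counts = [0.0] * bins, [0] * bins
        for _ in range(n):
            guilty, ev = make_case()
            b = min(int(agent(ev) * bins), bins - 1)
            hits[b] += guilty
            counts[b] += 1
        table = [hits[b] / counts[b] if counts[b] else (b + 0.5) / bins
                 for b in range(bins)]
        return lambda p: table[min(int(p * bins), bins - 1)]

    def brier(agent, correct=lambda p: p, n=50000):
        total = 0.0
        for _ in range(n):
            guilty, ev = make_case()
            total += (correct(agent(ev)) - guilty) ** 2
        return total / n

    for name, agent in [("Barney", barney), ("Charlie", charlie)]:
        fix = recalibrator(agent)
        print(name, "raw:", round(brier(agent), 3),
              "corrected:", round(brier(agent, fix), 3))
    # Barney's corrected score is noticeably better: he is not rational-neutral.
    # Charlie's is essentially unchanged: no fixed correction improves on him.

The correction function here takes only the agent's probability estimate as input (a lookup in the learned table), which matches the definition above; the agent's source code is used only to build that table.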

Replies from: ilyas, None, FrankAdamek
comment by ilyas · 2009-07-22T23:55:12.362Z · LW(p) · GW(p)

It might be useful to revise this concept to account for computational resources (see AI work on 'limited rationality', e.g. Russell and Wefald's "Do the Right Thing" book).

Replies from: None
comment by [deleted] · 2009-07-23T20:58:09.571Z · LW(p) · GW(p)

I'll try my best to get my hands on a copy of that book.

comment by [deleted] · 2009-07-23T06:08:28.977Z · LW(p) · GW(p)

Upon thinking about that second definition of rational neutrality, I find myself thinking that that can't be right. It's identical to calibration. And even a rational-neutral agent that's been "repaired" by applying the best possible probability estimate adjustment function will still return the same ordinal probabilities: Barney the Biased, even after adjustment, will return higher probabilities for statements he is biased toward than statements he is biased against.

I would have said this:

So, here's another definition of rational neutrality I came up with by writing this: you are rational-neutral if, given only your source code and your probability estimates, it's impossible for someone to come up with better probability estimates.

...but that definition doesn't rule out the possibility that an agent would look at your probability estimates, figure out what the problem is, and come up with a better solution on its own. In the extreme case, no agent would be considered rational-neutral unless it had a full knowledge of all mathematical results. That's not what I want; therefore, I stick by my original definition.

comment by FrankAdamek · 2009-07-22T23:30:46.043Z · LW(p) · GW(p)

It took two read-throughs to get this, but I'm fairly sure that's the concept and not your handling. Thanks for the explanation!

comment by thomblake · 2009-07-22T14:03:51.618Z · LW(p) · GW(p)

I'm going to start using "krasia". I hadn't encountered it before but apparently it's had some currency in epistemology.

comment by Dagon · 2009-07-22T17:48:49.663Z · LW(p) · GW(p)

I'm interested in both flavors of discussion, and especially in the relationship between them. It's hard to understand WHY consistent and world-matching epistemic beliefs don't AUTOMATICALLY cause winning instrumental behavior, and I think the difference is vital in figuring out the limits (and how to extend them) of our puny brains.

I'm also leery of too much meta control, especially when it seems to only hide rather than solve the underlying conflicts that are causing discomfort among some participants.

For these reasons, I prefer to avoid official policy and see what gets up-voted. There are interesting and thought-provoking posts on a wider variety of topics than I can predict in advance.

Replies from: pjeby, Jonii
comment by pjeby · 2009-07-22T18:16:30.338Z · LW(p) · GW(p)

It's hard to understand WHY consistent and world-matching epistemic beliefs don't AUTOMATICALLY cause winning instrumental behavior

Because beliefs are individually like little machines that operate independently. Just because you have a belief that's true, doesn't mean you don't also have another ten thousand false beliefs you're not even aware of having.

comment by Jonii · 2009-07-23T07:47:53.113Z · LW(p) · GW(p)

It's hard to understand WHY consistent and world-matching epistemic beliefs don't AUTOMATICALLY cause winning instrumental behavior

Knowing the location of each house is necessary to solve the traveling salesman problem, but the step of solving it is not trivial.
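To make the gap concrete, here is a tiny brute-force sketch (the house coordinates are made up): even with perfect knowledge of where every house is, finding the shortest tour means searching n! orderings, which blows up fast.

    from itertools import permutations
    from math import dist, factorial

    houses = [(0, 0), (2, 5), (6, 1), (5, 7), (9, 3), (3, 9), (8, 8)]

    def tour_length(order):
        # Total length of the closed tour visiting the houses in the given order.
        return sum(dist(houses[a], houses[b])
                   for a, b in zip(order, order[1:] + order[:1]))

    best = min(permutations(range(len(houses))), key=tour_length)
    print("best tour:", best, "length:", round(tour_length(best), 2))
    print("orderings examined:", factorial(len(houses)))  # 5040 here; 10 houses already means 3,628,800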

comment by JamesAndrix · 2009-07-22T16:56:26.851Z · LW(p) · GW(p)

I think it's pretty easy to sort out instrumental rationality from general how-tos: instrumental rationality is about your preferences, options, and decision-making process.

If a how-to article is primarily about how to do things in your skull, then it probably fits here. If the coupon article couldn't be rewritten to talk about arbitrary credit widgets used by pebblesorters, then it may not fit.

comment by MichaelVassar · 2009-07-22T16:50:24.022Z · LW(p) · GW(p)

Honestly, ordinary self-help doesn't do anything like cost-benefit analysis, even implicitly, so it doesn't try to help people achieve their values. Business literature does often do implicit cost-benefit analysis. The best video games are very unlikely to make any list of

Replies from: djcb
comment by djcb · 2009-07-22T20:54:36.996Z · LW(p) · GW(p)

Indeed; ordinary self-help books seem to be specifically written to match what people like to hear: anyone can achieve anything, and it doesn't take all that much effort. Support for that usually comes in the form of anecdotes or quotes from famous people. A favorite is Einstein's "Imagination is more important than knowledge", which sums up the genre pretty well: it refers to some smart person, it tells people something they like to hear -- but it is really misleading.

Of course you can pick up ideas from self-help books and see what works for you. Fight akrasia with PCT or the 7 Habits or whatever; that might be quite useful. It has, however, nothing to do (I hope) with LW-style rationality.

comment by CarlShulman · 2009-07-22T12:56:02.829Z · LW(p) · GW(p)

"Allow i-rationality discussions, but require a stricter criteria for promoting top-level posts on the topic." I buy this.

"Allow i-rationality discussions, but try to somehow define the term so that silly things like listing the best video games of all time get excluded." I want generality in i-rationality discussions.