Making Rationality General-Interest

post by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-24T22:02:55.576Z · LW · GW · Legacy · 114 comments

Contents

  Introduction
  Is making rationality general-interest a good goal?
  What has to change for this to happen?
  What's already happening? 
  Conclusion

Introduction

Less Wrong currently represents a tiny, tiny, tiny segment of the population. In its current form, it might only appeal to a tiny, tiny segment of the population. Basically, the people who have a strong need for cognition, who are INTx on the Myers-Briggs (65% of us as per 2012 survey data), etc.

Raising the sanity waterline seems like a generally good idea. Smart people who believe stupid things, and go on to invest resources in stupid ways because of it, are frustrating. Trying to learn rationality skills in my 20s, when a bunch of thought patterns are already overlearned, is even more frustrating.

I have an intuition that a better future would be one where the concept of rationality (maybe called something different, but the same idea) is normal. Where it's as obvious as the idea that you shouldn't spend more money than you earn, or that you should live a healthy lifestyle, etc. The point isn't that everyone currently lives debt-free, eats decently well, and exercises–that isn't the case–but these are normal things to do if you're a minimally proactive person who cares a bit about your future. No one has ever told me that doing taekwondo to stay fit is weird and culty, or that keeping a budget will make me unhappy because I'm overthinking things.

I think the questions of "whether we should try to do this" and "if so, how do we do it in practice?" are both valuable to discuss, and interesting.

 

Is making rationality general-interest a good goal?

My intuitions are far from 100% reliable. I can think of a few reasons why this might be a bad idea:

1. A little bit of rationality can be damaging; it might push people in the direction of too much contrarianism, or something else I haven't thought of. Since introspection is imperfect, knowing a bit about cognitive biases and the mistakes that other people make might make people actually less likely to change their mind–they see other people making those well-known mistakes, but not themselves. Likewise, rationality taught only as a tool or skill, without any kind of underlying philosophy of why you should want to believe true things, might cause problems of a similar nature to martial arts skills taught without the traditional, often non-violent philosophies–it could result in people abusing the skill to win fights/debates, making the larger community worse off overall. (Credit to Yan Zhang for the martial arts metaphor.)

2. Making the concepts general-interest, or just growing too fast, might involve watering them down or changing them in some way such that the value of the LW microcommunity is lost. This could be worse for the people who currently enjoy LW even if it isn't worse overall. I don't know how easy this would be to avoid.

3. It turns out that rationalists don't actually win, and x-rationality, as Yvain terms it, just isn't that amazing over-and-above already being proactive and doing stuff like keeping a budget. Yeah, you can say stuff like "the definition of rationality is that it helps you win", but if, in real life, all the people who deliberately try to increase their rationality end up worse off overall, by their own standards (or do equally well, but with less time left over for other fun pursuits), than the people who aim for their life goals directly, I want to know that.

4. Making rationality general-interest is a good idea, but not the best thing to be spending time and energy on right now because of Mysterious Reasons X, Y, Z. Maybe I only think it is because of my personal bias towards liking community stuff (and wishing all of my friends were also friends with each other and liked the same activities, which would simplify my social life, but probably shouldn't happen for good reasons). 

Obviously, if any of these are the case, I want to know about it. I also want to know about it if there are other reasons, off my radar, why this is a terrible idea.

 

What has to change for this to happen?

I don't really know, or I would be doing those things already (maybe, akrasia allowing). I have some ideas, though.

1. The jargon thing. I'm currently trying to compile a list of LW/CFAR jargon as a project for CFAR, and there are lots of terms I don't know. There are terms that I've realized in retrospect that I was using incorrectly all along. This presents a large initial hurdle for someone interested in learning about rationality via the LW route, and it might also contribute to the looking-like-a-cult thing.

2. The gender ratio thing. This has been discussed before, and it's a controversial thing to discuss, and I don't know whether arguing about it in the comments will produce any solutions. It seems pretty clear that if you want to appeal to the whole population, and a group that represents 50% of the general population only represents 10% of your participants (also as per 2012 survey data, see link above), there's going to be a problem somewhere down the road.

My data point: as a female on LW, I haven't experienced any discrimination, and I'm a bit baffled as to why the gender ratio is so skewed in the first place. Then again, I've already been through the filter of not caring if I'm the only girl at a meetup group. And I do hang out in female-dominated groups (i.e. the entire field of nursing), and fit in okay, but I'm probably not a typical example to generalize from.

3. LW currently appeals to intelligent people, or at least people who self-identify as intelligent; according to the 2012 survey data, the self-reported IQ median is 138. This isn't surprising, and isn't a problem until you want to appeal to more than 1% of the population. But intelligence and rationality are, in theory, orthogonal, or at least not the same thing. If I suffered a brain injury that reduced my IQ significantly but didn't otherwise affect my likes and dislikes, I expect I would still be interested in improving my rationality and think it was important, perhaps even more so, but I also think I would find it frustrating. And I might feel horribly out of place.

4. Rationality in general has a bad rap; specifically, the Spock thing. And this isn't just affecting whether or not people think Less Wrong the site is weird; it's affecting whether they want to think about their own decision-making.

This is only what I can think of in 5 minutes...

 

What's already happening?

Meetup groups are happening. CFAR is happening. And there are groups out there practicing skills similar or related to rationality, whether or not they call it the same thing.

 

Conclusion

Rationality, Less Wrong and CFAR have, gradually over the last 2-3 years, become a big part of my life. It's been fun, and I think it's made me stronger, and I would prefer a world where as many other people as possible have that. I'd like to know whether people think that's a) a good idea and b) feasible, and c) if so, how to do it practically.

114 comments

Comments sorted by top scores.

comment by Rob Bensinger (RobbBB) · 2013-07-24T23:30:13.242Z · LW(p) · GW(p)

HPMoR is very very popular and broadly appealing (as rationality lit goes), so that seems to be our biggest leverage point for spreading LW to people who aren't already academics or programmers, like the secularist and wider geek/nerd communities.

Currently, we seem to be making only a little use of that resource for sustained, active, explicit community-building outreach. LW is not optimized for community discussions between any people who haven't already spent a few months or years studying mathematics, programming, a very specific flavor of analytic philosophy, or past LW posts like the Sequences. We're catching tons of fish friends, and throwing nearly all of them back in the ocean. The only non-LW community that seems targeted at HPMoR people is the reddit, but we're doing almost nothing to make that reddit useful for rationality training, or appealing to any people who want to do more than geek out about the details of the plot of HPMoR itself. Plus reddit is not a great environment in general if we want to experiment, or to appeal to whoever LW doesn't appeal to.

I suggest: Start a new website as a community hub for HPMoR fans, and more generally for the demographic 'I'm not very mathy but I think science is neato and want advice and social support for self-improvement and for making the world a better place.' Perhaps the website could be to CFAR what LW is to MIRI. Whereas (future-)LW focuses on high-level rationality techniques, speculative philosophical and mathematical innovation, and programming/AI, the MoR site focuses more on the low-hanging fruit of rationality, the stuff that's relatively well-established or at least ready for beta or late-stage-alpha testing, with a stronger emphasis on community, niceness, skill cultivation, and MoR geekery.

We could call it, say, Reason Academy, and capitalize on the 'I wish I were at Hogwarts!' HP impulse without doing anything that explicitly raises copyright problems. ('Rationality Academy' makes sense for an HPMoR tie-in, but I think has limited crossover appeal because of the Spocky connotations and because it sounds clunky.) More message boards, more centralized easy-to-access low-barrier-to-understanding reads, a happier and friendlier aesthetic, more games and (eventually) a more structured, reward-centered learning environment. Is this a good thing to shoot for?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-07-25T08:38:28.524Z · LW(p) · GW(p)

I think that this sounds like a great idea, though it also seems like one that would take a lot of effort to put together.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-25T08:56:17.969Z · LW(p) · GW(p)

Two thoughts:

  1. We can start small, maybe with just a message board (to replace and expand the functionality of the reddit, and perhaps of LW's open threads). A few message boards aren't hard to maintain. Then once the boards are active enough, start experimenting with expanding the functionality.

  2. We can wait before doing much. Launching something like this right after (or right before) HPMoR concludes strikes me as a rather good idea. The site would then function as HPMoR's Pottermore.

It's worth thinking in more detail about what exactly we'd want out of something like this, and about risks (e.g., making LW look even more foreboding). Also, we should brainstorm features we'd implement on forums or games if we could, that aren't easily implemented on LW or the reddit. E.g., rules that encourage people more to ask questions (and get answers where on LW we might just default to 'go read the Sequences'), be friendly and goofy, express positive thoughts/feelings, and build strong emotional social connections, including ones that might be too cumbersome to make general practice on LW.

comment by Qiaochu_Yuan · 2013-07-24T23:40:10.838Z · LW(p) · GW(p)

I have an intuition that a better future would be one where the concept of rationality (maybe called something different, but the same idea) is normal.

I am highly skeptical of this happening with human psychology kept constant, basically because I think rationality is de facto impossible for humans who are not at least ~2 standard deviations smarter than the mean. (I also suspect that most LWers have bad priors about what mean intelligence looks like, including me.)

I think a more achievable goal is to make the concept of rationality cool. Being a movie star, for example, is cool but not normal. Rationality not being cool prevents otherwise sufficiently smart people from exploring it. My model of what raising the sanity waterline looks like in the short- to medium-term is to start from the smartest people (these are simultaneously the easiest and the highest-value people to make more rational) and work down the intelligence ladder from there.

Replies from: RobbBB, Swimmer963, johnswentworth, satt
comment by Rob Bensinger (RobbBB) · 2013-07-24T23:59:10.492Z · LW(p) · GW(p)

I think 'can we make everyone rational?' is probably the wrong question. Better questions:

  1. How much more rational could we make 2013 average-IQ people, by modifying their cultural environment and education? (That is, without relying on things like surgical or genetic modification.) What's the realistic limit of improvement, and when would diminishing returns make investing in further education a waste?

  2. How do specific rationality skills vary in teachability? Are there some skills that are especially easy to culturally transmit (i.e., 'make cool' in a behavior-modifying way) or to instill in ordinary people?

  3. How hard would the above approaches be? How costly is the required research and execution?

  4. In addition to the obvious direct benefits of being more rational (which by definition means 'people make decisions that get them more of what they want' and 'people's beliefs are better maps'), how big are indirect benefits like Qiaochu's 'smart people see rationality as more valuable', or 'governments and individuals fund altruism (including rationality training) more effectively', or 'purchasing and voting habits are more globally beneficial'?

Suppose we were having this discussion 200 or 500 or 1000 years ago instead, and the topic was not 'Can we make everyone rational?' but 'Can we make everyone literate?' or 'Can we make everyone a non-racist?' or 'Can we make everyone irreligious?'. I think it's clear in retrospect that those aren't quite the right questions to be asking, and it's also clear in retrospect that appeals to intelligence levels, as grounds for cynical skepticism, would have been very naïve.

At this point I don't think we have nearly enough data to know all the rationality skills IQ sets a hard limit on, or whether people at a given IQ level are anywhere near those limits. Given that uncertainty, we should think seriously about the virtues and dangers of a world where LW-level rationality is as common as, today, literacy or religious disengagement is.

Replies from: Qiaochu_Yuan, Swimmer963
comment by Qiaochu_Yuan · 2013-07-25T01:14:47.952Z · LW(p) · GW(p)

I think we can go very far in the direction of spreading habits and memes that cause more life success than current habits and memes, but I want to distinguish this from spreading rationality. The difference I see between them is analogous to the difference between converting people to a religion and training religious authority figures (although this analogy might prime someone reading this comment in an unproductive direction, and if so, ignore it).

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-25T01:41:14.275Z · LW(p) · GW(p)

Could you say more about what distinguishes 'religious authority figures' in this analogy? Are they much more effective and truth-bearing than most people? Is their effectiveness much more obvious and dramatic (and squick-free), making them better role models? Are they more self-aware and reflective about how and why their rationality skills work? Are they better at teaching the stuff to others?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-25T01:43:36.412Z · LW(p) · GW(p)

The distinction I'm trying to make is between giving people optimized habits and memes as a package that they don't examine and giving people the skills to optimize their own habits and memes (by examining their current habits and memes). It's the latter I mean when I refer to spreading rationality, and it's the latter I expect to be quite difficult to do to people who aren't above a certain level of intelligence. It's the former I don't want to call spreading rationality; I want to call it something like "optimizing culture."

Replies from: RobbBB, Lumifer
comment by Rob Bensinger (RobbBB) · 2013-07-25T02:14:59.105Z · LW(p) · GW(p)

What you call "rationality" is what I'd call "metarationality". Conflating the two is understandable at this point because (a) we'd expect the people who explicitly talk about 'rationality' to be the people interested in metarationality, and (b) our understanding of measuring and increasing rationality is so weak right now (probably even weaker than our understanding of measuring and increasing metarationality) that we default to thinking more about metarationality than about rationality. Still, I'd like to keep the two separate.

I'm not sure which of the two is more important for us to spread. Does CFAR see better results from the metarationality it teaches (i.e., forming more accurate beliefs about one's rationality, picking the right habits and affirmations for improving one's rationality), or from the object-level rationality it teaches?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-25T02:23:54.039Z · LW(p) · GW(p)

I don't think I'm talking about metarationality, but I might be (or maybe I think that rationality just is metarationality). Let me be more specific: let's pretend, for the sake of argument, that the rationalist community finds out that jogging is an optimal habit for various reasons. I would not call telling people they should jog (e.g. by teaching it in gym class in schools) spreading rationality. Spreading rationality to me is more like giving people the general tools to find out what object-level habits, such as jogging, are worth adopting.

The biggest difference between what I'm calling "rationality" and what I'm calling "optimized habits and memes" is that the former is self-correcting in a way that the latter isn't. Suppose the rationalist community later finds out that jogging is in fact not an optimal habit for various reasons. To propagate that change through a community of people who had been given a round of optimal habits and memes looks very different from propagating that change through a community of people who had been given general rationality tools.

Replies from: Kaj_Sotala, RobbBB
comment by Kaj_Sotala · 2013-07-25T09:02:02.022Z · LW(p) · GW(p)

How about habits and norms like:

  • Consider it high status to change one's mind when presented with strong evidence against one's old position
  • Offer people bets on beliefs that are verifiable and which they hold very strongly
  • When asked a question, state the facts that led you to your conclusion, not the conclusion itself
  • Encourage people to present the strongest cases they can against their own ideas
  • Be upfront about when you don't remember the source of your claim

(more)

It feels like it would be possible to get ordinary people to adopt at least some of these, and that their adoption would actually increase the general level of rationality.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-25T19:07:58.350Z · LW(p) · GW(p)

I'm skeptical that these kinds of habits and norms can actually be successfully installed in ordinary people. I think they would get distorted for various reasons:

  • The hard part of using the first habit is figuring out what constitutes strong evidence. You can always rationalize to yourself that some piece of evidence is actually weak if you don't feel, on a gut level, like knowing the truth is more important than winning arguments.

  • There are several hard parts of using the second habit, like not getting addicted to gambling. Also, when people with inaccurate beliefs are consistently getting swindled by people with accurate beliefs, you're training the former to stop accepting bets, not to update their beliefs. This might still be useful for weeding out bad pundits, but then the pundit community doesn't actually have an incentive to adopt this habit.

  • The hard part of using the third habit is remembering what facts led you to your conclusion. Also, you can always cherrypick.

And so forth. These are all barriers I expect people with high IQ to deal with better than people with average IQ.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-07-26T06:36:16.416Z · LW(p) · GW(p)

You're probably right, but even distorted versions of the habits could be more useful than not having any at all, especially if the high-IQ people were more likely to actually follow their "correct" versions. Of course, there's the possibility of some of the distorted versions being bad enough to make the habits into net negatives.

comment by Rob Bensinger (RobbBB) · 2013-07-25T02:31:04.970Z · LW(p) · GW(p)

I would not call telling people they should meditate (e.g. by teaching it in health class in schools) spreading rationality. Spreading rationality to me is more like giving people the general tools to find out what object-level habits, such as meditating, are worth adopting.

I think it's an (unanswered) empirical question whether meta-level (or general) or object-level (or specific) instruction is the best way to make people rational. Meditation might be an indispensable part of making people more rational, and it might be more efficient (both for epistemic and instrumental rationality) than teaching people more intellectualized skills or explicit doctrines. Rationality needn't involve reasoning, unless reasoning happens to be the best way to acquire truth or victory.

On the other hand, if meditation isn't very beneficial, or if the benefits it confers can be better acquired by other means, or if it's more efficient to get people to meditate by teaching them metarationality (i.e., teaching them how to accurately assess and usefully enhance their belief-forming and decision-making practices) and letting them figure out meditation's a good idea on their own, then I wouldn't include meditation practice in my canonical Rationality Lesson Plan.

But if that's so it's just because teaching meditation is (relatively) inefficient for making people better map-drawers and agents. It's not because meditation is intrinsically unlike the Core Rationality Skills by virtue of being too specific, too non-intellectualized, or too non-discursive.

ETA: Meditation might even be an important metarational skill. For instance, meditation might make us better at accurately assessing our own rationality, or at selecting good rationality habits. Being metarational is just about being good at improving your general truth-finding and goal-attaining practices; it needn't be purely intellectual either. (Though I expect much more of metarationality than object-level rationality to be intellectualized.)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-25T02:37:15.679Z · LW(p) · GW(p)

Meditation was probably an unusually bad example for me to make the point I wanted with; sorry about that. I'm going to replace it with jogging.

comment by Lumifer · 2013-07-25T01:47:52.429Z · LW(p) · GW(p)

"Give man a fish..." ?

I think the big stumbling block is the desire and capability (in terms of allocating attention and willpower) to optimize one's habits and memes, not the skills to do so.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-25T01:50:46.203Z · LW(p) · GW(p)

Learning how to allocate attention and willpower is a skill.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T01:59:35.169Z · LW(p) · GW(p)

Yes, but (a) if that skill is below a certain threshold you probably won't be able to improve it; (b) empirically it's a very hard skill to acquire/practice (see all the akrasia issues with the highly intelligent LW crowd).

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-25T02:27:33.032Z · LW(p) · GW(p)

Yep. Neither of those things is evidence against anything I've said.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-25T01:54:46.552Z · LW(p) · GW(p)

Given that uncertainty, we should think seriously about the virtues and dangers of a world where LW-level rationality is as common as, today, literacy or religious disengagement is.

Yes, this is exactly what I'm trying to think about. You can't know long-term historical trends in advance...but you have to make informed-ish decisions about what to try doing, and how to try doing it, anyway.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-25T01:52:23.207Z · LW(p) · GW(p)

Making rationality cool = an excellent starting point. I still disagree on the rationality-intelligence thing, though; I think you could teach skills that could still meaningfully be called epistemic/instrumental rationality to people with IQ 100 and below. Not everyone, any more than it's possible to persuade everyone from childhood that it's a good idea to spend money sensibly. (Gaah, this is a pet peeve for me). But enough to make the world more awesome.

I'm going to register that disagreement as a bet, and if in 10 years LW is still around and enough has happened that we know who's right, I will find this comment and collect/lose a Bayes point.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-25T02:34:09.141Z · LW(p) · GW(p)

Let's make a more specific bet: I anticipate that any attempts by CFAR in the next 10 years to broaden the demographic that attends its workshops to include people with IQ within a standard deviation of mean (say in the United States) will fail by their standards. Agree or disagree?

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-25T03:10:52.461Z · LW(p) · GW(p)

Agree. But "workshops" includes any future instructor-led activities they might do, including shorter formats i.e. 3-hour or 1-day, larger groups, etc.

comment by johnswentworth · 2013-07-25T01:17:13.258Z · LW(p) · GW(p)

Make rationality cool? Don't worry, I got this.

Puts on sunglasses

comment by satt · 2013-07-25T02:35:51.464Z · LW(p) · GW(p)

I am highly skeptical of this happening with human psychology kept constant, basically because I think rationality is de facto impossible for humans who are not at least ~2 standard deviations smarter than the mean.

I don't think I agree but I may be interpreting "rationality" differently to you.

Treating "rationality" as a qualitative trait, so that people are simply either rational or irrational, I'd say no one is rational, regardless of IQ; no one meets the impossibly stringent standard of making their every inference and/or decision optimally.

Treating "rationality" as a quantitative trait, so that some people are simply more rational than others, I expect IQ helps cultivate rationality everywhere along the IQ scale (except maybe the extremes). I wouldn't expect a threshold effect around an IQ of 130, but a gradual increase in feasibility-of-being-rational as IQ goes up.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-25T02:52:06.386Z · LW(p) · GW(p)

Treating "rationality" as a qualitative trait, so that people are simply either rational or irrational,

That is not what "qualitative" means. The word you want is "binary."

To be more specific, what I am highly skeptical of is people with IQ within a standard deviation or two of the mean being capable of updating their beliefs in a way noticeably saner than baseline or acting noticeably more strategic than baseline. "Noticeable" means, for example, that if you hired a group of such people for similar jobs and looked at their performance reviews after a year you'd be able to guess, with a reasonable level of accuracy, which ones did or did not have rationality training.

Replies from: satt
comment by satt · 2013-07-26T03:12:21.514Z · LW(p) · GW(p)

That is not what "qualitative" means. The word you want is "binary."

I'm fairly sure I used "qualitative" with a standard meaning. Namely, as an adjective indicating "descriptions or distinctions based on some quality rather than on some quantity", a quality being a discrete feature that distinguishes one thing from another by its presence or absence (as opposed to its degree or extent). Granted, it would've been better to use the word "binary"; substitute that word and I think my point stands.

To be more specific, what I am highly skeptical of is people with IQ within a standard deviation or two of the mean being capable of [...]

Thanks for elaborating. That (and this subthread) clarify where you're coming from. I think we agree that someone one or two SDs below the mean would be hard to mould into a noticeably saner or more strategic person. The lingering bit of disagreement is for people that far above the mean, with IQs of 120-125, say.

"Noticeable" means, for example, that if you hired a group of such people for similar jobs and looked at their performance reviews after a year you'd be able to guess, with a reasonable level of accuracy, which ones did or did not have rationality training.

While I wouldn't expect to see such a stark effect of rationality training for people with IQs of 120-125, I doubt I'd see it for people with even higher IQs, either. If one randomly assigns half of a sample of workers to undergo intervention X, and X raises job performance by (e.g.) a standard deviation, job performance is still a pretty imperfect predictor of which workers experienced X. (And that's assuming job performance can be observed without noise!) So I predict rationality training wouldn't have an effect that's "noticeable" in the sense you operationalize it here, even if it successfully boosted job performance among people with IQs of 120-125.
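
As a quick illustrative sketch of that last point (made-up numbers; the +1 SD effect size and the cutoff are assumptions, not anything measured): even if a hypothetical training shifted job performance by a full standard deviation, guessing who received it from performance alone would still misclassify roughly a third of workers.

```python
# Sketch: try to classify trained vs. untrained workers from job performance alone,
# where the hypothetical training shifts performance by +1 SD. Accuracy tops out ~69%.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
trained = rng.normal(loc=1.0, scale=1.0, size=n)    # hypothetically received training
untrained = rng.normal(loc=0.0, scale=1.0, size=n)  # did not

cutoff = 0.5  # optimal threshold for equal-variance normals with equal group sizes
accuracy = 0.5 * (np.mean(trained > cutoff) + np.mean(untrained <= cutoff))
print(f"accuracy of guessing training status from performance: {accuracy:.2f}")  # ~0.69
```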

comment by Vaniver · 2013-07-24T23:07:35.029Z · LW(p) · GW(p)

The jargon thing.

I'm not sure this is avoidable, because precise concepts need precise terms. One of my favorite passages from Three Worlds Collide is:

But the Lady 3rd was shaking her head. "You confuse a high conditional likelihood from your hypothesis to the evidence with a high posterior probability of the hypothesis given the evidence," she said, as if that were all one short phrase in her own language.

That is the sort of concept which should be one short phrase in a language used by people who evaluate hypotheses by Bayesian thinking. Inaccessibility of jargon is oftentimes a sign of real inferential distance–someone needs to know what those two concepts are mathematically for that sentence-long explanation of a single phrase to make any sense, and explaining what those concepts are mathematically is a lecture or two by itself.
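
As a toy numeric sketch of that distinction (made-up numbers, not from the story): the likelihood P(evidence | hypothesis) can be very high while the posterior P(hypothesis | evidence) stays low, because the hypothesis starts out improbable.

```python
# Toy illustration of "high conditional likelihood != high posterior" via Bayes' theorem.
p_h = 0.001                # prior: the hypothesis is improbable to begin with
p_e_given_h = 0.95         # likelihood: the evidence is very probable if the hypothesis is true
p_e_given_not_h = 0.05     # the evidence is also somewhat probable otherwise

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability of the evidence
p_h_given_e = p_e_given_h * p_h / p_e                  # posterior, by Bayes' theorem

print(p_e_given_h)             # 0.95  -- high likelihood of the evidence given the hypothesis
print(round(p_h_given_e, 3))   # 0.019 -- still a low posterior for the hypothesis
```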

(That said, I agree that in areas where a professional community has a technical term for a concept and LW has a different technical term for that concept, replacing LW's term with the professional community's term is probably a good move.)

But intelligence and rationality are, in theory, orthogonal, or at least not the same thing.

It seems to me that while intelligence is not sufficient for rationality, it might be necessary for rationality. (As rationality testing becomes more common, we'll be able to investigate that empirical claim.) I often describe rationality as "living deliberately," and that seems like the sort of thing that appeals much more to people with more intellectual horsepower because it's much easier for them to be deliberate.

Replies from: Swimmer963, Kaj_Sotala, Peterdjones
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-25T01:49:29.552Z · LW(p) · GW(p)

I agree with you on the jargon thing; it's so much easier to have a conversation about rationality-cluster topics with LW people because of it. (It's also fun and ingroupy). But I do think it's a problem overall, and partly avoidable.

comment by Kaj_Sotala · 2013-07-25T09:09:30.249Z · LW(p) · GW(p)

We really should have a short phrase for that. Suggestions? "The evidence would be likely given the hypothesis, but the hypothesis isn't as likely given the evidence" would at least be a bit shorter.

Replies from: Vaniver
comment by Vaniver · 2013-07-25T10:26:30.862Z · LW(p) · GW(p)

I would probably express it as something like "you're confusing a high likelihood with a high posterior," which is less precise but I suspect would be understood by a Bayesian.

comment by Peterdjones · 2013-07-25T03:41:52.287Z · LW(p) · GW(p)

There are already precise terms for most of the concepts LW discusses. It's that LW uses its own jargon.

Replies from: ArisKatsaris, Viliam_Bur, wedrifid, Vaniver
comment by ArisKatsaris · 2013-07-25T08:20:02.873Z · LW(p) · GW(p)

There are already precise terms for most of the concepts LW discusses.

State three examples?

comment by Viliam_Bur · 2013-07-25T08:18:16.032Z · LW(p) · GW(p)

I guess that for some LW jargon there already are precise terms, but for other LW jargon there are not. Or sometimes there is a term that means something similar, but can also be misleading. Still, it could be good to reduce the unnecessary jargon.

How to do it? Perhaps by making this a separate topic -- find the equivalents to LW jargon, discuss whether they really mean the same thing, and if yes, propose using the traditional expression.

What I am saying here is that (1) merely saying "there are precise terms" is not helpful without specific examples, and (2) each term should be discussed, because it may seem like the same thing to one person, but another person may find a difference.

comment by wedrifid · 2013-07-25T12:44:14.467Z · LW(p) · GW(p)

There are already precise terms for most of the concepts LW discusses. It's that LW uses its own jargon.

I don't believe you; for the most part, when there is already a precise term we just use that term. For most LW jargon it is far more likely that you are confused about the concepts and propose using a wrong term than that there are already precise terms that have the same meaning.

comment by Vaniver · 2013-07-25T09:56:28.585Z · LW(p) · GW(p)

From the grandparent:

That said, I agree that in areas where a professional community has a technical term for a concept and LW has a different technical term for that concept, replacing LW's term with the professional community's term is probably a good move.

comment by Grant · 2013-07-24T23:45:31.693Z · LW(p) · GW(p)

INTP male programmer here. I've never posted an article and rarely comment.

One thing which keeps me from doing so is actually HPMoR, and EY's posts and sequences. They're all really long, and seem to be considered required reading. I know it's EY's style; he seems to prefer narratives. Unfortunately I don't have a lot of time to read all that, and much prefer a terser Hansonian style.

A shorter "getting started" guide would help me. Would it help others?

Replies from: Benito, RobbBB, Adele_L, army1987
comment by Ben Pace (Benito) · 2013-07-25T07:20:27.406Z · LW(p) · GW(p)

I've been giving the following post list to friends, to get them into LW:

1) The Twelve Virtues

2) Cognitive Biases Potentially Affecting Judgements of Global Risk

3) The Simple Truth

4) The Useful Idea of Truth

5) What do we mean by rationality?

6) What is evidence?

7) Rationality: Appreciating Cognitive Algorithms

8) Skill: The Map is Not the Territory

9) Firewalling the Optimal from the Rational

10) The Lens that Sees its Flaws

11) The Martial Art of Rationality

12) No One Can Exempt You From Rationality's Laws

I'll occasionally adapt it, e.g. swapping the first two posts around for someone who has an academic background and will be more interested in an academic paper to start with.

Anyone looking for my current, extended version, can find it here.

comment by Rob Bensinger (RobbBB) · 2013-07-25T00:14:11.535Z · LW(p) · GW(p)

I've been thinking about this problem lately, and I agree it's a problem. I have some tentative ideas for starting to address it, which I'll post to Discussion next week. I'd like more data on where the stumbling blocks are, though.

  1. Are there LW posts (by Eliezer or whoever) that you have found helpful, readable, concise, etc.? If so, what are some of the better examples? Would you say, for example, that Lukeprog and Hanson's styles work for you about equally well?

  2. What are some examples of specific posts (or series of posts) you haven't gotten through? How much was a result of length, how much a result of content (e.g., too difficult or boring or mathy), and how much a result of style (e.g., too narrative or unstructured or jargony)?

  3. What are specific ideas, perspectives, approaches, or terms you feel (or have been told) you're currently missing out on? The more examples of this the better.

ETA: I'd be interested in others' responses to this too.

Replies from: Grant, palladias
comment by Grant · 2013-08-06T16:49:50.677Z · LW(p) · GW(p)

From the articles linked from Welcome to Less Wrong:

1) http://lesswrong.com/lw/jx/we_change_our_minds_less_often_than_we_think/

The title is descriptive and the text is short and to the point. Empirical support is present and clearly stated. Of course it could be shortened quite a bit more without losing any information, but I don't find it excessively verbose.

2) http://lesswrong.com/lw/qk/that_alien_message

It's a long post, not trivial to follow, and when reading it's not clear how the effort will pay off. Perhaps this is evidence of a short attention span, but I've generally found that most concepts can be expressed succinctly. It might also be a habit of my profession that I try to make writings as terse and general as possible.

I suspect status and article length are highly correlated (e.g., people read autobiographies of famous people), and so longer writings might be ways to signal status.

I can produce more examples, but the above two are archetypal for me.

3) Well, I don't know what I don't know ;) But to list a few things:

  • Pros and cons of frequentist vs. Bayesian approaches. Everything I read here seems pro-Bayesian, but other (statistics) sites I look at promote a mix of the two approaches.
  • Why so little discussion of mechanisms which improve the rationality of group action and decision-making? Is that topic too close to the mind-killer, or have I missed those articles?
  • I find appeals to rationality during strictly normative argument irrational, because people don't seem to adopt ethics on the basis of rationality or consistency. Thus I'm confused by the frequency of ethical discussions here. Am I missing something about ethics and rationality? Or just wrong? Something on a general rationalist approach to ethics would be helpful to me.
comment by palladias · 2013-07-28T01:41:08.683Z · LW(p) · GW(p)

Agreed. I'd be quite interested in which posts non-LW friends found useful (and whether they passed them on to anyone else). My mom ended up using Twelve Virtues as a discussion reading on the first day of her el ed class (elementary education).

comment by Adele_L · 2013-07-25T03:37:13.094Z · LW(p) · GW(p)

I don't think HPMoR is required reading to learn rationality from LW and related places, and it's one of the few things making rationality general-interest at this point.

I do agree that a short "getting started" guide would be helpful, though.

comment by A1987dM (army1987) · 2013-07-26T19:15:18.118Z · LW(p) · GW(p)

Only a minority of respondents to the 2012 survey had read “about 75%” or “nearly all” of the sequences. So long as you've read the links in the welcome thread and you're prepared to be corrected you should be fine.

comment by cousin_it · 2013-07-24T23:53:37.199Z · LW(p) · GW(p)

How does a doubt about the usefulness of rationality coexist with a desire to spread rationality? I see that many people can reconcile these two feelings just fine, but my mind just doesn't seem to work that way...

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-25T01:45:15.704Z · LW(p) · GW(p)

Well, there aren't many things that I don't doubt a little bit. I don't think this a bad thing. However, in order to get anything done in life, instead of sitting in my room thinking about how much I don't know, I have to act on a lot of things that I'm a bit doubtful about.

Replies from: cousin_it
comment by cousin_it · 2013-07-25T07:30:31.603Z · LW(p) · GW(p)

I thought you doubted it more than a little bit, because you linked to Yvain's post that says there's not much evidence. If "a little bit" means, say, 10%, then can you describe the arguments that made you 90% confident?

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-25T10:29:23.871Z · LW(p) · GW(p)

  1. Yvain said that clarity of mind was one benefit he'd had. I think clarity of mind is awesome and rare, and makes it less likely that people will do stupid things for bad reasons.

  2. I've met Yvain and I think he's fairly awesome. Likewise, the other LW people I've met in real life seem disproportionately awesome–they have clarity of mind, yes, but it seems to lead into other things, like doing things differently because you're actually able to recognize the reason why you were doing non-useful things in the first place. Correlation not causation, of course, and I didn't know these people 5 years ago, and even if I had, people progress in awesomeness anyway. But still. Correlation = data in a direction, and it's not the direction of rationality being useless.

  3. In his post Yvain distinguishes between regular rationality, which he thinks a lot of people have, and "x-rationality" that you get from long study of the Sequences' concepts. I think a lot fewer people have even regular rationality, that it's a continuum not a divide, and that strategically placed and worded LW concepts could push almost anyone further towards the 'rationality' side.

  4. I've changed a lot because of my exposure to the rationality community, and in ways that I don't think I could have attained otherwise. A lot of this is due to clarity of mind–in particular my allowing myself to think thoughts that are embarrassing or otherwise painful. Some of it's due to specific ideas, like "notice that you're confused" or "taboo word X". Some of it's due to just hanging out with a social group who think differently than my parents. See this post from a year and a half ago, and this more recent post.

Replies from: cousin_it
comment by cousin_it · 2013-07-25T12:40:26.256Z · LW(p) · GW(p)

If such evidence is enough, then rationality would probably recommend you to spread religion instead of rationality :-) Religious people also often talk about how religion gave them wonderful feelings and improved their lives, and there are actual studies showing religious people are happier and healthier.

I feel that you haven't mentioned an important factor, which is that LW-rationality sounds very attractive in some sense. If that's correct, then you're not alone in this; it took me years to learn to honestly subtract that factor.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-25T15:59:00.454Z · LW(p) · GW(p)

I feel that you haven't mentioned an important factor, which is that LW-rationality sounds very attractive in some sense.

Noted. However, before I subtract that factor, I would like to learn whether LW-style rationality seems so attractive for a good reason: long-term, averaged over many people, does it make a difference? It has for a few people. I don't think you can conclusively say, yet, whether it's worthwhile teaching to everyone. In 5-10 years, when CFAR's longer-term data starts coming in, then I'll know. In the meantime, trying to spread it to other people provides data, too.

If it turns out it doesn't help most people, I won't keep trying to show it to other people, although I'll probably try to stick with the community. I would still want to keep looking for something else to try to teach the other people who keep doing stupid things. Call me an idealist...

Religion, AFAICT, does not teach clarity of mind. In many cases it teaches people to follow their intuitions and gut feelings because "God is looking out for them." This sometimes turns out well for the individual, and sometimes badly (which you'd expect; intuitions are valuable data but can be wrong if the heuristics are applied out of context). Overall I think it's bad for society, and better if people notice that their hunches are in fact hunches and try to fact-check them. (This isn't always possible; sometimes you have no outside-view data and you have to go with your gut feeling. But rationality would teach that, too.)

And yeah, I'm picking rationality as a thing to try to spread without having looked at all the possible alternatives. I think that's okay. There are other people in society who are trying to spread other things for similar reasons. If there are 10 people like me, all with different agendas but for the same reasons, and we're all paying attention to the data of the next 5 years, and it turns out that one of our methods is actually effective, I would consider that a success. I just don't know which one of the 10 people I am yet. (If I meet one of the others, and they convince me their agenda has a higher chance of success, I would think about it and then probably agree to help them.)

Replies from: cousin_it
comment by cousin_it · 2013-07-25T17:41:36.643Z · LW(p) · GW(p)

Is it just me, or does your comment sound like a retreat from "we need to spread rationality because it's a good idea" to "we need to spread rationality to figure out if it's a good idea"?

If yes, then note that LW has existed for years and has thousands of users. Yvain was among the first contributors to LW and his early posts were already excellent. Many other good contributors, like Wei Dai (invented UDT, independently invented cryptocurrency) or Paul Christiano (IMO participant), were also good before they joined... As Yvain's post said, it seems hard to find people who benefited a lot from LW-rationality.

I'm not sure we need more information about the usefulness of LW-rationality before we can draw a conclusion. We already have a lot of evidence pointing one way: look at all the LWers who didn't benefit. Besides, what makes you think that a study with more participants and longer duration would give different results? If anything, it's probably going to be closer to the mean, because LW folks are self-selected, not randomly selected from the population.

Replies from: Wei_Dai, Swimmer963, palladias
comment by Wei Dai (Wei_Dai) · 2013-08-03T17:12:32.931Z · LW(p) · GW(p)

As Yvain's post said, it seems hard to find people who benefited a lot from LW-rationality.

I think LW has at least made me better at handling disagreements with others. For example I'm rather embarrassed when I look back on my early discussions with Nick Szabo on cryptocurrency and other topics, and I think a disagreement I had a few years ago with my business partner was also helped greatly by both of us having followed LW (or maybe it was still OB at that point).

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-25T20:34:15.559Z · LW(p) · GW(p)

I would say that rationality is worth trying to spread because it may be a good idea, and because it's something I know about and can think and plan about. Do you know of another community that has a similar level of development to LW (i.e. fairly cohesive but still quite obscure) that I should also investigate? (AFAIK, CFAR is looking for such organizations for new ideas anyway.)

Also, I'm going to update from your comment in the direction of rationality outreach turning out not to be the best use of my time.

Replies from: cousin_it
comment by cousin_it · 2013-07-28T09:31:26.366Z · LW(p) · GW(p)

For a while I satisfied my idea-spreading urges by teaching math to talented kids on a volunteer basis. If you're very good at something (e.g. swimming), you could try teaching that, it's a lot of fun.

Or you could spend some effort on figuring out how to measure rationality and check if someone is making progress. That's much harder though, once you get past the obvious wrong answers like "give them a multiple choice test about rationality". Eliezer and Anna have written a lot about this problem.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-29T15:51:26.382Z · LW(p) · GW(p)

I do teach swimming; I did for many years as a job, and now I do it for fun (and for free) for the kids of my friends (and several of the CFAR staff when I was in San Francisco). It's something I'm very good at (I may be better at teaching swimming to others than at swimming myself), and it fulfills an urge, but not the idea-spreading one.

If CFAR is looking for help trying to make a rationality test, I would be happy to help, too...

comment by palladias · 2013-07-28T05:11:18.198Z · LW(p) · GW(p)

Well, if the criterion for success is inventing cryptocurrency, I don't predict that teaching rationality will have that effect on people. It's more a matter of small usefulness that compounds over time. So understanding Bayes makes it easier to assemble what you know coherently, learning to install habits helps you remember to use the skill when you're most likely to need it... etc. That habit of reasoning might save you money, or social capital, or time. And, over the course of your life, it gives you more time and scope to act.

That's pretty much what it does for me, so far, and it's been a worthwhile level up. It did make a difference for me to learn and practice in a community (built in spaced repetition, yay!) rather than just reading. The reading helped, but once I have a tool, it takes practice to remember to use it, instead of my old default.

comment by Lumifer · 2013-07-25T00:49:19.871Z · LW(p) · GW(p)

You want to find out how to spread rationality?

Read this

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-25T05:00:08.893Z · LW(p) · GW(p)

This article points out a pretty important obstruction to the general spread of rationality:

So what were the key differences [between the slow spread of antiseptics and the fast spread of anesthesia]? First, one combatted a visible and immediate problem (pain); the other combatted an invisible problem (germs) whose effects wouldn’t be manifest until well after the operation.

Rationality training does not combat a visible and immediate problem because people do not have a sense that more is possible along this dimension.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T05:02:13.147Z · LW(p) · GW(p)

The article also points out how rationality spreads by social contact and through becoming emotionally trusted.

It's really a very good article on how rationality spreads (or does not) in the real world.

comment by katydee · 2013-07-25T00:15:18.279Z · LW(p) · GW(p)

I'm going to post multiple comments here because I have several separate thoughts about these issues and I want them to be voted on separately so I can get a better idea of people's thoughts on this matter. My comments on this post will be posted as comments to this comment-- that way, people can also vote on the concept of posting multiple thoughts as separate comments.

Replies from: katydee, katydee, katydee, RobbBB
comment by katydee · 2013-07-25T00:20:57.259Z · LW(p) · GW(p)

Another problem is that there isn't really any standard "rationality test" or other ability to actually determine how rational someone is, though some limited steps have been taken in that direction. Stanovich is working on one, but it can't be expected for 3+ years at this stage.

This obviously limits the extent to which we can determine whether rationalists "actually win" (my impression, incidentally, is that they do but that there are a lot of skills that help more than current "rationality training" for the average person), what forms of rationality practice yield the most benefits, and so on.

comment by katydee · 2013-07-25T00:23:53.451Z · LW(p) · GW(p)

When it comes to raising the sanity waterline, I can't help but think that the intelligence issue is likely to be a paper tiger. In fact I think LessWrong as a whole cares far too much about unusually intelligent people and that this is one of the biggest flaws of the community as a general-interest project. However, I also recognize that multiple purposes are at work here and such goal conflict may be inevitable.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-25T01:10:28.706Z · LW(p) · GW(p)

Can you elaborate on this? I think intelligence is a really important component of rationality in practice (although by "unusually intelligent" you might mean a higher number of standard deviations above the mean than I do).

Replies from: katydee
comment by katydee · 2013-07-26T19:22:39.593Z · LW(p) · GW(p)

Sure. Most rationality "in the wild" appears to be tacit rationality and building good habits, and I don't think that intelligence is particularly important for that. I would definitely predict, for instance, that rationality training could be accessible to people with IQs 0-1 standard deviations above the mean.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-28T02:31:11.847Z · LW(p) · GW(p)

I agree that this kind of rationality exists, but I think it tends to be domain-specific and suffer from transfer issues, and I'm also skeptical that it's easily teachable.

Replies from: katydee
comment by katydee · 2013-07-28T06:27:56.114Z · LW(p) · GW(p)

I agree on all points, but I don't see strong evidence for an easily teachable form of general rationality either, regardless of how intelligent the audience may be.

One other issue is that most people who have currently worked on developing rationality are themselves very intelligent. This sounds like it wouldn't particularly be a problem-- but as Eliezer wrote in My Way:

"If there are parts of my rationality that are visibly male, then there are probably other parts—perhaps harder to identify—that are tightly bound to growing up with Orthodox Jewish parents, or (cough) certain other unusual features of my life."

Intelligence definitely strikes me as one of those unusual features.

Perhaps it could be said that current rationality practices, designed by the highly intelligent and largely practiced by the same, require high intelligence, but it nevertheless seems far from clear that all rationality practices require high intelligence.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-28T23:57:50.298Z · LW(p) · GW(p)

Fair point.

comment by katydee · 2013-07-25T00:17:11.740Z · LW(p) · GW(p)

First off, one potential problem is the term "rationality" itself. MIRI found that the term "singularity" was too corrupted by other associations to be useful, so they changed their name to avoid being associated with this. I believe that "rational" may be similarly corrupted ("logical" certainly is) and finding another term altogether might be a good tactic.

Replies from: Nornagest, Qiaochu_Yuan
comment by Nornagest · 2013-07-25T00:30:36.391Z · LW(p) · GW(p)

I think "rational" is probably fine. "Rationalist" may not be, but that's more thanks to having the connotations of an *ism than because of its stem.

comment by Qiaochu_Yuan · 2013-07-25T01:10:53.903Z · LW(p) · GW(p)

Agreed. What about "effective"?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-25T08:44:46.113Z · LW(p) · GW(p)

That does not include the "map corresponding to territory" idea, which is very important for us. Also, it has its own negative connotations. Like "rational" has Spock, "effective" has all kinds of effective villains. At least Spock seems harmless.

Replies from: RobbBB, army1987
comment by Rob Bensinger (RobbBB) · 2013-07-25T09:02:44.723Z · LW(p) · GW(p)

I think having two different words for epistemic and instrumental rationality would be a feature, not a bug. There's already plenty of overlap between the two (knowing truths is useful, and can easily be subsumed in a discussion of instrumental rationality), but since they do come into conflict sometimes, it would be very valuable to have a concise way to specify which kind of rationality we're talking about. This would also make our replacing 'rationality' with some other term have a function beyond euphemism treadmilling, which makes it easier to justify to the anti-PR crowd.

But I agree "effective" kind of falls flat. Is there an adjective/noun set derivable from "wins" that doesn't make us sound like Charlie Sheen? (It can be a protologism.)

Replies from: nshepperd
comment by nshepperd · 2013-07-29T05:32:22.767Z · LW(p) · GW(p)

Something derived from "success"? If you don't mind sounding like a self-help guru. "Achievement" if you don't mind sounding like a primary school teacher. "Optimisation" is pretty accurate but I guess only really works for AI programmers or mathematicians who already have a technical understanding of it.

comment by A1987dM (army1987) · 2013-07-28T18:51:27.270Z · LW(p) · GW(p)

Also, it has its own negative connotations. Like "rational" has Spock, "effective" has all kinds of effective villains.

Huh. I don't get that connotation at all. OTOH, this is possibly due to me not being a native speaker or consuming unusually little mainstream mass media.

comment by Rob Bensinger (RobbBB) · 2013-07-25T00:19:24.369Z · LW(p) · GW(p)

I think the idea of posting multiple comments is good, as long as none of the comments is even a little bit a prerequisite for the others. I personally don't think it's worth voting on the idea. (Just try it out for a while and see whether you like it and whether you get any complaints.) I suggest posting the separate comments at the base level so they'll be in their proper karmic order as independent posts; otherwise you lose most of the value of this approach, and you'll be testing a different idea than the one you intend to ultimately implement.

comment by JonahS (JonahSinick) · 2013-07-27T02:00:26.966Z · LW(p) · GW(p)

Trying to learn rationality skills in my 20s, when a bunch of thought patterns are already overlearned, is even more frustrating.

I may be reading a non-existent connotation into this line, but to me it pattern matches with the belief that the human mind is a blank slate, as though you would have been rational if you hadn't been corrupted by society.

Humans are, at bottom, animals, structured around uncritical stimulus-response behavior. It's somewhat mysterious that humans are capable of transcending this to achieve any sort of global rationality at all. I think that learning to do so is inevitably going to be a lot of hard work, regardless of the stage of life at which one attempts it.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-27T12:41:48.770Z · LW(p) · GW(p)

No no no. Not at all. I was obviously less rational as a baby than I am now. But childhood neuroplasticity is a thing; it's easier to learn languages before age 10, and preferably before age 5. And kids have time. As a kid, when I did competitive swimming, I used to be in the pool more than 10 hours a week. Now, as an adult, I do taekwondo, and although there are 10 hours of class a week available, I only make it to 2-3.

I did learn some maladaptive thought patterns: e.g. my social anxiety spiral around "you just don't have enough natural talent to do X", and the kicker, "you aren't good enough." I know this is a pretty meaningless phrase, but it has emotional power because it's been around so long.

Replies from: JonahSinick, BT_Uytya, MondSemmel
comment by JonahS (JonahSinick) · 2013-07-27T17:01:24.082Z · LW(p) · GW(p)

Ok, thanks for clarifying. I understand.

I'm sympathetic to the points about neuroplasticity and time.

I teach math to exceptionally talented children. Something very exciting about it is that basically no such children have had the chance to be taught by a mathematician who's a dedicated teacher, so the experiment hadn't been performed. Some of these children are eager to and capable of learning advanced undergraduate level math at the age of 10 or so, and if they have the chance to, as opposed to withering away in school, the results could be amazing.

I also had a recent shift in perspective such that I now believe that environmental factors, when defined very broadly, dominate genetic factors by far in determining behavior. I'm 2-4 standard deviations from the mean on a large number of ostensibly independent dimensions. Upon reflection, I realize that these may all be traceable to only ~3-4 ways in which I was unusual genetically, which then interacted and compounded over the course of my life, resulting in me being very different in so many ways. My home, school, etc. may not have been unusual, but I was interacting with the world through a different lens than other people were, with profound consequences.

So yes, I can see how learning rationality at an early age could make a big difference. For my own part, I don't have the sense of having had to unlearn maladaptive thought patterns (even though I've had maladaptive thought patterns) – it's hard to place a finger on why. I do wish that I had learned these things at a younger age. If I had learned many weak arguments style reasoning in my teens, my emotional well-being would have been significantly higher for ~10 additional years.

comment by BT_Uytya · 2013-07-27T13:21:49.341Z · LW(p) · GW(p)

On the other hand, you probably have more raw intelligence now.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-28T19:35:21.823Z · LW(p) · GW(p)

Yes. But I probably had close to my current raw intelligence at age 15-16, and I was definitely reading hard books at age 8-9.

comment by MondSemmel · 2013-07-29T08:40:48.826Z · LW(p) · GW(p)

Kids definitely have more time, but otherwise they don't necessarily learn languages more easily. Or at least, not second languages.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-29T10:39:15.022Z · LW(p) · GW(p)

Wow. This article managed to surprise me. Not the fact that kids aren't any better than adults at learning things deliberately, classroom-style–I suppose I thought they would be worse at this, but better at unstructured learning from stuff happening around them. (I suppose I thought this because the way that young children learn to speak a first language isn't related to, or helped by, classroom instruction.) But the fact that kids who started French Immersion in 7th grade are just as good as those who started in kindergarten surprised me a lot. This is a program that deliberately tries to teach less in a structured classroom way, and more the way you would learn a first language. (It doesn't do it incredibly well, though–I went through French immersion, could read and write competently and speak stiltedly by the end of eighth grade, backslid a bit during high school due to limited class hours in French, and only became fluently bilingual in university when "immersed" among actual Quebecois Francophones.) I had massively more trouble trying to learn a third language, but this is probably mostly because a) it was Chinese (more linguistically distant), and b) the time thing–I thought an hour a day was a ridiculous and unrealistic amount of time to spend, and what I actually spent was more like fifteen minutes.

Thank you for the new information!

comment by Lumifer · 2013-07-25T00:23:54.594Z · LW(p) · GW(p)

Some notes/reactions in random order.

First, how do you understand rationality? Can you explain it in a couple of sentences without using links and/or references to lengthier texts?

Second, there are generally reasons for why things happen the way they happen. I don't want to make an absolute out of that, but if a person's behavior is seemingly irrational to you, there's still some reason, maybe understood, maybe not understood, maybe misunderstood why the person behaves that way. Rationality will not necessarily fix that reason.

Third, consider domains like finance or trading. There is immediate, obvious feedback on how successful your decisions/actions were. Moreover, people who are consistently unsuccessful are washed out (because they don't have any more money to trade/invest). If you define rationality as the way to win, finance and trading should be populated by high-performance rationalists. Does it look like that in real life?

Fourth, on LW at least there is much conflation of rationality and utilitarianism. The idea is that if you're truly rational you must be a utilitarian. I don't think so. And I suspect that making rationality popular is going to be much easier without the utilitarian baggage (in particular, that "Spock thing").

intelligence and rationality are, in theory, orthogonal, or at least not the same thing.

They are not the same thing, but I don't think they're orthogonal. I would probably say that your intelligence puts a cap on how rational you can be. People won't necessarily reach their cap, but it's very hard to go beyond it. I have strong doubts about stupid people's capabilities to be rational.

Replies from: Swimmer963, RobbBB
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-25T01:22:36.133Z · LW(p) · GW(p)

My weak definition of rationality: thinking about your own knowledge and beliefs from an outside perspective and updating/changing them if they are not helpful and don't make sense (epistemic); noticing your goals, thinking a bit about how to achieve them, and then doing that on purpose to see if it works, while paying attention to whether it's working so you can try something else if it isn't; thinking about and trying to notice the actual consequences of your actions (instrumental).

Short: epistemic=believing things on purpose, instrumental=doing things on purpose for thought-out reasons.

I say weak because this isn't a superpower; you can do it without being amazingly good at it (e.g. if you have an IQ of 90). But you can exercise without being amazingly good at any sport, and you still benefit from it. I think that also holds for basic rationality.

if a person's behavior is seemingly irrational to you, there's still some reason, maybe understood, maybe not understood, maybe misunderstood why the person behaves that way.

In a general sense, yeah. People operate inside causality. But people do things for reasons that they haven't noticed, haven't thought about, and might not agree with if they did think about them. For example, Bob might find himself well on the path to alcoholism without realizing that his original, harmless-and-normal-seeming craving for a drink in the evening happened because it helped with his insomnia; a problem that could more healthily be addressed by booking a doctor's appointment. (I pick this example because I recently caught myself in the early stages of this process.) But from the inside, it doesn't feel like the brain is fallible, and so even people who've come across research to the contrary feel like their introspection is always correct–let alone people who've never seen those ideas. I don't think the IQ ceiling on understanding and benefiting from "I might be wrong about why I do this" is very high.

Replies from: Lumifer, RobbBB
comment by Lumifer · 2013-07-25T01:37:57.493Z · LW(p) · GW(p)

thinking about your own knowledge and beliefs from an outside perspective

Interesting. I'd probably call this self-reflection. I am also wary of the "if they are not helpful and don't make sense" criterion -- it seems to depend way too much on the way a person is primed (aka strong priors). For example, if I am a strongly believing Christian, live in a Christian community, have personal experiences of sensing the godhead, etc., any attempts to explain atheism to me will be met with "not helpful and doesn't make sense". And "believing things on purpose" falls into the same trap -- the same person purposefully believes in Lord Jesus.

Epistemic rationality should depend on comparison to reality, not to what makes sense to me at the moment.

For instrumental rationality, here are some things possibly missing: cost-benefit analysis; forecasting consequences of actions; planning (in particular, long-term planning).

But I don't know that you can't find all that on the self-help shelf at B&N...

comment by Rob Bensinger (RobbBB) · 2013-07-25T01:51:37.384Z · LW(p) · GW(p)

Short: epistemic=believing things on purpose, instrumental=doing things on purpose for thought-out reasons.

It's worth noting that this is different from how CFAR and the Sequences tend to think about rationality. They would say that someone whose beliefs are relatively unreflective and unexamined but more reliably true is more epistemically rational than someone with less reliably true beliefs who has examined and evaluated those beliefs much more carefully. I believe they'd also say that someone who acts with less deliberation and has fewer explicit reasons, but reliably gets better results, is more rational than a more reflective but ineffective individual.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-25T03:13:15.559Z · LW(p) · GW(p)

Agreed. And that makes sense as a way to compare a number of individuals at a single point in time. However, if you are starting at rationality level x, and you want to progress to rationality level y over time z, I'm not sure of a better way to do it than to think deliberately about your beliefs and actions. (This may include 'copying people who appear to do better in life'; that constitutes 'thinking about your beliefs/goals'). Although there may well be better ways.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-25T03:19:53.154Z · LW(p) · GW(p)

Right. I'm making a point about the definition of 'rationality', not about the best way to become rational, which might very well be heavily reflective and intellectualizing. The distinction is important because the things we intuitively associate with 'rationality' (e.g., explicit reasoning) might empirically turn out not to always be useful, whereas (instrumental) rationality itself is, stipulatively, maximally useful. We want to insulate ourselves against regrets of rationality.

If having accurate beliefs about yourself reliably makes you lose, then those beliefs are (instrumentally) irrational to hold. If deliberating over what to do reliably makes you lose, then such deliberation is (instrumentally) irrational. If reflecting on your preferences and coming to understand your goals better reliably makes you lose, then such practices are (instrumentally) irrational.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-25T03:46:11.361Z · LW(p) · GW(p)

Agreed that it's a good distinction to make.

comment by Rob Bensinger (RobbBB) · 2013-07-25T00:33:50.526Z · LW(p) · GW(p)

Rationality decomposes into instrumental rationality ('winning', or effectiveness; reliably achieving one's goals) and epistemic rationality (accuracy; reliably forming true beliefs).

How do you understand 'utilitarianism'? What are the things about it that you think are unimportant or counterproductive for systematic rationality? (I'll hold off on asking about what things make utilitarianism unpopular or difficult to market, for the moment.)

Replies from: Lumifer, Lumifer
comment by Lumifer · 2013-07-25T00:46:39.930Z · LW(p) · GW(p)

Rationality decomposes into instrumental rationality ('winning', or effectiveness; reliably achieving one's goals) and epistemic rationality (accuracy; reliably forming true beliefs).

And this needs popularizing? You mean you'll tell people "I can teach you how the world really works and how to win" and they run away screaming "Nooooo!" ? :-D

Replies from: Nornagest, RobbBB
comment by Nornagest · 2013-07-25T00:48:15.892Z · LW(p) · GW(p)

You mean you'll tell people "I can teach you how the world really works and how to win" and they run away screaming "Nooooo!"

If I said that to some random stranger, I wouldn't expect "noooooooo", but I might expect "get in line".

comment by Rob Bensinger (RobbBB) · 2013-07-25T00:50:55.560Z · LW(p) · GW(p)

Yes, probably for most people. First, it sounds arrogant. Second, people underestimate the possibility of dramatically improved instrumental rationality. Third, a lot of people underestimate the desirability of dramatically improved epistemic rationality, and it's especially hard to recognize that there are falsehoods you think you know. (As opposed to thinking there are truths you don't know, which is easier to recognize but much more trivial.)

But that's missing the point, methinks. Even if offering to teach people those things in the abstract were the easiest sell in the world, the specific tricks that actually add up to rationality are often difficult sells, and even when they're easy sells in principle there's insufficient research on how best to make that sell, and insufficient funding into, y'know, making it.

Replies from: Jayson_Virissimo, Peterdjones, Lumifer
comment by Jayson_Virissimo · 2013-07-25T04:48:47.715Z · LW(p) · GW(p)

How do you know that people "underestimate the desirability of dramatically improved epistemic rationality"?

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2013-07-30T05:11:58.311Z · LW(p) · GW(p)

Yvain (and others) have argued that people around here make precisely the opposite mistake.

comment by Peterdjones · 2013-07-25T01:13:58.311Z · LW(p) · GW(p)

Or maybe there's a lot of utility in not coming across as geeky and selfish, so they are already being instrumentally rational.

comment by Lumifer · 2013-07-25T01:06:43.753Z · LW(p) · GW(p)

First, it sounds arrogant.

First, actually, comes credibility. You want to teach me how the world really works? Prove to me that your views are correct and not mistaken. You want to teach me how to win? Show me a million bucks in your bank account.

Even if offering to teach people those things in the abstract were the easiest sell in the world

Keep in mind that in non-LW terms you're basically trying to teach people logic and science. The idea that by teaching common people logic and science the world can be made a better place is a very old idea, probably dating back to the Enlightenment. It wasn't an overwhelming success. It is probably an interesting and relevant question why not.

Replies from: CronoDAS, Swimmer963, Viliam_Bur, RobbBB, Jayson_Virissimo
comment by CronoDAS · 2013-07-25T05:41:38.675Z · LW(p) · GW(p)

The idea that by teaching common people logic and science the world can be made a better place is a very old idea, probably dating back to the Enlightenment. It wasn't an overwhelming success. It is probably an interesting and relevant question why not.

Considering that, between then and now, we've had an Industrial Revolution in addition to many political ones, maybe it actually was?

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-07-25T01:41:01.097Z · LW(p) · GW(p)

It is probably an interesting and relevant question why not.

I agree; this is an idea I would like to hear someone else's opinion on. My intuition is that teaching people logic and science has nothing to do with making them better people; it makes them more effective at whatever they want to do, at most. Trying to teach "being a better person" has been attempted (for thousands of years in religious organizations), but maybe not enough in the same places/times as teaching science.

Also, the study of cognitive biases and how intuitions can be wrong is much more recent than the Enlightenment. Thinking that you know science, and that all your thoughts that feel right are therefore right, is dangerous.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T01:50:43.026Z · LW(p) · GW(p)

...teaching people logic and science has nothing to do with making them better people; it makes them more effective at whatever they want to do

Correct, but that's what spreading rationality to the masses aims to accomplish, no?

I don't think teaching people rationality implies giving them a new and improved value system.

comment by Viliam_Bur · 2013-07-25T08:39:38.126Z · LW(p) · GW(p)

You want to teach me how to win? Show me a million bucks in your bank account.

I guess CFAR should let Peter Thiel teach in their workshops. Or, more seriously, use his name (assuming he agrees with this) when promoting their workshops.

When I think about this more, there is a deeper objection. Something like this: "So, you believe you are super rational and super winning. And you are talking to me, although I don't believe you, so you are wasting your time. Is that really the best use of your time? Why don't you do something... I don't know exactly what... but something a thousand times more useful now, instead of talking to me? Because this kind of optimization is precisely the one you claim to be good at; obviously you're not."

And this is an objection that makes sense. I mean, it's like if someone is trying to convince me that if I invest my money in his plan, my money will double... it's not that I don't believe in the possibility of doubling the money; it's more like: why doesn't this guy double his own money instead? -- Analogously, if you have superpowers that allow you to win, then why the heck are you not winning right now instead of talking to me?

EDIT: I guess we should reflect on our actions when we are trying to convince someone else about usefulness of rationality. I mean, if someone resists the idea of LW-style rationality, is it rational (is it winning on average) to spend my time trying to convince them, or should I just say "next" and move to another person? I mean, there are seven billion people on this planet, half million in the city where I live, so if one person does not like this idea, it's not like my efforts are doomed... but I may doom them by wasting all my energy and time on trying to convince this specific person. Some people just aren't interested, and that's it.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T14:39:05.458Z · LW(p) · GW(p)

Yep, that's a valid and serious objection, especially in the utilitarian context.

A couple of ways to try to deal with it: (a) point out the difference between instrumentality and goals; (b) point out that rationality is not binary but a spectrum -- it's not a choice between winning everything and winning nothing.

You can probably also reformulate the whole issue as "help you to deal with life's problems -- let me show you how you can go about it without too much aggravation and hassle"...

comment by Rob Bensinger (RobbBB) · 2013-07-25T01:29:11.116Z · LW(p) · GW(p)

I agree that's an interesting and important question. If we're looking for vaguely applicable academic terms for what's being taught, 'philosophy, mathematics and science' is a better fit than 'logic and science', since it's not completely obvious to me that traditional logic is very important to what we want to teach to the general public. A lot of the stuff it's being proposed we teach is still poorly understood, and a lot of the well-understood stuff was not well-understood a hundred years ago, or even 50 years ago, or even 25 years ago. So history is a weak guide here; Enlightenment reformers shared a lot of our ideals but very little of our content.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T01:44:03.971Z · LW(p) · GW(p)

'philosophy, mathematics and science' is a better fit than 'logic and science'

I don't agree. You want to teach philosophy as rationality? There are a great deal of different philosophies, which one will you teach? Or you'll teach history of philosophy? Or meta-philosophy (which very quickly becomes yet-another-philosophy-in-the-long-list-of-those-which-tried-to-be-meta)?

And I really don't see what math has to do with this at all. If anything, statistics is going to be more useful than math because statistics is basically a toolbox for dealing with uncertainty and that's the really important part.

Replies from: Peterdjones
comment by Peterdjones · 2013-07-25T03:31:05.957Z · LW(p) · GW(p)

You want to teach philosophy as rationality?

Philosophy includes epistemology, which is kind of important to epistemic rationality.

Philosophy is a toolbox as well as a set of doctrines.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T03:34:54.679Z · LW(p) · GW(p)

Philosophy includes epistemology, which is kind of important to epistemic rationality.

Various philosophies include different approaches to epistemology. Which one do you want to teach?

I agree that philosophy can be a toolbox, but so can pretty much any field of human study -- from physics to literary criticism. And here we're talking about teaching rationality, not about the virtues of a comprehensive education.

comment by Jayson_Virissimo · 2013-07-25T04:51:05.521Z · LW(p) · GW(p)

The Enlightenment? Try Ancient Greece.

Replies from: Lumifer
comment by Lumifer · 2013-07-25T04:58:16.892Z · LW(p) · GW(p)

I don't think the Greeks aimed to teach hoi polloi logic and science. They were the province of a select group of philosophers.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-07-25T07:36:57.153Z · LW(p) · GW(p)

(Pedantic upvote for not saying "the hoi polloi".)

comment by Lumifer · 2013-07-25T00:44:53.921Z · LW(p) · GW(p)

How do you understand 'utilitarianism'?

In the usual way: a system of normative morality which focuses on outcomes (as opposed to means) and posits that the moral outcome is the one that maximizes utility (usually understood as happiness providing positive utility and suffering/unhappiness providing negative utility).

comment by Armok_GoB · 2013-07-25T23:43:50.109Z · LW(p) · GW(p)

Thing you might work from to get an elevator pitch on the spock problem thing: "Rationality = Ratios = Relative. Generally involves becoming less 'logical' (arguing)."

(yes, I know this isn't actually correct, but you have to start somewhere, and I'm not good enough with words to take it further)

comment by Peterdjones · 2013-07-25T03:55:22.979Z · LW(p) · GW(p)

Suggestion: teach rationality as an open spirit of enquiry, not as a secular religion that will turn you into a clone of Richard Dawkins.

Replies from: RomeoStevens
comment by RomeoStevens · 2013-07-25T06:15:15.035Z · LW(p) · GW(p)

Instead of downvoting, maybe we should be asking what within the LW community caused Peterdjones to say that?

Replies from: None
comment by [deleted] · 2013-07-25T06:42:01.008Z · LW(p) · GW(p)

How about "in addition to"? I don't know anyone on LW who's been turned into a Dawkins clone.

Replies from: None
comment by [deleted] · 2013-07-26T13:17:44.969Z · LW(p) · GW(p)

I am not sure that being a Dawkins clone is a completely bad thing either.