When is rationality useful?

post by Richard_Ngo (ricraz) · 2019-04-24T22:40:01.316Z · LW · GW · 26 comments

In addition to my skepticism about the foundations of epistemic rationality [LW · GW], I’ve long had doubts about the effectiveness of instrumental rationality. In particular, I’m inclined to attribute the successes of highly competent people primarily to traits like intelligence, personality and work ethic, rather than specific habits of thought. But I’ve been unsure how to reconcile that with the fact that rationality techniques have proved useful to many people (including me).

Here’s one very simple (and very leaky) abstraction for doing so. We can model success as a combination of doing useful things and avoiding making mistakes. As a particular example, we can model intellectual success as a combination of coming up with good ideas and avoiding bad ideas. I claim that rationality helps us avoid mistakes and bad ideas, but doesn’t help much in generating good ideas and useful work.

Here I’m using a fairly intuitive and fuzzy notion of the seeking good/avoiding bad dichotomy. Obviously if you spend all your time thinking about bad ideas, you won’t have time to come up with good ideas. But I think the mental motion of dismissing bad ideas is quite distinct from that of generating good ones. As another example, if you procrastinate all day, that’s a mistake, and rationality can help you avoid it. If you aim to work productively for 12 hours a day, I think there’s very little rationality can do to help you manage that, compared with having a strong work ethic and a passion for the topic. More generally, a mistake is doing unusually badly at something, not merely failing to do unusually well at it.

This framework tells us when rationality is most and least useful. It’s least useful in domains where making mistakes is a more effective way to learn than reasoning things out in advance, and so there’s less advantage in avoiding them. This might be because mistakes are very cheap (as in learning how to play chess) or because you have to engage with many unpredictable complexities of the real world (as in being an entrepreneur). It’s also less useful in domains where success requires a lot of dedicated work, and so having intrinsic motivation for that work is crucial. Being a musician is one extreme of this; more relevantly, getting deep expertise in a field often also looks like this.

It’s most useful in domains where there’s very little feedback either from other people or from reality, so you can’t tell whether you’re making a mistake except by analysing your own ideas. Philosophy is one of these - my recent post details how astronomy was thrown off track for millennia by a few bad philosophical assumptions. It’s also most useful in domains where there’s high downside risk, such that you want to avoid making any mistakes. You might think that a field like AI safety research is one of the latter, but actually I think that in almost all research, the quality of your few best ideas is the crucial thing, and it doesn’t really matter how many other mistakes you make. This argument is less applicable to AI safety research to the extent that it relies on long chains of reasoning about extreme hypotheticals (i.e. to the extent that it’s philosophy) but I still think that the claim is broadly true.

Another lens through which to think about when rationality is most useful is that it’s a (partial) substitute for belonging to a community. In a knowledge-seeking community, being forced to articulate our ideas makes it clearer what their weak spots are, and allows others to criticise them. We are generally much harsher on other people’s ideas than our own, due to biases like anchoring and confirmation bias (for more on this, see The Enigma of Reason). The main benefit I’ve gained from rationality has been the ability to internally replicate that process, by getting into the habit of noticing when I slip into dangerous patterns of thought. However, that usually doesn’t help me generate novel ideas, or expand them into useful work. In a working community (such as a company), there’s external pressure to be productive, and feedback loops to help keep people motivated. Productivity techniques can substitute for those when they’re not available.

Lastly, we should be careful to break down domains into their constituent requirements where possible. For example, the effective altruism movement is about doing the most good. Part of that requires philosophy - and EA is indeed very effective at identifying important cause areas. However, I don’t think this tells us very much about its ability to actually do useful things in those cause areas, or organise itself and expand its influence. This may seem like an obvious distinction, but in cases like these I think it’s quite easy [LW · GW] to transfer confidence about the philosophical step of deciding what to do to confidence about the practical step of actually doing it.

26 comments

Comments sorted by top scores.

comment by Raemon · 2019-04-25T00:54:09.591Z · LW(p) · GW(p)

I do disagree about rationality not being useful for "positive things" (such as creativity). I think I have become more intellectually generative as I've gained rationality skill. A lot of this does have to do with avoiding mistakes in the creative/generative process, but

a) that seems close enough to "just getting better at generation" that I'm not sure it makes much sense to refer to it as "just fixing mistakes",

b) I also think rationality involves reinforcing the skills of doing things right.

See Tuning Your Cognitive Strategies for what I think of as the most distilled essence of that skill.

I also think using rationality to clarify your goals, and to figure out which subgoals are most relevant to them, is one of the most important skills. That's not (quite) being creative/generative, but seems related.

comment by riceissa · 2019-04-25T14:33:54.363Z · LW(p) · GW(p)

We can model success as a combination of doing useful things and avoiding making mistakes. As a particular example, we can model intellectual success as a combination of coming up with good ideas and avoiding bad ideas. I claim that rationality helps us avoid mistakes and bad ideas, but doesn’t help much in generating good ideas and useful work.

Eliezer Yudkowsky has made similar points in e.g. "Unteachable Excellence" [LW · GW] ("much of the most important information we can learn from history is about how to not lose, rather than how to win", "It's easier to avoid duplicating spectacular failures than to duplicate spectacular successes. And it's often easier to generalize failure between domains.") and "Teaching the Unteachable" [LW · GW].

comment by DanielFilan · 2019-04-25T04:12:59.699Z · LW(p) · GW(p)

We can model success as a combination of doing useful things and avoiding making mistakes. As a particular example, we can model intellectual success as a combination of coming up with good ideas and avoiding bad ideas. I claim that rationality helps us avoid mistakes and bad ideas, but doesn’t help much in generating good ideas and useful work.

I think this might be sort of right, but note that since plans are hierarchical, rationality can help you avoid mistakes (e.g. failing to spend 5 minutes thinking about good ways to do something important, or assuming that your first impression of a field is right) that would have prevented you from generating good ideas.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2019-04-25T05:40:46.530Z · LW(p) · GW(p)

I agree with this. I think the EA example I mentioned fits this pattern fairly well - the more rational you are, the more likely you are to consider what careers and cause areas actually lead to the outcomes you care about, and go into one of those. But then you need the different skill of actually being good at it.

comment by ChristianKl · 2019-04-25T16:45:00.280Z · LW(p) · GW(p)

It’s least useful in domains where making mistakes is a more effective way to learn than reasoning things out in advance, and so there’s less advantage in avoiding them.
[...]
It’s also less useful in domains where success requires a lot of dedicated work, and so having intrinsic motivation for that work is crucial.

It sounds to me like you equate instrumental rationality with long, explicit reasoning.

If you look at rationality!CFAR, there are many techniques that are of a different nature. Resolving internal conflicts via Gendlin's Focusing or Internal Double Crux is helpful for building intrinsic motivation.

comment by moses · 2019-04-25T10:14:40.647Z · LW(p) · GW(p)

I think my views are somewhat similar. Let me crosspost a comment I made in a private conversation a while ago:

I think the main reason why people are asking "Why aren't Rationalists winning?" is because Rationality was simply being oversold.

Yeah, seems like it. I was thinking: why would you expect rationality to make you exceptionally high status and high income?[1] And I think rationality was sold as general-purpose optimal decision-making, so once you have that, you can reach any goals which are theoretically reachable from your starting point by some hypothetical optimal decision-maker—and if not, that's only because the Art is not fully mature yet.

Now, in reality, rationality was something like:

  • a collection of mental movements centered around answering difficult/philosophical questions—with the soft implication that you should ingrain them, but not a clear guide on how (aside from CFAR workshops);
  • a mindset of transhumanism and literally-saving-the-world, doing-the-impossible ambition, delivered via powerfully motivational writing;
  • a community of (1) nerds who (2) pathologically overthink absolutely everything.

I definitely would expect rationalists to do better at some things than the reference class of {nerds who pathologically overthink everything}:

I would expect them not to get tripped up if explicitly prompted to consider confusing philosophical topics like meaning or free will, because the mental movement of {difficult philosophical question → activate Rationality™} is pretty easy and straightforward.

Same thing if they encounter, e.g., different political opinions or worldviews: I'd expect them to be much better at reconsidering their dogmas if, again, externally prompted. I'd even expect them to do better at evaluating strategies.

But I don't think there's a good reason to expect rationalists to do better unprompted—to have more unprompted imagination, creativity, to generate strategies—or to notice things better: their blind spots, additional dimensions in the solution space.

Rationality also won't help you with inherent traits like conscientiousness, recklessness, a tendency for leadership, or the biological component of charisma (beyond what reading self-help literature might do for you).

I also wouldn't expect rationalists to be able to dig their way through arbitrarily many layers of Resistance on their own. They might notice that they want to do a thing T and are not doing it, but then instead of doing it, they might start brainstorming ways how to make themselves do T. And then they might notice that they're overthinking things, but instead of doing T, they start thinking about how to stop overthinking and instead start doing. And then they might notice that and pat themselves on the back and everything and think, "hey, that would make a great post on LW", and so they write a post on LW about overthinking things instead of fucking doing the fucking thing already.

Rationality is great for critical thinking, for evaluating whatever inputs you get; so that helps you to productively consider good external ideas, not get tripped by bad ideas, and not waste your time being confused. In the ideal case. (It might even make you receptive to personal feedback in the extreme case. Depending on your personality traits, I guess.)

On the other hand, rationality doesn't help you with exactly those things that might lead to status and wealth: generating new ideas, changing your biological proclivities, noticing massive gaps in your epistemology, or overturning that heavily selected-for tendency to overthink and just stumbling ass-first out into the world and doing things.


  1. "High status and high income" is a definition of "winning" that you get if you read all the LW posts about "why aren't Rationalists winning?", look at what the author defines as "winning", then do an intersection of those. ↩︎

Replies from: moses, Viliam
comment by moses · 2019-04-25T10:22:43.588Z · LW(p) · GW(p)

In other words: Rationality (if used well) protects you against shooting your foot off, and almost everyone does shoot their foot off, so if you ask me, all the Rationalists who walk around with both their feet are winning hard at life, but having both feet doesn't automatically make you Jeff Bezos.

comment by Viliam · 2019-04-26T19:40:35.714Z · LW(p) · GW(p)

But I don't think there's a good reason to expect rationalists to do better unprompted—to have more unprompted imagination, creativity, to generate strategies—or to notice things better: their blind spots, additional dimensions in the solution space.

I wonder if it would help to build a habit around this. Something like dedicating 15 minutes every day to a rationalist ritual, which would contain tasks like "spend 5 minutes listing your current problems, 5 minutes choosing the most important one, and 5 minutes actually thinking about that problem".

Another task could be "here is a list of important topics in human life { health, wealth, relationships... }, spend 5 minutes writing a short idea for how to improve each of them, choose one topic, and spend 5 minutes expanding the idea into a specific plan". Or perhaps "make a list of your strengths, now think how you could apply them to your current problems" or "make a list of your weaknesses, now think how you could fix them at least a little" or... Seven tasks for seven days of the week. Or maybe six tasks, and one day should be spent reviewing the week and planning the next one.

The idea is to have a system that has a chance to give you the prompt to actually think about something.

Replies from: ricraz, moses
comment by Richard_Ngo (ricraz) · 2019-04-26T21:42:47.987Z · LW(p) · GW(p)

When I talk about doing useful work, I mean something much more substantial than what you outline above. Obviously 15 minutes every day thinking about your problems is helpful, but the people at the leading edges of most fields spend all day thinking about their problems.

Perhaps doing this ritual makes you think about the problem in a more meta way. If so, there's an empirical question about how much being meta can spark clever solutions. Here I have an intuition that it can, but when I look at any particular subfield that intuition becomes much weaker. How much could a leading mathematician gain by being more meta, for example?

Replies from: Viliam
comment by Viliam · 2019-04-27T20:03:58.495Z · LW(p) · GW(p)

I guess we are talking about two different things, both of them useful. One is excellence in a given field, where success could be described as "you got a Nobel Prize, a bunch of stuff is named after you, and kids learn your name in high school". The other is keeping all aspects of your life in good shape, where success could be described as "you lived until age 100, fit and mostly healthy, with a ton of money, surrounded by a harem of girlfriends". In other words, it can refer to being in the top 0.0001% of one thing, or in the top 1-10% at many things that matter personally.

One can be successful at both (I am thinking about Richard Feynman now), but it is also possible to excel at something while your life sucks otherwise, or to live a great life that leaves no impact on history.

My advice was specifically meant for the latter (the general goodness of personal life). I agree that achieving extraordinary results at one thing requires spending extraordinary amounts of time and attention on it. And you probably need to put emphasis on different rationality techniques; I assume that everyday life would benefit greatly from "spend 5 minutes actually thinking about it" (especially when it is a thing you habitually avoid thinking about), while scientists may benefit relatively more from recognizing "teachers' passwords" and "mysterious answers".

How much could a leading mathematician gain by being more meta, for example?

If you are leading, then what you are already doing works fine, and you don't need my advice. But in general, according to some rumors, category theory is the part of mathematics where you go more meta than usual. I am not going to pretend to have any actual knowledge in this area, though.

In physics, I believe it is sometimes fruitful (or at least it was, a few decades ago) to think about "the nature of the physical law". Like, instead of just trying to find a law that would explain the experimental results, looking at the already known laws, asking what they have in common, and using these parts as building blocks of the area you research. I am not an expert here, either.

In computer science, a simple example of going meta is "design patterns"; a more complex example would be thinking about programming languages and what their desirable traits are (as opposed to simply being an "X developer"), in extreme cases creating your own framework or programming language. Lisp or TeX would be among the high-status examples here, but even jQuery in its era revolutionized writing JavaScript code. You may want to be the kind of developer who looks at JavaScript and invents jQuery, or looks at book publishing and invents TeX.

comment by moses · 2019-04-27T15:01:49.838Z · LW(p) · GW(p)

Hm. Yes, rationality gave us such timeless techniques as "think about the problem for at least 5 minutes by the clock", but I'm saying that nothing in the LW canon helps you make sure that what you come up with in those 5 minutes will be useful.

Not to mention, this sounds to me like "trying to solve the problem" rather than "solving the problem" (more precisely, "acting out the role of someone making a dutiful attempt to solve the problem", I'm sure there's a Sequence post about this). I feel like people who want to do X (in the sense of the word "want" where it's an actual desire, no Elephant-in-the-brain bullshit) do X, so they don't have time to set timers to think about how to do X.

What I'm saying here about rationality is that it doesn't help you figure out, on your own, unprompted, whether what you're doing is acting out a role to yourself rather than taking action. (Meditation helps, just in case anyone thought I would ever shut up about meditation.)

But rationality does help you to swallow your pride and listen when someone else points it out to you, prompts you to think about it, which is why I think rationality is very useful.

I don't think you can devise a system for yourself which prompts you in this way, because the prompt must come from someone who sees the additional dimension of the solution space. They must point you to the additional dimension. That might be hard. Like explaining 3D to a 2-dimensional being.

On the other hand, pointing out when you're shooting yourself in the foot (e.g. eating unhealthy, not working out, spending money on bullshit) is easy for other people and rationality gives you the tools to listen and consider. Hence, rationality protects you against shooting yourself in the foot, because the information about health etc. is out there in abundance, most people just don't use their ears.

I might be just repeating myself over and over again, I don't know, anyway, these are the things that splosh around in my head.

Replies from: Viliam
comment by Viliam · 2019-04-27T20:18:45.535Z · LW(p) · GW(p)

I feel like people who want to do X (in the sense of the word "want" where it's an actual desire, no Elephant-in-the-brain bullshit) do X, so they don't have time to set timers to think about how to do X.

Yeah. When someone does not do X, they probably have a psychological problem, most likely involving lying to themselves. Setting up the timer won't make the problem go away. (The rebelling part of the brain will find a way to undermine the progress.) See a therapist instead, or change your peer group.

The proper moment to go meta is when you are already doing X, already achieving some outcomes, and your question is how to make the already existing process more efficient. Then, 5 minutes of thinking can make you realize e.g. that some parts of the process can be outsourced or done differently or skipped completely. Which can translate to immediate gains.

In other words, you should not go meta to skip doing your ABC, but rather to progress from ABC to D.

If instead you believe that by enough armchair thinking you can skip directly to Z, you are using "rationality" as a substitute for prayer. Also, as another excuse for why you are not moving your ass.

comment by Raemon · 2019-04-25T00:40:06.094Z · LW(p) · GW(p)

I personally define rationality as "the study of how to make robustly good decisions" (with epistemic and instrumental rationality being two arenas that you need to be skilled at for it to matter)

Some things I see rationality as being useful for are:

  • Performing robustly even if your situation changes. You can make a bunch of money by finding a locally good job and getting skills to do it pretty well, without relying much on rationality. But, if you are dissatisfied with the prospect that your situation might change (your job stops being lucrative, you stop being emotionally satisfied with it, you realize that you have other goals, you realize that the world might be about to end), you want rationality, so that you can figure out what other goals to pursue, and how to pursue them, and what facts about reality might help or hinder you.
  • Thinking about problems when it's hard to find good evidence
  • Gaining a deeper understanding of thinking, well enough to contribute to the cutting edge of how to think well, which can then either be used to think about particularly-tricky problems, or distilled down into more general lessons that can be applied with less effort.

For example, I'm glad someone studied Bayes' theorem well enough to distill it down into the two general heuristics of "remember base rates" and "remember relative likelihood ratios" when considering evidence. That person/people may or may not have gotten benefits for themselves commensurate with the effort they put in.
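
In the odds form of Bayes' theorem, those two heuristics correspond to the two factors on the right-hand side (a minimal sketch of the standard identity, not a claim about how anyone derived the heuristics):

```latex
% Odds form of Bayes' theorem:
% posterior odds = prior odds (base rates) x likelihood ratio
\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \underbrace{\frac{P(H)}{P(\neg H)}}_{\text{base rates (prior odds)}}
    \times
    \underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}}
```

Keeping the first factor in view is "remember base rates"; the second factor is all that the new evidence contributes.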

Replies from: romeostevensit
comment by romeostevensit · 2019-04-25T02:16:16.702Z · LW(p) · GW(p)

Escaping local minima by reasoning even when local evidence should keep you in it.

comment by romeostevensit · 2019-04-25T00:02:27.668Z · LW(p) · GW(p)

Many life skills don't show benefit until they become internalized enough to be deployed organically in response to circumstance. Probabilistic reasoning, factor analysis, noticing selection effects, noticing type errors, etc. are 'rationalist' examples of this, but it applies to many if not most skills.

https://en.wikipedia.org/wiki/Shuhari

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2019-04-25T05:39:54.137Z · LW(p) · GW(p)

This seems to be roughly orthogonal to what I'm claiming? Whether you get the benefits from rationality quickly or slowly is distinct from what those benefits actually are.

comment by norswap · 2019-04-30T14:45:54.466Z · LW(p) · GW(p)

I think rationality ought to encompass more than explicit decision making (and I think there is plenty of writing on this website that shows it does, even within the community).

If you think of instrumental rationality as the science of how to win, then it necessarily entails considering things like how to set up your environment, unthinking habits, and how to "hack" into your psyche/emotions.

Put otherwise, it seems you share your definition of Rationality with David Chapman (of https://meaningness.com/ ) — and I'm thinking of that + what he calls "meta-rationality".

So when is rationality relevant? Always! It's literally the science of how to make your life better / achieve your values.

Of course I'm setting that up by definition... And if you look at what's actually available community-wise, we still have a long way to go. But still, there is quite a bit of content about fundamental ways in which to improve, not all of which have to do with explicit decision making or an explicit step-by-step plan where each step is an action to carry out explicitly.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2019-05-01T01:27:55.792Z · LW(p) · GW(p)

So when is rationality relevant? Always! It's literally the science of how to make your life better / achieve your values.

Sometimes science isn't helpful or useful. The science of how music works may be totally irrelevant to actual musicians.

If you think of instrumental rationality as the science of how to win, then it necessarily entails considering things like how to set up your environment, unthinking habits, and how to "hack" into your psyche/emotions.

It's an empirical question when and whether these things are very useful; my post gives cases in which they are, and in which they aren't.

comment by Dagon · 2019-04-24T23:12:14.019Z · LW(p) · GW(p)

Interesting take on things. I think I'd want a more specific definition of "rationality" to really debate, but I'll make a few counter-arguments that the study and practice of rationality can improve one's capability to choose and achieve desirable outcomes.

"Doing good things and avoiding mistakes" doesn't really match my model, but let's run with it. I'll even grant that achieving this by luck (including the luck of having the right personality traits and being inexplicably drawn to the good things) is probably just as good as doing it by choice. I do _NOT_ grant that it happens by luck with the same probability as by choice (or by choice + luck). Some effort spent in determining which things are good, and in which things lead to more opportunity for good is going to be rewarded (statistically) with better outcomes.

The question you don't ask, but should, is "what does rationality cost, and in what cases is the cost higher than the benefit"? I'll grant that this set of cases may be non-empty.

I'll also wave at the recursion problem: "when is rationality useful" is a fundamentally rationalist question.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2019-04-26T21:46:24.590Z · LW(p) · GW(p)

Some effort spent in determining which things are good, and in which things lead to more opportunity for good is going to be rewarded (statistically) with better outcomes.

All else equal, do you think a rationalist mathematician will become more successful in their field than a non-rationalist mathematician? My guess is that if they spent the (fairly significant) time taken to learn and do rationalist things on just learning more maths, they'd do better.

(Here I'm ignoring the possibility that learning rationality makes them decide to leave the field).

I'll also wave at your wave at the recursion problem: "when is rationality useful" is a fundamentally rationalist question both in the sense of being philosophical, and in the sense that answering it is probably not very useful for actually improving your work in most fields.

Replies from: habryka4, DanielFilan
comment by habryka (habryka4) · 2019-04-27T00:54:19.202Z · LW(p) · GW(p)

My guess is that if they spent the (fairly significant) time taken to learn and do rationalist things on just learning more maths, they'd do better.

I would take a bet against that, and do think that studying top mathematicians roughly confirms that. My model is that many of the top mathematicians have very explicitly invested significant resources into metacognitive skills, and reflected a lot on the epistemology and methodology behind mathematical proofs.

The problem for resolving the bet would likely be what we define as "rationality" here, but I would say that someone who has written or thought explicitly for a significant fraction of their time about questions like "what kind of evidence is compelling to me?" and "what kind of cognitive strategies that I have tend to reliably cause me to make mistakes?" and "what concrete drills and practice exercises can I design to get better at deriving true conclusions from true premises?" would count as "studying rationality".

comment by DanielFilan · 2019-04-27T01:26:53.281Z · LW(p) · GW(p)

All else equal, do you think a rationalist mathematician will become more successful in their field than a non-rationalist mathematician?

This post by Jacob Steinhardt seems relevant: it's a sequence of models of research, and describes what good research strategies look like in them. He says, of the final model:

Before implementing this approach, I made little research progress for over a year; afterwards, I completed one project every four months on average. Other changes also contributed, but I expect the ideas here to at least double your productivity if you aren't already employing a similar process.

comment by Donald Hobson (donald-hobson) · 2019-04-30T11:58:00.552Z · LW(p) · GW(p)

This seems largely correct, so long as by "rationality" you mean the social movement: the sort of stuff taught on this website, within the context of human society and psychology. Human rationality would not apply to aliens or arbitrary AIs.

Some people use the word "rationality" to refer to the abstract logical structure of expected utility maximization, Bayesian updating, etc., as exemplified by AIXI; mathematical rationality does not have anything to do with humans in particular.

Your post is quite good at describing the usefulness of human rationality, although I would say it is more useful in research. Without being good at spotting wrong ideas, you can make a mistake on the first line and produce a lot of nonsense. (See most branches of philosophy, and all theology.)

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2019-05-01T01:31:40.027Z · LW(p) · GW(p)

Depends what type of research. If you're doing experimental cell biology, it's less likely that your research will be ruined by abstract philosophical assumptions which can't be overcome by looking at the data.

Replies from: ChristianKl, habryka4
comment by ChristianKl · 2019-05-06T13:47:09.712Z · LW(p) · GW(p)

Philosophical assumptions about what it means for a gene to have a given function aren't trivial. It's quite easy to fall into thinking of genes as platonic concepts with an inherent function, in a way that macro-phenomena in our world don't have.

If you are dealing with complex systems, it's quite easy to get misled by bad philosophical assumptions.

In bioinformatics, philosophers like Barry Smith have made very important contributions to thinking better about ontology.

Apart from ontology, epistemology is also hard. A lot of experimental cell biology papers don't replicate. Thinking well about epistemology would allow the field to reduce the number of papers that draw wrong conclusions from the data.

Distinguishing correlation from causation requires reasoning about the underlying reality and it's easy to get wrong.

comment by habryka (habryka4) · 2019-05-01T01:44:52.746Z · LW(p) · GW(p)

That actually seems false to me. My current model is that cell biology is more bottlenecked on good philosophical assumptions than empirical data. Just flagging disagreement, not necessary to hash this out.